mirror of https://github.com/armbian/build.git (synced 2025-09-20 13:11:10 +02:00)
diff --git a/Documentation/filesystems/directory-locking.rst b/Documentation/filesystems/directory-locking.rst
index de12016ee4198..e59fc830c9af4 100644
--- a/Documentation/filesystems/directory-locking.rst
+++ b/Documentation/filesystems/directory-locking.rst
@@ -22,12 +22,11 @@ exclusive.
3) object removal. Locking rules: caller locks parent, finds victim,
locks victim and calls the method. Locks are exclusive.

-4) rename() that is _not_ cross-directory. Locking rules: caller locks
-the parent and finds source and target. In case of exchange (with
-RENAME_EXCHANGE in flags argument) lock both. In any case,
-if the target already exists, lock it. If the source is a non-directory,
-lock it. If we need to lock both, lock them in inode pointer order.
-Then call the method. All locks are exclusive.
+4) rename() that is _not_ cross-directory. Locking rules: caller locks the
+parent and finds source and target. We lock both (provided they exist). If we
+need to lock two inodes of different type (dir vs non-dir), we lock directory
+first. If we need to lock two inodes of the same type, lock them in inode
+pointer order. Then call the method. All locks are exclusive.
NB: we might get away with locking the the source (and target in exchange
case) shared.

@@ -44,15 +43,17 @@ All locks are exclusive.
rules:

* lock the filesystem
- * lock parents in "ancestors first" order.
+ * lock parents in "ancestors first" order. If one is not ancestor of
+ the other, lock them in inode pointer order.
* find source and target.
* if old parent is equal to or is a descendent of target
fail with -ENOTEMPTY
* if new parent is equal to or is a descendent of source
fail with -ELOOP
- * If it's an exchange, lock both the source and the target.
- * If the target exists, lock it. If the source is a non-directory,
- lock it. If we need to lock both, do so in inode pointer order.
+ * Lock both the source and the target provided they exist. If we
+ need to lock two inodes of different type (dir vs non-dir), we lock
+ the directory first. If we need to lock two inodes of the same type,
+ lock them in inode pointer order.
* call the method.

All ->i_rwsem are taken exclusive. Again, we might get away with locking
@@ -66,8 +67,9 @@ If no directory is its own ancestor, the scheme above is deadlock-free.

Proof:

- First of all, at any moment we have a partial ordering of the
- objects - A < B iff A is an ancestor of B.
+ First of all, at any moment we have a linear ordering of the
+ objects - A < B iff (A is an ancestor of B) or (B is not an ancestor
+ of A and ptr(A) < ptr(B)).

That ordering can change. However, the following is true:
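As an illustrative aside to the "inode pointer order" rule introduced by this hunk, a minimal sketch of locking two same-type inodes (modelled on the kernel's existing lock_two_nondirectories() pattern; the helper name below is made up for illustration and is not code touched by this patch) could look like:

.. code-block:: c

   /* Illustrative sketch only: take ->i_rwsem on two inodes of the
    * same type in inode pointer order, per rule 4) above. */
   static void lock_pair_in_ptr_order(struct inode *a, struct inode *b)
   {
           if (a == b) {
                   inode_lock(a);          /* same object: lock once */
                   return;
           }
           if (b < a)                      /* order by pointer value */
                   swap(a, b);
           inode_lock(a);
           inode_lock_nested(b, I_MUTEX_NONDIR2);
   }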
diff --git a/Documentation/networking/af_xdp.rst b/Documentation/networking/af_xdp.rst
index 83f7ae5fc045e..09b3943b3b719 100644
--- a/Documentation/networking/af_xdp.rst
+++ b/Documentation/networking/af_xdp.rst
@@ -40,13 +40,13 @@ allocates memory for this UMEM using whatever means it feels is most
appropriate (malloc, mmap, huge pages, etc). This memory area is then
registered with the kernel using the new setsockopt XDP_UMEM_REG. The
UMEM also has two rings: the FILL ring and the COMPLETION ring. The
-fill ring is used by the application to send down addr for the kernel
+FILL ring is used by the application to send down addr for the kernel
to fill in with RX packet data. References to these frames will then
appear in the RX ring once each packet has been received. The
-completion ring, on the other hand, contains frame addr that the
+COMPLETION ring, on the other hand, contains frame addr that the
kernel has transmitted completely and can now be used again by user
space, for either TX or RX. Thus, the frame addrs appearing in the
-completion ring are addrs that were previously transmitted using the
+COMPLETION ring are addrs that were previously transmitted using the
TX ring. In summary, the RX and FILL rings are used for the RX path
and the TX and COMPLETION rings are used for the TX path.

@@ -91,11 +91,16 @@ Concepts
========

In order to use an AF_XDP socket, a number of associated objects need
-to be setup.
+to be setup. These objects and their options are explained in the
+following sections.

-Jonathan Corbet has also written an excellent article on LWN,
-"Accelerating networking with AF_XDP". It can be found at
-https://lwn.net/Articles/750845/.
+For an overview on how AF_XDP works, you can also take a look at the
+Linux Plumbers paper from 2018 on the subject:
+http://vger.kernel.org/lpc_net2018_talks/lpc18_paper_af_xdp_perf-v2.pdf. Do
+NOT consult the paper from 2017 on "AF_PACKET v4", the first attempt
+at AF_XDP. Nearly everything changed since then. Jonathan Corbet has
+also written an excellent article on LWN, "Accelerating networking
+with AF_XDP". It can be found at https://lwn.net/Articles/750845/.

UMEM
----
@@ -113,22 +118,22 @@ the next socket B can do this by setting the XDP_SHARED_UMEM flag in
struct sockaddr_xdp member sxdp_flags, and passing the file descriptor
of A to struct sockaddr_xdp member sxdp_shared_umem_fd.

-The UMEM has two single-producer/single-consumer rings, that are used
+The UMEM has two single-producer/single-consumer rings that are used
to transfer ownership of UMEM frames between the kernel and the
user-space application.

Rings
-----

-There are a four different kind of rings: Fill, Completion, RX and
+There are a four different kind of rings: FILL, COMPLETION, RX and
TX. All rings are single-producer/single-consumer, so the user-space
application need explicit synchronization of multiple
processes/threads are reading/writing to them.

-The UMEM uses two rings: Fill and Completion. Each socket associated
+The UMEM uses two rings: FILL and COMPLETION. Each socket associated
with the UMEM must have an RX queue, TX queue or both. Say, that there
is a setup with four sockets (all doing TX and RX). Then there will be
-one Fill ring, one Completion ring, four TX rings and four RX rings.
+one FILL ring, one COMPLETION ring, four TX rings and four RX rings.

The rings are head(producer)/tail(consumer) based rings. A producer
writes the data ring at the index pointed out by struct xdp_ring
@@ -146,7 +151,7 @@ The size of the rings need to be of size power of two.
UMEM Fill Ring
~~~~~~~~~~~~~~

-The Fill ring is used to transfer ownership of UMEM frames from
+The FILL ring is used to transfer ownership of UMEM frames from
user-space to kernel-space. The UMEM addrs are passed in the ring. As
an example, if the UMEM is 64k and each chunk is 4k, then the UMEM has
16 chunks and can pass addrs between 0 and 64k.
@@ -164,8 +169,8 @@ chunks mode, then the incoming addr will be left untouched.
UMEM Completion Ring
~~~~~~~~~~~~~~~~~~~~

-The Completion Ring is used transfer ownership of UMEM frames from
-kernel-space to user-space. Just like the Fill ring, UMEM indicies are
+The COMPLETION Ring is used transfer ownership of UMEM frames from
+kernel-space to user-space. Just like the FILL ring, UMEM indices are
used.

Frames passed from the kernel to user-space are frames that has been
@@ -181,7 +186,7 @@ The RX ring is the receiving side of a socket. Each entry in the ring
is a struct xdp_desc descriptor. The descriptor contains UMEM offset
(addr) and the length of the data (len).

-If no frames have been passed to kernel via the Fill ring, no
+If no frames have been passed to kernel via the FILL ring, no
descriptors will (or can) appear on the RX ring.

The user application consumes struct xdp_desc descriptors from this
@@ -199,8 +204,24 @@ be relaxed in the future.
The user application produces struct xdp_desc descriptors to this
ring.

+Libbpf
+======
+
+Libbpf is a helper library for eBPF and XDP that makes using these
+technologies a lot simpler. It also contains specific helper functions
+in tools/lib/bpf/xsk.h for facilitating the use of AF_XDP. It
+contains two types of functions: those that can be used to make the
+setup of AF_XDP socket easier and ones that can be used in the data
+plane to access the rings safely and quickly. To see an example on how
+to use this API, please take a look at the sample application in
+samples/bpf/xdpsock_usr.c which uses libbpf for both setup and data
+plane operations.
+
+We recommend that you use this library unless you have become a power
+user. It will make your program a lot simpler.
+
XSKMAP / BPF_MAP_TYPE_XSKMAP
-----------------------------
+============================

On XDP side there is a BPF map type BPF_MAP_TYPE_XSKMAP (XSKMAP) that
is used in conjunction with bpf_redirect_map() to pass the ingress
@@ -216,21 +237,193 @@ queue 17. Only the XDP program executing for eth0 and queue 17 will
successfully pass data to the socket. Please refer to the sample
application (samples/bpf/) in for an example.

+Configuration Flags and Socket Options
+======================================
+
+These are the various configuration flags that can be used to control
+and monitor the behavior of AF_XDP sockets.
+
+XDP_COPY and XDP_ZERO_COPY bind flags
+-------------------------------------
+
+When you bind to a socket, the kernel will first try to use zero-copy
+copy. If zero-copy is not supported, it will fall back on using copy
+mode, i.e. copying all packets out to user space. But if you would
+like to force a certain mode, you can use the following flags. If you
+pass the XDP_COPY flag to the bind call, the kernel will force the
+socket into copy mode. If it cannot use copy mode, the bind call will
+fail with an error. Conversely, the XDP_ZERO_COPY flag will force the
+socket into zero-copy mode or fail.
+
+XDP_SHARED_UMEM bind flag
+-------------------------
+
+This flag enables you to bind multiple sockets to the same UMEM, but
+only if they share the same queue id. In this mode, each socket has
+their own RX and TX rings, but the UMEM (tied to the fist socket
+created) only has a single FILL ring and a single COMPLETION
+ring. To use this mode, create the first socket and bind it in the normal
+way. Create a second socket and create an RX and a TX ring, or at
+least one of them, but no FILL or COMPLETION rings as the ones from
+the first socket will be used. In the bind call, set he
+XDP_SHARED_UMEM option and provide the initial socket's fd in the
+sxdp_shared_umem_fd field. You can attach an arbitrary number of extra
+sockets this way.
+
+What socket will then a packet arrive on? This is decided by the XDP
+program. Put all the sockets in the XSK_MAP and just indicate which
+index in the array you would like to send each packet to. A simple
+round-robin example of distributing packets is shown below:
+
+.. code-block:: c
+
+ #include <linux/bpf.h>
+ #include "bpf_helpers.h"
+
+ #define MAX_SOCKS 16
+
+ struct {
+ __uint(type, BPF_MAP_TYPE_XSKMAP);
+ __uint(max_entries, MAX_SOCKS);
+ __uint(key_size, sizeof(int));
+ __uint(value_size, sizeof(int));
+ } xsks_map SEC(".maps");
+
+ static unsigned int rr;
+
+ SEC("xdp_sock") int xdp_sock_prog(struct xdp_md *ctx)
+ {
+ rr = (rr + 1) & (MAX_SOCKS - 1);
+
+ return bpf_redirect_map(&xsks_map, rr, 0);
+ }
+
+Note, that since there is only a single set of FILL and COMPLETION
+rings, and they are single producer, single consumer rings, you need
+to make sure that multiple processes or threads do not use these rings
+concurrently. There are no synchronization primitives in the
+libbpf code that protects multiple users at this point in time.
+
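As a rough sketch of the bind step described above (not part of the patch; error handling and the ring mmap step are omitted, and fd_first, ifname and the 2048 ring size are placeholders), a second socket sharing the first socket's UMEM could be attached via the raw uapi like this:

.. code-block:: c

   #include <linux/if_xdp.h>
   #include <net/if.h>
   #include <sys/socket.h>

   /* fd_first is the already-bound socket that owns the UMEM. */
   int bind_shared(int fd_first, const char *ifname, __u32 queue_id)
   {
           int fd = socket(AF_XDP, SOCK_RAW, 0);
           int rx_size = 2048;
           struct sockaddr_xdp addr = {};

           /* Only an RX ring here; FILL/COMPLETION come from the first socket. */
           setsockopt(fd, SOL_XDP, XDP_RX_RING, &rx_size, sizeof(rx_size));

           addr.sxdp_family = AF_XDP;
           addr.sxdp_ifindex = if_nametoindex(ifname);
           addr.sxdp_queue_id = queue_id;
           addr.sxdp_flags = XDP_SHARED_UMEM;
           addr.sxdp_shared_umem_fd = fd_first;

           return bind(fd, (struct sockaddr *)&addr, sizeof(addr));
   }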
+XDP_USE_NEED_WAKEUP bind flag
+-----------------------------
+
+This option adds support for a new flag called need_wakeup that is
+present in the FILL ring and the TX ring, the rings for which user
+space is a producer. When this option is set in the bind call, the
+need_wakeup flag will be set if the kernel needs to be explicitly
+woken up by a syscall to continue processing packets. If the flag is
+zero, no syscall is needed.
+
+If the flag is set on the FILL ring, the application needs to call
+poll() to be able to continue to receive packets on the RX ring. This
+can happen, for example, when the kernel has detected that there are no
+more buffers on the FILL ring and no buffers left on the RX HW ring of
+the NIC. In this case, interrupts are turned off as the NIC cannot
+receive any packets (as there are no buffers to put them in), and the
+need_wakeup flag is set so that user space can put buffers on the
+FILL ring and then call poll() so that the kernel driver can put these
+buffers on the HW ring and start to receive packets.
+
+If the flag is set for the TX ring, it means that the application
+needs to explicitly notify the kernel to send any packets put on the
+TX ring. This can be accomplished either by a poll() call, as in the
+RX path, or by calling sendto().
+
+An example of how to use this flag can be found in
+samples/bpf/xdpsock_user.c. An example with the use of libbpf helpers
+would look like this for the TX path:
+
+.. code-block:: c
+
+ if (xsk_ring_prod__needs_wakeup(&my_tx_ring))
+ sendto(xsk_socket__fd(xsk_handle), NULL, 0, MSG_DONTWAIT, NULL, 0);
+
+I.e., only use the syscall if the flag is set.
+
+We recommend that you always enable this mode as it usually leads to
+better performance especially if you run the application and the
+driver on the same core, but also if you use different cores for the
+application and the kernel driver, as it reduces the number of
+syscalls needed for the TX path.
+
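For the FILL/RX side, the corresponding check (a sketch using the same libbpf helpers as the TX fragment above; my_fill_ring and xsk_handle are placeholder names) might look like:

.. code-block:: c

   /* After refilling the FILL ring, kick the kernel only if it asked for it. */
   if (xsk_ring_prod__needs_wakeup(&my_fill_ring)) {
           struct pollfd pfd = {
                   .fd = xsk_socket__fd(xsk_handle),
                   .events = POLLIN,
           };

           poll(&pfd, 1, 1000);    /* 1 second timeout, illustrative */
   }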
+XDP_{RX|TX|UMEM_FILL|UMEM_COMPLETION}_RING setsockopts
+------------------------------------------------------
+
+These setsockopts sets the number of descriptors that the RX, TX,
+FILL, and COMPLETION rings respectively should have. It is mandatory
+to set the size of at least one of the RX and TX rings. If you set
+both, you will be able to both receive and send traffic from your
+application, but if you only want to do one of them, you can save
+resources by only setting up one of them. Both the FILL ring and the
+COMPLETION ring are mandatory if you have a UMEM tied to your socket,
+which is the normal case. But if the XDP_SHARED_UMEM flag is used, any
+socket after the first one does not have a UMEM and should in that
+case not have any FILL or COMPLETION rings created.
+
+XDP_UMEM_REG setsockopt
+-----------------------
+
+This setsockopt registers a UMEM to a socket. This is the area that
+contain all the buffers that packet can recide in. The call takes a
+pointer to the beginning of this area and the size of it. Moreover, it
+also has parameter called chunk_size that is the size that the UMEM is
+divided into. It can only be 2K or 4K at the moment. If you have an
+UMEM area that is 128K and a chunk size of 2K, this means that you
+will be able to hold a maximum of 128K / 2K = 64 packets in your UMEM
+area and that your largest packet size can be 2K.
+
+There is also an option to set the headroom of each single buffer in
+the UMEM. If you set this to N bytes, it means that the packet will
+start N bytes into the buffer leaving the first N bytes for the
+application to use. The final option is the flags field, but it will
+be dealt with in separate sections for each UMEM flag.
+
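Sticking with the 128K/2K example above, a minimal UMEM registration through the raw uapi might look roughly like this (a sketch only, with placeholder sizes; libbpf's xsk_umem__create() wraps this same setsockopt):

.. code-block:: c

   #include <linux/if_xdp.h>
   #include <stdlib.h>
   #include <sys/socket.h>
   #include <unistd.h>

   #define UMEM_SIZE  (128 * 1024)   /* 128K area ...                  */
   #define CHUNK_SIZE 2048           /* ... in 2K chunks = 64 buffers  */

   int register_umem(int xsk_fd)
   {
           void *area;
           struct xdp_umem_reg mr = {};

           if (posix_memalign(&area, getpagesize(), UMEM_SIZE))
                   return -1;

           mr.addr = (__u64)(unsigned long)area;
           mr.len = UMEM_SIZE;
           mr.chunk_size = CHUNK_SIZE;
           mr.headroom = 0;

           return setsockopt(xsk_fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr));
   }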
+SO_BINDTODEVICE setsockopt
+--------------------------
+
+This is a generic SOL_SOCKET option that can be used to tie AF_XDP
+socket to a particular network interface. It is useful when a socket
+is created by a privileged process and passed to a non-privileged one.
+Once the option is set, kernel will refuse attempts to bind that socket
+to a different interface. Updating the value requires CAP_NET_RAW.
+
+XDP_STATISTICS getsockopt
+-------------------------
+
+Gets drop statistics of a socket that can be useful for debug
+purposes. The supported statistics are shown below:
+
+.. code-block:: c
+
+ struct xdp_statistics {
+ __u64 rx_dropped; /* Dropped for reasons other than invalid desc */
+ __u64 rx_invalid_descs; /* Dropped due to invalid descriptor */
+ __u64 tx_invalid_descs; /* Dropped due to invalid descriptor */
+ };
+
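Reading these counters is an ordinary getsockopt() call, roughly as follows (a sketch; xsk_fd stands for the AF_XDP socket's file descriptor):

.. code-block:: c

   struct xdp_statistics stats = {};
   socklen_t optlen = sizeof(stats);

   if (getsockopt(xsk_fd, SOL_XDP, XDP_STATISTICS, &stats, &optlen) == 0)
           printf("rx_dropped %llu rx_invalid %llu tx_invalid %llu\n",
                  (unsigned long long)stats.rx_dropped,
                  (unsigned long long)stats.rx_invalid_descs,
                  (unsigned long long)stats.tx_invalid_descs);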
+XDP_OPTIONS getsockopt
+----------------------
+
+Gets options from an XDP socket. The only one supported so far is
+XDP_OPTIONS_ZEROCOPY which tells you if zero-copy is on or not.
+
Usage
=====

-In order to use AF_XDP sockets there are two parts needed. The
+In order to use AF_XDP sockets two parts are needed. The
user-space application and the XDP program. For a complete setup and
usage example, please refer to the sample application. The user-space
side is xdpsock_user.c and the XDP side is part of libbpf.

-The XDP code sample included in tools/lib/bpf/xsk.c is the following::
+The XDP code sample included in tools/lib/bpf/xsk.c is the following:
+
+.. code-block:: c

SEC("xdp_sock") int xdp_sock_prog(struct xdp_md *ctx)
{
int index = ctx->rx_queue_index;

- // A set entry here means that the correspnding queue_id
+ // A set entry here means that the corresponding queue_id
// has an active AF_XDP socket bound to it.
if (bpf_map_lookup_elem(&xsks_map, &index))
return bpf_redirect_map(&xsks_map, index, 0);
@@ -238,7 +431,10 @@ The XDP code sample included in tools/lib/bpf/xsk.c is the following::
return XDP_PASS;
}

-Naive ring dequeue and enqueue could look like this::
+A simple but not so performance ring dequeue and enqueue could look
+like this:
+
+.. code-block:: c

// struct xdp_rxtx_ring {
// __u32 *producer;
@@ -287,17 +483,16 @@ Naive ring dequeue and enqueue could look like this::
return 0;
}

-
-For a more optimized version, please refer to the sample application.
+But please use the libbpf functions as they are optimized and ready to
+use. Will make your life easier.

Sample application
==================

There is a xdpsock benchmarking/test application included that
-demonstrates how to use AF_XDP sockets with both private and shared
-UMEMs. Say that you would like your UDP traffic from port 4242 to end
-up in queue 16, that we will enable AF_XDP on. Here, we use ethtool
-for this::
+demonstrates how to use AF_XDP sockets with private UMEMs. Say that
+you would like your UDP traffic from port 4242 to end up in queue 16,
+that we will enable AF_XDP on. Here, we use ethtool for this::

ethtool -N p3p2 rx-flow-hash udp4 fn
ethtool -N p3p2 flow-type udp4 src-port 4242 dst-port 4242 \
@@ -311,13 +506,18 @@ using::
For XDP_SKB mode, use the switch "-S" instead of "-N" and all options
can be displayed with "-h", as usual.

+This sample application uses libbpf to make the setup and usage of
+AF_XDP simpler. If you want to know how the raw uapi of AF_XDP is
+really used to make something more advanced, take a look at the libbpf
+code in tools/lib/bpf/xsk.[ch].
+
FAQ
=======

Q: I am not seeing any traffic on the socket. What am I doing wrong?

A: When a netdev of a physical NIC is initialized, Linux usually
- allocates one Rx and Tx queue pair per core. So on a 8 core system,
+ allocates one RX and TX queue pair per core. So on a 8 core system,
queue ids 0 to 7 will be allocated, one per core. In the AF_XDP
bind call or the xsk_socket__create libbpf function call, you
specify a specific queue id to bind to and it is only the traffic
@@ -343,9 +543,21 @@ A: When a netdev of a physical NIC is initialized, Linux usually
sudo ethtool -N <interface> flow-type udp4 src-port 4242 dst-port \
4242 action 2

- A number of other ways are possible all up to the capabilitites of
+ A number of other ways are possible all up to the capabilities of
the NIC you have.

+Q: Can I use the XSKMAP to implement a switch betwen different umems
+ in copy mode?
+
+A: The short answer is no, that is not supported at the moment. The
+ XSKMAP can only be used to switch traffic coming in on queue id X
+ to sockets bound to the same queue id X. The XSKMAP can contain
+ sockets bound to different queue ids, for example X and Y, but only
+ traffic goming in from queue id Y can be directed to sockets bound
+ to the same queue id Y. In zero-copy mode, you should use the
+ switch, or other distribution mechanism, in your NIC to direct
+ traffic to the correct queue id and socket.
+
Credits
=======
diff --git a/Makefile b/Makefile
index 43d62b7b0a001..0b17d6936c2f9 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 4
-SUBLEVEL = 250
+SUBLEVEL = 251
EXTRAVERSION =
NAME = Kleptomaniac Octopus
diff --git a/arch/arc/include/asm/linkage.h b/arch/arc/include/asm/linkage.h
|
|
index fe19f1d412e71..284fd513d7c67 100644
|
|
--- a/arch/arc/include/asm/linkage.h
|
|
+++ b/arch/arc/include/asm/linkage.h
|
|
@@ -8,6 +8,10 @@
|
|
|
|
#include <asm/dwarf.h>
|
|
|
|
+#define ASM_NL ` /* use '`' to mark new line in macro */
|
|
+#define __ALIGN .align 4
|
|
+#define __ALIGN_STR __stringify(__ALIGN)
|
|
+
|
|
#ifdef __ASSEMBLY__
|
|
|
|
.macro ST2 e, o, off
|
|
@@ -28,10 +32,6 @@
|
|
#endif
|
|
.endm
|
|
|
|
-#define ASM_NL ` /* use '`' to mark new line in macro */
|
|
-#define __ALIGN .align 4
|
|
-#define __ALIGN_STR __stringify(__ALIGN)
|
|
-
|
|
/* annotation for data we want in DCCM - if enabled in .config */
|
|
.macro ARCFP_DATA nm
|
|
#ifdef CONFIG_ARC_HAS_DCCM
|
|
diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
|
|
index 05d67f9769118..bf8154aa203a7 100644
|
|
--- a/arch/arm/boot/dts/bcm5301x.dtsi
|
|
+++ b/arch/arm/boot/dts/bcm5301x.dtsi
|
|
@@ -511,7 +511,6 @@
|
|
"spi_lr_session_done",
|
|
"spi_lr_overread";
|
|
clocks = <&iprocmed>;
|
|
- clock-names = "iprocmed";
|
|
num-cs = <2>;
|
|
#address-cells = <1>;
|
|
#size-cells = <0>;
|
|
diff --git a/arch/arm/boot/dts/omap3-gta04a5one.dts b/arch/arm/boot/dts/omap3-gta04a5one.dts
|
|
index 9db9fe67cd63b..95df45cc70c09 100644
|
|
--- a/arch/arm/boot/dts/omap3-gta04a5one.dts
|
|
+++ b/arch/arm/boot/dts/omap3-gta04a5one.dts
|
|
@@ -5,9 +5,11 @@
|
|
|
|
#include "omap3-gta04a5.dts"
|
|
|
|
-&omap3_pmx_core {
|
|
+/ {
|
|
model = "Goldelico GTA04A5/Letux 2804 with OneNAND";
|
|
+};
|
|
|
|
+&omap3_pmx_core {
|
|
gpmc_pins: pinmux_gpmc_pins {
|
|
pinctrl-single,pins = <
|
|
|
|
diff --git a/arch/arm/mach-ep93xx/timer-ep93xx.c b/arch/arm/mach-ep93xx/timer-ep93xx.c
|
|
index de998830f534f..b07956883e165 100644
|
|
--- a/arch/arm/mach-ep93xx/timer-ep93xx.c
|
|
+++ b/arch/arm/mach-ep93xx/timer-ep93xx.c
|
|
@@ -9,6 +9,7 @@
|
|
#include <linux/io.h>
|
|
#include <asm/mach/time.h>
|
|
#include "soc.h"
|
|
+#include "platform.h"
|
|
|
|
/*************************************************************************
|
|
* Timer handling for EP93xx
|
|
@@ -60,7 +61,7 @@ static u64 notrace ep93xx_read_sched_clock(void)
|
|
return ret;
|
|
}
|
|
|
|
-u64 ep93xx_clocksource_read(struct clocksource *c)
|
|
+static u64 ep93xx_clocksource_read(struct clocksource *c)
|
|
{
|
|
u64 ret;
|
|
|
|
diff --git a/arch/arm/mach-orion5x/board-dt.c b/arch/arm/mach-orion5x/board-dt.c
|
|
index 3d36f1d951964..3f651df3a71cf 100644
|
|
--- a/arch/arm/mach-orion5x/board-dt.c
|
|
+++ b/arch/arm/mach-orion5x/board-dt.c
|
|
@@ -63,6 +63,9 @@ static void __init orion5x_dt_init(void)
|
|
if (of_machine_is_compatible("maxtor,shared-storage-2"))
|
|
mss2_init();
|
|
|
|
+ if (of_machine_is_compatible("lacie,d2-network"))
|
|
+ d2net_init();
|
|
+
|
|
of_platform_default_populate(NULL, orion5x_auxdata_lookup, NULL);
|
|
}
|
|
|
|
diff --git a/arch/arm/mach-orion5x/common.h b/arch/arm/mach-orion5x/common.h
|
|
index eb96009e21c4c..b9cfdb4564568 100644
|
|
--- a/arch/arm/mach-orion5x/common.h
|
|
+++ b/arch/arm/mach-orion5x/common.h
|
|
@@ -75,6 +75,12 @@ extern void mss2_init(void);
|
|
static inline void mss2_init(void) {}
|
|
#endif
|
|
|
|
+#ifdef CONFIG_MACH_D2NET_DT
|
|
+void d2net_init(void);
|
|
+#else
|
|
+static inline void d2net_init(void) {}
|
|
+#endif
|
|
+
|
|
/*****************************************************************************
|
|
* Helpers to access Orion registers
|
|
****************************************************************************/
|
|
diff --git a/arch/arm/probes/kprobes/checkers-common.c b/arch/arm/probes/kprobes/checkers-common.c
|
|
index 4d720990cf2a3..eba7ac4725c02 100644
|
|
--- a/arch/arm/probes/kprobes/checkers-common.c
|
|
+++ b/arch/arm/probes/kprobes/checkers-common.c
|
|
@@ -40,7 +40,7 @@ enum probes_insn checker_stack_use_imm_0xx(probes_opcode_t insn,
|
|
* Different from other insn uses imm8, the real addressing offset of
|
|
* STRD in T32 encoding should be imm8 * 4. See ARMARM description.
|
|
*/
|
|
-enum probes_insn checker_stack_use_t32strd(probes_opcode_t insn,
|
|
+static enum probes_insn checker_stack_use_t32strd(probes_opcode_t insn,
|
|
struct arch_probes_insn *asi,
|
|
const struct decode_header *h)
|
|
{
|
|
diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c
|
|
index 0a783bd4641c5..44b5f7dbcc00f 100644
|
|
--- a/arch/arm/probes/kprobes/core.c
|
|
+++ b/arch/arm/probes/kprobes/core.c
|
|
@@ -231,7 +231,7 @@ singlestep(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb)
|
|
* kprobe, and that level is reserved for user kprobe handlers, so we can't
|
|
* risk encountering a new kprobe in an interrupt handler.
|
|
*/
|
|
-void __kprobes kprobe_handler(struct pt_regs *regs)
|
|
+static void __kprobes kprobe_handler(struct pt_regs *regs)
|
|
{
|
|
struct kprobe *p, *cur;
|
|
struct kprobe_ctlblk *kcb;
|
|
diff --git a/arch/arm/probes/kprobes/opt-arm.c b/arch/arm/probes/kprobes/opt-arm.c
|
|
index c78180172120f..e20304f1d8bc9 100644
|
|
--- a/arch/arm/probes/kprobes/opt-arm.c
|
|
+++ b/arch/arm/probes/kprobes/opt-arm.c
|
|
@@ -145,8 +145,6 @@ __arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
|
|
}
|
|
}
|
|
|
|
-extern void kprobe_handler(struct pt_regs *regs);
|
|
-
|
|
static void
|
|
optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
|
|
{
|
|
diff --git a/arch/arm/probes/kprobes/test-core.c b/arch/arm/probes/kprobes/test-core.c
|
|
index c562832b86272..171c7076b89f4 100644
|
|
--- a/arch/arm/probes/kprobes/test-core.c
|
|
+++ b/arch/arm/probes/kprobes/test-core.c
|
|
@@ -720,7 +720,7 @@ static const char coverage_register_lookup[16] = {
|
|
[REG_TYPE_NOSPPCX] = COVERAGE_ANY_REG | COVERAGE_SP,
|
|
};
|
|
|
|
-unsigned coverage_start_registers(const struct decode_header *h)
|
|
+static unsigned coverage_start_registers(const struct decode_header *h)
|
|
{
|
|
unsigned regs = 0;
|
|
int i;
|
|
diff --git a/arch/arm/probes/kprobes/test-core.h b/arch/arm/probes/kprobes/test-core.h
|
|
index 19a5b2add41e1..805116c2ec27c 100644
|
|
--- a/arch/arm/probes/kprobes/test-core.h
|
|
+++ b/arch/arm/probes/kprobes/test-core.h
|
|
@@ -453,3 +453,7 @@ void kprobe_thumb32_test_cases(void);
|
|
#else
|
|
void kprobe_arm_test_cases(void);
|
|
#endif
|
|
+
|
|
+void __kprobes_test_case_start(void);
|
|
+void __kprobes_test_case_end_16(void);
|
|
+void __kprobes_test_case_end_32(void);
|
|
diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
|
|
index 301c1c467c0b7..bf40500adef73 100644
|
|
--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
|
|
+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
|
|
@@ -1451,7 +1451,7 @@
|
|
};
|
|
};
|
|
|
|
- camss: camss@1b00000 {
|
|
+ camss: camss@1b0ac00 {
|
|
compatible = "qcom,msm8916-camss";
|
|
reg = <0x1b0ac00 0x200>,
|
|
<0x1b00030 0x4>,
|
|
diff --git a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
|
|
index 202177706cdeb..df00acb35263d 100644
|
|
--- a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
|
|
+++ b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
|
|
@@ -269,7 +269,7 @@
|
|
};
|
|
|
|
scif1_pins: scif1 {
|
|
- groups = "scif1_data_b", "scif1_ctrl";
|
|
+ groups = "scif1_data_b";
|
|
function = "scif1";
|
|
};
|
|
|
|
@@ -329,7 +329,6 @@
|
|
&scif1 {
|
|
pinctrl-0 = <&scif1_pins>;
|
|
pinctrl-names = "default";
|
|
- uart-has-rtscts;
|
|
|
|
status = "okay";
|
|
};
|
|
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
|
|
index 5cf575f23af28..8e934bb44f12e 100644
|
|
--- a/arch/arm64/mm/mmu.c
|
|
+++ b/arch/arm64/mm/mmu.c
|
|
@@ -399,7 +399,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
|
|
static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
|
|
phys_addr_t size, pgprot_t prot)
|
|
{
|
|
- if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
|
|
+ if (virt < PAGE_OFFSET) {
|
|
pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
|
|
&phys, virt);
|
|
return;
|
|
@@ -426,7 +426,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
|
|
static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
|
|
phys_addr_t size, pgprot_t prot)
|
|
{
|
|
- if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
|
|
+ if (virt < PAGE_OFFSET) {
|
|
pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
|
|
&phys, virt);
|
|
return;
|
|
diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
|
|
index 2ca9114fcf002..0c8436b06c494 100644
|
|
--- a/arch/powerpc/Kconfig.debug
|
|
+++ b/arch/powerpc/Kconfig.debug
|
|
@@ -234,7 +234,7 @@ config PPC_EARLY_DEBUG_40x
|
|
|
|
config PPC_EARLY_DEBUG_CPM
|
|
bool "Early serial debugging for Freescale CPM-based serial ports"
|
|
- depends on SERIAL_CPM
|
|
+ depends on SERIAL_CPM=y
|
|
help
|
|
Select this to enable early debugging for Freescale chips
|
|
using a CPM-based serial port. This assumes that the bootwrapper
|
|
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
|
|
index 6c32ea6dc7558..77649f4cc9453 100644
|
|
--- a/arch/powerpc/Makefile
|
|
+++ b/arch/powerpc/Makefile
|
|
@@ -425,3 +425,11 @@ checkbin:
|
|
echo -n '*** Please use a different binutils version.' ; \
|
|
false ; \
|
|
fi
|
|
+ @if test "x${CONFIG_FTRACE_MCOUNT_USE_RECORDMCOUNT}" = "xy" -a \
|
|
+ "x${CONFIG_LD_IS_BFD}" = "xy" -a \
|
|
+ "${CONFIG_LD_VERSION}" = "23700" ; then \
|
|
+ echo -n '*** binutils 2.37 drops unused section symbols, which recordmcount ' ; \
|
|
+ echo 'is unable to handle.' ; \
|
|
+ echo '*** Please use a different binutils version.' ; \
|
|
+ false ; \
|
|
+ fi
|
|
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
|
|
index 210f1c28b8e41..e4fb5ab41e2d3 100644
|
|
--- a/arch/powerpc/mm/init_64.c
|
|
+++ b/arch/powerpc/mm/init_64.c
|
|
@@ -178,7 +178,7 @@ static bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long star
|
|
unsigned long nr_pfn = page_size / sizeof(struct page);
|
|
unsigned long start_pfn = page_to_pfn((struct page *)start);
|
|
|
|
- if ((start_pfn + nr_pfn) > altmap->end_pfn)
|
|
+ if ((start_pfn + nr_pfn - 1) > altmap->end_pfn)
|
|
return true;
|
|
|
|
if (start_pfn < altmap->base_pfn)
|
|
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
|
|
index 9ade970b4232c..b11eb11e2f499 100644
|
|
--- a/arch/s390/kvm/kvm-s390.c
|
|
+++ b/arch/s390/kvm/kvm-s390.c
|
|
@@ -1982,6 +1982,10 @@ static unsigned long kvm_s390_next_dirty_cmma(struct kvm_memslots *slots,
|
|
ms = slots->memslots + slotidx;
|
|
ofs = 0;
|
|
}
|
|
+
|
|
+ if (cur_gfn < ms->base_gfn)
|
|
+ ofs = 0;
|
|
+
|
|
ofs = find_next_bit(kvm_second_dirty_bitmap(ms), ms->npages, ofs);
|
|
while ((slotidx > 0) && (ofs >= ms->npages)) {
|
|
slotidx--;
|
|
diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
|
|
index 2021946176de8..596b2a2cd837d 100644
|
|
--- a/arch/s390/kvm/vsie.c
|
|
+++ b/arch/s390/kvm/vsie.c
|
|
@@ -168,7 +168,8 @@ static int setup_apcb00(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
|
|
sizeof(struct kvm_s390_apcb0)))
|
|
return -EFAULT;
|
|
|
|
- bitmap_and(apcb_s, apcb_s, apcb_h, sizeof(struct kvm_s390_apcb0));
|
|
+ bitmap_and(apcb_s, apcb_s, apcb_h,
|
|
+ BITS_PER_BYTE * sizeof(struct kvm_s390_apcb0));
|
|
|
|
return 0;
|
|
}
|
|
@@ -190,7 +191,8 @@ static int setup_apcb11(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
|
|
sizeof(struct kvm_s390_apcb1)))
|
|
return -EFAULT;
|
|
|
|
- bitmap_and(apcb_s, apcb_s, apcb_h, sizeof(struct kvm_s390_apcb1));
|
|
+ bitmap_and(apcb_s, apcb_s, apcb_h,
|
|
+ BITS_PER_BYTE * sizeof(struct kvm_s390_apcb1));
|
|
|
|
return 0;
|
|
}
|
|
diff --git a/arch/sh/drivers/dma/dma-sh.c b/arch/sh/drivers/dma/dma-sh.c
|
|
index 96c626c2cd0a4..306fba1564e5e 100644
|
|
--- a/arch/sh/drivers/dma/dma-sh.c
|
|
+++ b/arch/sh/drivers/dma/dma-sh.c
|
|
@@ -18,6 +18,18 @@
|
|
#include <cpu/dma-register.h>
|
|
#include <cpu/dma.h>
|
|
|
|
+/*
|
|
+ * Some of the SoCs feature two DMAC modules. In such a case, the channels are
|
|
+ * distributed equally among them.
|
|
+ */
|
|
+#ifdef SH_DMAC_BASE1
|
|
+#define SH_DMAC_NR_MD_CH (CONFIG_NR_ONCHIP_DMA_CHANNELS / 2)
|
|
+#else
|
|
+#define SH_DMAC_NR_MD_CH CONFIG_NR_ONCHIP_DMA_CHANNELS
|
|
+#endif
|
|
+
|
|
+#define SH_DMAC_CH_SZ 0x10
|
|
+
|
|
/*
|
|
* Define the default configuration for dual address memory-memory transfer.
|
|
* The 0x400 value represents auto-request, external->external.
|
|
@@ -29,7 +41,7 @@ static unsigned long dma_find_base(unsigned int chan)
|
|
unsigned long base = SH_DMAC_BASE0;
|
|
|
|
#ifdef SH_DMAC_BASE1
|
|
- if (chan >= 6)
|
|
+ if (chan >= SH_DMAC_NR_MD_CH)
|
|
base = SH_DMAC_BASE1;
|
|
#endif
|
|
|
|
@@ -40,13 +52,13 @@ static unsigned long dma_base_addr(unsigned int chan)
|
|
{
|
|
unsigned long base = dma_find_base(chan);
|
|
|
|
- /* Normalize offset calculation */
|
|
- if (chan >= 9)
|
|
- chan -= 6;
|
|
- if (chan >= 4)
|
|
- base += 0x10;
|
|
+ chan = (chan % SH_DMAC_NR_MD_CH) * SH_DMAC_CH_SZ;
|
|
+
|
|
+ /* DMAOR is placed inside the channel register space. Step over it. */
|
|
+ if (chan >= DMAOR)
|
|
+ base += SH_DMAC_CH_SZ;
|
|
|
|
- return base + (chan * 0x10);
|
|
+ return base + chan;
|
|
}
|
|
|
|
#ifdef CONFIG_SH_DMA_IRQ_MULTI
|
|
@@ -250,12 +262,11 @@ static int sh_dmac_get_dma_residue(struct dma_channel *chan)
|
|
#define NR_DMAOR 1
|
|
#endif
|
|
|
|
-/*
|
|
- * DMAOR bases are broken out amongst channel groups. DMAOR0 manages
|
|
- * channels 0 - 5, DMAOR1 6 - 11 (optional).
|
|
- */
|
|
-#define dmaor_read_reg(n) __raw_readw(dma_find_base((n)*6))
|
|
-#define dmaor_write_reg(n, data) __raw_writew(data, dma_find_base(n)*6)
|
|
+#define dmaor_read_reg(n) __raw_readw(dma_find_base((n) * \
|
|
+ SH_DMAC_NR_MD_CH) + DMAOR)
|
|
+#define dmaor_write_reg(n, data) __raw_writew(data, \
|
|
+ dma_find_base((n) * \
|
|
+ SH_DMAC_NR_MD_CH) + DMAOR)
|
|
|
|
static inline int dmaor_reset(int no)
|
|
{
|
|
diff --git a/arch/sh/kernel/cpu/sh2/probe.c b/arch/sh/kernel/cpu/sh2/probe.c
|
|
index d342ea08843f6..70a07f4f2142f 100644
|
|
--- a/arch/sh/kernel/cpu/sh2/probe.c
|
|
+++ b/arch/sh/kernel/cpu/sh2/probe.c
|
|
@@ -21,7 +21,7 @@ static int __init scan_cache(unsigned long node, const char *uname,
|
|
if (!of_flat_dt_is_compatible(node, "jcore,cache"))
|
|
return 0;
|
|
|
|
- j2_ccr_base = (u32 __iomem *)of_flat_dt_translate_address(node);
|
|
+ j2_ccr_base = ioremap(of_flat_dt_translate_address(node), 4);
|
|
|
|
return 1;
|
|
}
|
|
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
|
|
index 0e4f14dae1c05..91016bb18d4f9 100644
|
|
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
|
|
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
|
|
@@ -593,6 +593,18 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
|
|
return 0;
|
|
}
|
|
|
|
+static bool is_closid_match(struct task_struct *t, struct rdtgroup *r)
|
|
+{
|
|
+ return (rdt_alloc_capable &&
|
|
+ (r->type == RDTCTRL_GROUP) && (t->closid == r->closid));
|
|
+}
|
|
+
|
|
+static bool is_rmid_match(struct task_struct *t, struct rdtgroup *r)
|
|
+{
|
|
+ return (rdt_mon_capable &&
|
|
+ (r->type == RDTMON_GROUP) && (t->rmid == r->mon.rmid));
|
|
+}
|
|
+
|
|
/**
|
|
* rdtgroup_tasks_assigned - Test if tasks have been assigned to resource group
|
|
* @r: Resource group
|
|
@@ -608,8 +620,7 @@ int rdtgroup_tasks_assigned(struct rdtgroup *r)
|
|
|
|
rcu_read_lock();
|
|
for_each_process_thread(p, t) {
|
|
- if ((r->type == RDTCTRL_GROUP && t->closid == r->closid) ||
|
|
- (r->type == RDTMON_GROUP && t->rmid == r->mon.rmid)) {
|
|
+ if (is_closid_match(t, r) || is_rmid_match(t, r)) {
|
|
ret = 1;
|
|
break;
|
|
}
|
|
@@ -704,12 +715,15 @@ unlock:
|
|
static void show_rdt_tasks(struct rdtgroup *r, struct seq_file *s)
|
|
{
|
|
struct task_struct *p, *t;
|
|
+ pid_t pid;
|
|
|
|
rcu_read_lock();
|
|
for_each_process_thread(p, t) {
|
|
- if ((r->type == RDTCTRL_GROUP && t->closid == r->closid) ||
|
|
- (r->type == RDTMON_GROUP && t->rmid == r->mon.rmid))
|
|
- seq_printf(s, "%d\n", t->pid);
|
|
+ if (is_closid_match(t, r) || is_rmid_match(t, r)) {
|
|
+ pid = task_pid_vnr(t);
|
|
+ if (pid)
|
|
+ seq_printf(s, "%d\n", pid);
|
|
+ }
|
|
}
|
|
rcu_read_unlock();
|
|
}
|
|
@@ -2148,18 +2162,6 @@ static int reset_all_ctrls(struct rdt_resource *r)
|
|
return 0;
|
|
}
|
|
|
|
-static bool is_closid_match(struct task_struct *t, struct rdtgroup *r)
|
|
-{
|
|
- return (rdt_alloc_capable &&
|
|
- (r->type == RDTCTRL_GROUP) && (t->closid == r->closid));
|
|
-}
|
|
-
|
|
-static bool is_rmid_match(struct task_struct *t, struct rdtgroup *r)
|
|
-{
|
|
- return (rdt_mon_capable &&
|
|
- (r->type == RDTMON_GROUP) && (t->rmid == r->mon.rmid));
|
|
-}
|
|
-
|
|
/*
|
|
* Move tasks from one to the other group. If @from is NULL, then all tasks
|
|
* in the systems are moved unconditionally (used for teardown).
|
|
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
|
|
index 1699d18bd1548..45e5ecb43393b 100644
|
|
--- a/arch/x86/kernel/smpboot.c
|
|
+++ b/arch/x86/kernel/smpboot.c
|
|
@@ -99,6 +99,17 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
|
|
DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
|
|
EXPORT_PER_CPU_SYMBOL(cpu_info);
|
|
|
|
+struct mwait_cpu_dead {
|
|
+ unsigned int control;
|
|
+ unsigned int status;
|
|
+};
|
|
+
|
|
+/*
|
|
+ * Cache line aligned data for mwait_play_dead(). Separate on purpose so
|
|
+ * that it's unlikely to be touched by other CPUs.
|
|
+ */
|
|
+static DEFINE_PER_CPU_ALIGNED(struct mwait_cpu_dead, mwait_cpu_dead);
|
|
+
|
|
/* Logical package management. We might want to allocate that dynamically */
|
|
unsigned int __max_logical_packages __read_mostly;
|
|
EXPORT_SYMBOL(__max_logical_packages);
|
|
@@ -1675,10 +1686,10 @@ static bool wakeup_cpu0(void)
|
|
*/
|
|
static inline void mwait_play_dead(void)
|
|
{
|
|
+ struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead);
|
|
unsigned int eax, ebx, ecx, edx;
|
|
unsigned int highest_cstate = 0;
|
|
unsigned int highest_subcstate = 0;
|
|
- void *mwait_ptr;
|
|
int i;
|
|
|
|
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
|
|
@@ -1713,13 +1724,6 @@ static inline void mwait_play_dead(void)
|
|
(highest_subcstate - 1);
|
|
}
|
|
|
|
- /*
|
|
- * This should be a memory location in a cache line which is
|
|
- * unlikely to be touched by other processors. The actual
|
|
- * content is immaterial as it is not actually modified in any way.
|
|
- */
|
|
- mwait_ptr = ¤t_thread_info()->flags;
|
|
-
|
|
wbinvd();
|
|
|
|
while (1) {
|
|
@@ -1731,9 +1735,9 @@ static inline void mwait_play_dead(void)
|
|
* case where we return around the loop.
|
|
*/
|
|
mb();
|
|
- clflush(mwait_ptr);
|
|
+ clflush(md);
|
|
mb();
|
|
- __monitor(mwait_ptr, 0, 0);
|
|
+ __monitor(md, 0, 0);
|
|
mb();
|
|
__mwait(eax, 0);
|
|
/*
|
|
diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
|
|
index fa9f3893b0021..cbca91bb5334a 100644
|
|
--- a/arch/xtensa/platforms/iss/network.c
|
|
+++ b/arch/xtensa/platforms/iss/network.c
|
|
@@ -231,7 +231,7 @@ static int tuntap_probe(struct iss_net_private *lp, int index, char *init)
|
|
|
|
init += sizeof(TRANSPORT_TUNTAP_NAME) - 1;
|
|
if (*init == ',') {
|
|
- rem = split_if_spec(init + 1, &mac_str, &dev_name);
|
|
+ rem = split_if_spec(init + 1, &mac_str, &dev_name, NULL);
|
|
if (rem != NULL) {
|
|
pr_err("%s: extra garbage on specification : '%s'\n",
|
|
dev->name, rem);
|
|
diff --git a/block/partitions/amiga.c b/block/partitions/amiga.c
|
|
index 560936617d9c1..484a51bce39b5 100644
|
|
--- a/block/partitions/amiga.c
|
|
+++ b/block/partitions/amiga.c
|
|
@@ -11,11 +11,19 @@
|
|
#define pr_fmt(fmt) fmt
|
|
|
|
#include <linux/types.h>
|
|
+#include <linux/mm_types.h>
|
|
+#include <linux/overflow.h>
|
|
#include <linux/affs_hardblocks.h>
|
|
|
|
#include "check.h"
|
|
#include "amiga.h"
|
|
|
|
+/* magic offsets in partition DosEnvVec */
|
|
+#define NR_HD 3
|
|
+#define NR_SECT 5
|
|
+#define LO_CYL 9
|
|
+#define HI_CYL 10
|
|
+
|
|
static __inline__ u32
|
|
checksum_block(__be32 *m, int size)
|
|
{
|
|
@@ -32,8 +40,12 @@ int amiga_partition(struct parsed_partitions *state)
|
|
unsigned char *data;
|
|
struct RigidDiskBlock *rdb;
|
|
struct PartitionBlock *pb;
|
|
- int start_sect, nr_sects, blk, part, res = 0;
|
|
- int blksize = 1; /* Multiplier for disk block size */
|
|
+ u64 start_sect, nr_sects;
|
|
+ sector_t blk, end_sect;
|
|
+ u32 cylblk; /* rdb_CylBlocks = nr_heads*sect_per_track */
|
|
+ u32 nr_hd, nr_sect, lo_cyl, hi_cyl;
|
|
+ int part, res = 0;
|
|
+ unsigned int blksize = 1; /* Multiplier for disk block size */
|
|
int slot = 1;
|
|
char b[BDEVNAME_SIZE];
|
|
|
|
@@ -43,7 +55,7 @@ int amiga_partition(struct parsed_partitions *state)
|
|
data = read_part_sector(state, blk, §);
|
|
if (!data) {
|
|
if (warn_no_part)
|
|
- pr_err("Dev %s: unable to read RDB block %d\n",
|
|
+ pr_err("Dev %s: unable to read RDB block %llu\n",
|
|
bdevname(state->bdev, b), blk);
|
|
res = -1;
|
|
goto rdb_done;
|
|
@@ -60,12 +72,12 @@ int amiga_partition(struct parsed_partitions *state)
|
|
*(__be32 *)(data+0xdc) = 0;
|
|
if (checksum_block((__be32 *)data,
|
|
be32_to_cpu(rdb->rdb_SummedLongs) & 0x7F)==0) {
|
|
- pr_err("Trashed word at 0xd0 in block %d ignored in checksum calculation\n",
|
|
+ pr_err("Trashed word at 0xd0 in block %llu ignored in checksum calculation\n",
|
|
blk);
|
|
break;
|
|
}
|
|
|
|
- pr_err("Dev %s: RDB in block %d has bad checksum\n",
|
|
+ pr_err("Dev %s: RDB in block %llu has bad checksum\n",
|
|
bdevname(state->bdev, b), blk);
|
|
}
|
|
|
|
@@ -81,12 +93,17 @@ int amiga_partition(struct parsed_partitions *state)
|
|
}
|
|
blk = be32_to_cpu(rdb->rdb_PartitionList);
|
|
put_dev_sector(sect);
|
|
- for (part = 1; blk>0 && part<=16; part++, put_dev_sector(sect)) {
|
|
- blk *= blksize; /* Read in terms partition table understands */
|
|
+ for (part = 1; (s32) blk>0 && part<=16; part++, put_dev_sector(sect)) {
|
|
+ /* Read in terms partition table understands */
|
|
+ if (check_mul_overflow(blk, (sector_t) blksize, &blk)) {
|
|
+ pr_err("Dev %s: overflow calculating partition block %llu! Skipping partitions %u and beyond\n",
|
|
+ bdevname(state->bdev, b), blk, part);
|
|
+ break;
|
|
+ }
|
|
data = read_part_sector(state, blk, §);
|
|
if (!data) {
|
|
if (warn_no_part)
|
|
- pr_err("Dev %s: unable to read partition block %d\n",
|
|
+ pr_err("Dev %s: unable to read partition block %llu\n",
|
|
bdevname(state->bdev, b), blk);
|
|
res = -1;
|
|
goto rdb_done;
|
|
@@ -98,19 +115,70 @@ int amiga_partition(struct parsed_partitions *state)
|
|
if (checksum_block((__be32 *)pb, be32_to_cpu(pb->pb_SummedLongs) & 0x7F) != 0 )
|
|
continue;
|
|
|
|
- /* Tell Kernel about it */
|
|
+ /* RDB gives us more than enough rope to hang ourselves with,
|
|
+ * many times over (2^128 bytes if all fields max out).
|
|
+ * Some careful checks are in order, so check for potential
|
|
+ * overflows.
|
|
+ * We are multiplying four 32 bit numbers to one sector_t!
|
|
+ */
|
|
+
|
|
+ nr_hd = be32_to_cpu(pb->pb_Environment[NR_HD]);
|
|
+ nr_sect = be32_to_cpu(pb->pb_Environment[NR_SECT]);
|
|
+
|
|
+ /* CylBlocks is total number of blocks per cylinder */
|
|
+ if (check_mul_overflow(nr_hd, nr_sect, &cylblk)) {
|
|
+ pr_err("Dev %s: heads*sects %u overflows u32, skipping partition!\n",
|
|
+ bdevname(state->bdev, b), cylblk);
|
|
+ continue;
|
|
+ }
|
|
+
|
|
+ /* check for consistency with RDB defined CylBlocks */
|
|
+ if (cylblk > be32_to_cpu(rdb->rdb_CylBlocks)) {
|
|
+ pr_warn("Dev %s: cylblk %u > rdb_CylBlocks %u!\n",
|
|
+ bdevname(state->bdev, b), cylblk,
|
|
+ be32_to_cpu(rdb->rdb_CylBlocks));
|
|
+ }
|
|
+
|
|
+ /* RDB allows for variable logical block size -
|
|
+ * normalize to 512 byte blocks and check result.
|
|
+ */
|
|
+
|
|
+ if (check_mul_overflow(cylblk, blksize, &cylblk)) {
|
|
+ pr_err("Dev %s: partition %u bytes per cyl. overflows u32, skipping partition!\n",
|
|
+ bdevname(state->bdev, b), part);
|
|
+ continue;
|
|
+ }
|
|
+
|
|
+ /* Calculate partition start and end. Limit of 32 bit on cylblk
|
|
+ * guarantees no overflow occurs if LBD support is enabled.
|
|
+ */
|
|
+
|
|
+ lo_cyl = be32_to_cpu(pb->pb_Environment[LO_CYL]);
|
|
+ start_sect = ((u64) lo_cyl * cylblk);
|
|
+
|
|
+ hi_cyl = be32_to_cpu(pb->pb_Environment[HI_CYL]);
|
|
+ nr_sects = (((u64) hi_cyl - lo_cyl + 1) * cylblk);
|
|
|
|
- nr_sects = (be32_to_cpu(pb->pb_Environment[10]) + 1 -
|
|
- be32_to_cpu(pb->pb_Environment[9])) *
|
|
- be32_to_cpu(pb->pb_Environment[3]) *
|
|
- be32_to_cpu(pb->pb_Environment[5]) *
|
|
- blksize;
|
|
if (!nr_sects)
|
|
continue;
|
|
- start_sect = be32_to_cpu(pb->pb_Environment[9]) *
|
|
- be32_to_cpu(pb->pb_Environment[3]) *
|
|
- be32_to_cpu(pb->pb_Environment[5]) *
|
|
- blksize;
|
|
+
|
|
+ /* Warn user if partition end overflows u32 (AmigaDOS limit) */
|
|
+
|
|
+ if ((start_sect + nr_sects) > UINT_MAX) {
|
|
+ pr_warn("Dev %s: partition %u (%llu-%llu) needs 64 bit device support!\n",
|
|
+ bdevname(state->bdev, b), part,
|
|
+ start_sect, start_sect + nr_sects);
|
|
+ }
|
|
+
|
|
+ if (check_add_overflow(start_sect, nr_sects, &end_sect)) {
|
|
+ pr_err("Dev %s: partition %u (%llu-%llu) needs LBD device support, skipping partition!\n",
|
|
+ bdevname(state->bdev, b), part,
|
|
+ start_sect, end_sect);
|
|
+ continue;
|
|
+ }
|
|
+
|
|
+ /* Tell Kernel about it */
|
|
+
|
|
put_partition(state,slot++,start_sect,nr_sects);
|
|
{
|
|
/* Be even more informative to aid mounting */
|
|
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
|
|
index edb791354421b..5be76197bc361 100644
|
|
--- a/drivers/base/power/domain.c
|
|
+++ b/drivers/base/power/domain.c
|
|
@@ -2596,10 +2596,10 @@ static int genpd_parse_state(struct genpd_power_state *genpd_state,
|
|
|
|
err = of_property_read_u32(state_node, "min-residency-us", &residency);
|
|
if (!err)
|
|
- genpd_state->residency_ns = 1000 * residency;
|
|
+ genpd_state->residency_ns = 1000LL * residency;
|
|
|
|
- genpd_state->power_on_latency_ns = 1000 * exit_latency;
|
|
- genpd_state->power_off_latency_ns = 1000 * entry_latency;
|
|
+ genpd_state->power_on_latency_ns = 1000LL * exit_latency;
|
|
+ genpd_state->power_off_latency_ns = 1000LL * entry_latency;
|
|
genpd_state->fwnode = &state_node->fwnode;
|
|
|
|
return 0;
|
|
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
|
|
index 218aa7e419700..37994a7a1b6f4 100644
|
|
--- a/drivers/block/nbd.c
|
|
+++ b/drivers/block/nbd.c
|
|
@@ -1708,7 +1708,8 @@ static int nbd_dev_add(int index)
|
|
if (err == -ENOSPC)
|
|
err = -EEXIST;
|
|
} else {
|
|
- err = idr_alloc(&nbd_index_idr, nbd, 0, 0, GFP_KERNEL);
|
|
+ err = idr_alloc(&nbd_index_idr, nbd, 0,
|
|
+ (MINORMASK >> part_shift) + 1, GFP_KERNEL);
|
|
if (err >= 0)
|
|
index = err;
|
|
}
|
|
diff --git a/drivers/char/hw_random/imx-rngc.c b/drivers/char/hw_random/imx-rngc.c
|
|
index 0576801944fdd..2e902419601de 100644
|
|
--- a/drivers/char/hw_random/imx-rngc.c
|
|
+++ b/drivers/char/hw_random/imx-rngc.c
|
|
@@ -99,7 +99,7 @@ static int imx_rngc_self_test(struct imx_rngc *rngc)
|
|
cmd = readl(rngc->base + RNGC_COMMAND);
|
|
writel(cmd | RNGC_CMD_SELF_TEST, rngc->base + RNGC_COMMAND);
|
|
|
|
- ret = wait_for_completion_timeout(&rngc->rng_op_done, RNGC_TIMEOUT);
|
|
+ ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT));
|
|
if (!ret) {
|
|
imx_rngc_irq_mask_clear(rngc);
|
|
return -ETIMEDOUT;
|
|
@@ -182,9 +182,7 @@ static int imx_rngc_init(struct hwrng *rng)
|
|
cmd = readl(rngc->base + RNGC_COMMAND);
|
|
writel(cmd | RNGC_CMD_SEED, rngc->base + RNGC_COMMAND);
|
|
|
|
- ret = wait_for_completion_timeout(&rngc->rng_op_done,
|
|
- RNGC_TIMEOUT);
|
|
-
|
|
+ ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT));
|
|
if (!ret) {
|
|
imx_rngc_irq_mask_clear(rngc);
|
|
return -ETIMEDOUT;
|
|
diff --git a/drivers/char/hw_random/st-rng.c b/drivers/char/hw_random/st-rng.c
|
|
index 863448360a7da..f708a99619ecb 100644
|
|
--- a/drivers/char/hw_random/st-rng.c
|
|
+++ b/drivers/char/hw_random/st-rng.c
|
|
@@ -12,6 +12,7 @@
|
|
#include <linux/delay.h>
|
|
#include <linux/hw_random.h>
|
|
#include <linux/io.h>
|
|
+#include <linux/kernel.h>
|
|
#include <linux/module.h>
|
|
#include <linux/of.h>
|
|
#include <linux/platform_device.h>
|
|
@@ -41,7 +42,6 @@
|
|
|
|
struct st_rng_data {
|
|
void __iomem *base;
|
|
- struct clk *clk;
|
|
struct hwrng ops;
|
|
};
|
|
|
|
@@ -86,26 +86,18 @@ static int st_rng_probe(struct platform_device *pdev)
|
|
if (IS_ERR(base))
|
|
return PTR_ERR(base);
|
|
|
|
- clk = devm_clk_get(&pdev->dev, NULL);
|
|
+ clk = devm_clk_get_enabled(&pdev->dev, NULL);
|
|
if (IS_ERR(clk))
|
|
return PTR_ERR(clk);
|
|
|
|
- ret = clk_prepare_enable(clk);
|
|
- if (ret)
|
|
- return ret;
|
|
-
|
|
ddata->ops.priv = (unsigned long)ddata;
|
|
ddata->ops.read = st_rng_read;
|
|
ddata->ops.name = pdev->name;
|
|
ddata->base = base;
|
|
- ddata->clk = clk;
|
|
-
|
|
- dev_set_drvdata(&pdev->dev, ddata);
|
|
|
|
ret = devm_hwrng_register(&pdev->dev, &ddata->ops);
|
|
if (ret) {
|
|
dev_err(&pdev->dev, "Failed to register HW RNG\n");
|
|
- clk_disable_unprepare(clk);
|
|
return ret;
|
|
}
|
|
|
|
@@ -114,16 +106,7 @@ static int st_rng_probe(struct platform_device *pdev)
|
|
return 0;
|
|
}
|
|
|
|
-static int st_rng_remove(struct platform_device *pdev)
|
|
-{
|
|
- struct st_rng_data *ddata = dev_get_drvdata(&pdev->dev);
|
|
-
|
|
- clk_disable_unprepare(ddata->clk);
|
|
-
|
|
- return 0;
|
|
-}
|
|
-
|
|
-static const struct of_device_id st_rng_match[] = {
|
|
+static const struct of_device_id st_rng_match[] __maybe_unused = {
|
|
{ .compatible = "st,rng" },
|
|
{},
|
|
};
|
|
@@ -135,7 +118,6 @@ static struct platform_driver st_rng_driver = {
|
|
.of_match_table = of_match_ptr(st_rng_match),
|
|
},
|
|
.probe = st_rng_probe,
|
|
- .remove = st_rng_remove
|
|
};
|
|
|
|
module_platform_driver(st_rng_driver);
|
|
diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
|
|
index 718d8c0876506..145d7b1055c07 100644
|
|
--- a/drivers/char/hw_random/virtio-rng.c
|
|
+++ b/drivers/char/hw_random/virtio-rng.c
|
|
@@ -4,6 +4,7 @@
|
|
* Copyright (C) 2007, 2008 Rusty Russell IBM Corporation
|
|
*/
|
|
|
|
+#include <asm/barrier.h>
|
|
#include <linux/err.h>
|
|
#include <linux/hw_random.h>
|
|
#include <linux/scatterlist.h>
|
|
@@ -17,71 +18,111 @@ static DEFINE_IDA(rng_index_ida);
|
|
struct virtrng_info {
|
|
struct hwrng hwrng;
|
|
struct virtqueue *vq;
|
|
- struct completion have_data;
|
|
char name[25];
|
|
- unsigned int data_avail;
|
|
int index;
|
|
- bool busy;
|
|
bool hwrng_register_done;
|
|
bool hwrng_removed;
|
|
+ /* data transfer */
|
|
+ struct completion have_data;
|
|
+ unsigned int data_avail;
|
|
+ unsigned int data_idx;
|
|
+ /* minimal size returned by rng_buffer_size() */
|
|
+#if SMP_CACHE_BYTES < 32
|
|
+ u8 data[32];
|
|
+#else
|
|
+ u8 data[SMP_CACHE_BYTES];
|
|
+#endif
|
|
};
|
|
|
|
static void random_recv_done(struct virtqueue *vq)
|
|
{
|
|
struct virtrng_info *vi = vq->vdev->priv;
|
|
+ unsigned int len;
|
|
|
|
/* We can get spurious callbacks, e.g. shared IRQs + virtio_pci. */
|
|
- if (!virtqueue_get_buf(vi->vq, &vi->data_avail))
|
|
+ if (!virtqueue_get_buf(vi->vq, &len))
|
|
return;
|
|
|
|
+ smp_store_release(&vi->data_avail, len);
|
|
complete(&vi->have_data);
|
|
}
|
|
|
|
-/* The host will fill any buffer we give it with sweet, sweet randomness. */
|
|
-static void register_buffer(struct virtrng_info *vi, u8 *buf, size_t size)
|
|
+static void request_entropy(struct virtrng_info *vi)
|
|
{
|
|
struct scatterlist sg;
|
|
|
|
- sg_init_one(&sg, buf, size);
|
|
+ reinit_completion(&vi->have_data);
|
|
+ vi->data_idx = 0;
|
|
+
|
|
+ sg_init_one(&sg, vi->data, sizeof(vi->data));
|
|
|
|
/* There should always be room for one buffer. */
|
|
- virtqueue_add_inbuf(vi->vq, &sg, 1, buf, GFP_KERNEL);
|
|
+ virtqueue_add_inbuf(vi->vq, &sg, 1, vi->data, GFP_KERNEL);
|
|
|
|
virtqueue_kick(vi->vq);
|
|
}
|
|
|
|
+static unsigned int copy_data(struct virtrng_info *vi, void *buf,
|
|
+ unsigned int size)
|
|
+{
|
|
+ size = min_t(unsigned int, size, vi->data_avail);
|
|
+ memcpy(buf, vi->data + vi->data_idx, size);
|
|
+ vi->data_idx += size;
|
|
+ vi->data_avail -= size;
|
|
+ if (vi->data_avail == 0)
|
|
+ request_entropy(vi);
|
|
+ return size;
|
|
+}
|
|
+
|
|
static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
|
|
{
|
|
int ret;
|
|
struct virtrng_info *vi = (struct virtrng_info *)rng->priv;
|
|
+ unsigned int chunk;
|
|
+ size_t read;
|
|
|
|
if (vi->hwrng_removed)
|
|
return -ENODEV;
|
|
|
|
- if (!vi->busy) {
|
|
- vi->busy = true;
|
|
- reinit_completion(&vi->have_data);
|
|
- register_buffer(vi, buf, size);
|
|
+ read = 0;
|
|
+
|
|
+ /* copy available data */
|
|
+ if (smp_load_acquire(&vi->data_avail)) {
|
|
+ chunk = copy_data(vi, buf, size);
|
|
+ size -= chunk;
|
|
+ read += chunk;
|
|
}
|
|
|
|
if (!wait)
|
|
- return 0;
|
|
-
|
|
- ret = wait_for_completion_killable(&vi->have_data);
|
|
- if (ret < 0)
|
|
- return ret;
|
|
+ return read;
|
|
+
|
|
+ /* We have already copied available entropy,
|
|
+ * so either size is 0 or data_avail is 0
|
|
+ */
|
|
+ while (size != 0) {
|
|
+ /* data_avail is 0 but a request is pending */
|
|
+ ret = wait_for_completion_killable(&vi->have_data);
|
|
+ if (ret < 0)
|
|
+ return ret;
|
|
+ /* if vi->data_avail is 0, we have been interrupted
|
|
+ * by a cleanup, but buffer stays in the queue
|
|
+ */
|
|
+ if (vi->data_avail == 0)
|
|
+ return read;
|
|
|
|
- vi->busy = false;
|
|
+ chunk = copy_data(vi, buf + read, size);
|
|
+ size -= chunk;
|
|
+ read += chunk;
|
|
+ }
|
|
|
|
- return vi->data_avail;
|
|
+ return read;
|
|
}
|
|
|
|
static void virtio_cleanup(struct hwrng *rng)
|
|
{
|
|
struct virtrng_info *vi = (struct virtrng_info *)rng->priv;
|
|
|
|
- if (vi->busy)
|
|
- wait_for_completion(&vi->have_data);
|
|
+ complete(&vi->have_data);
|
|
}
|
|
|
|
static int probe_common(struct virtio_device *vdev)
|
|
@@ -117,6 +158,9 @@ static int probe_common(struct virtio_device *vdev)
|
|
goto err_find;
|
|
}
|
|
|
|
+ /* we always have a pending entropy request */
|
|
+ request_entropy(vi);
|
|
+
|
|
return 0;
|
|
|
|
err_find:
|
|
@@ -132,9 +176,9 @@ static void remove_common(struct virtio_device *vdev)
|
|
|
|
vi->hwrng_removed = true;
|
|
vi->data_avail = 0;
|
|
+ vi->data_idx = 0;
|
|
complete(&vi->have_data);
|
|
vdev->config->reset(vdev);
|
|
- vi->busy = false;
|
|
if (vi->hwrng_register_done)
|
|
hwrng_unregister(&vi->hwrng);
|
|
vdev->config->del_vqs(vdev);
|
|
diff --git a/drivers/char/tpm/tpm_vtpm_proxy.c b/drivers/char/tpm/tpm_vtpm_proxy.c
|
|
index 2f6e087ec4965..ff6b88fa4f47b 100644
|
|
--- a/drivers/char/tpm/tpm_vtpm_proxy.c
|
|
+++ b/drivers/char/tpm/tpm_vtpm_proxy.c
|
|
@@ -693,37 +693,21 @@ static struct miscdevice vtpmx_miscdev = {
|
|
.fops = &vtpmx_fops,
|
|
};
|
|
|
|
-static int vtpmx_init(void)
|
|
-{
|
|
- return misc_register(&vtpmx_miscdev);
|
|
-}
|
|
-
|
|
-static void vtpmx_cleanup(void)
|
|
-{
|
|
- misc_deregister(&vtpmx_miscdev);
|
|
-}
|
|
-
|
|
static int __init vtpm_module_init(void)
|
|
{
|
|
int rc;
|
|
|
|
- rc = vtpmx_init();
|
|
- if (rc) {
|
|
- pr_err("couldn't create vtpmx device\n");
|
|
- return rc;
|
|
- }
|
|
-
|
|
workqueue = create_workqueue("tpm-vtpm");
|
|
if (!workqueue) {
|
|
pr_err("couldn't create workqueue\n");
|
|
- rc = -ENOMEM;
|
|
- goto err_vtpmx_cleanup;
|
|
+ return -ENOMEM;
|
|
}
|
|
|
|
- return 0;
|
|
-
|
|
-err_vtpmx_cleanup:
|
|
- vtpmx_cleanup();
|
|
+ rc = misc_register(&vtpmx_miscdev);
|
|
+ if (rc) {
|
|
+ pr_err("couldn't create vtpmx device\n");
|
|
+ destroy_workqueue(workqueue);
|
|
+ }
|
|
|
|
return rc;
|
|
}
|
|
@@ -731,7 +715,7 @@ err_vtpmx_cleanup:
|
|
static void __exit vtpm_module_exit(void)
|
|
{
|
|
destroy_workqueue(workqueue);
|
|
- vtpmx_cleanup();
|
|
+ misc_deregister(&vtpmx_miscdev);
|
|
}
|
|
|
|
module_init(vtpm_module_init);
|
|
diff --git a/drivers/clk/clk-cdce925.c b/drivers/clk/clk-cdce925.c
|
|
index 308b353815e17..470d91d7314db 100644
|
|
--- a/drivers/clk/clk-cdce925.c
|
|
+++ b/drivers/clk/clk-cdce925.c
|
|
@@ -705,6 +705,10 @@ static int cdce925_probe(struct i2c_client *client,
|
|
for (i = 0; i < data->chip_info->num_plls; ++i) {
|
|
pll_clk_name[i] = kasprintf(GFP_KERNEL, "%pOFn.pll%d",
|
|
client->dev.of_node, i);
|
|
+ if (!pll_clk_name[i]) {
|
|
+ err = -ENOMEM;
|
|
+ goto error;
|
|
+ }
|
|
init.name = pll_clk_name[i];
|
|
data->pll[i].chip = data;
|
|
data->pll[i].hw.init = &init;
|
|
@@ -746,6 +750,10 @@ static int cdce925_probe(struct i2c_client *client,
|
|
init.num_parents = 1;
|
|
init.parent_names = &parent_name; /* Mux Y1 to input */
|
|
init.name = kasprintf(GFP_KERNEL, "%pOFn.Y1", client->dev.of_node);
|
|
+ if (!init.name) {
|
|
+ err = -ENOMEM;
|
|
+ goto error;
|
|
+ }
|
|
data->clk[0].chip = data;
|
|
data->clk[0].hw.init = &init;
|
|
data->clk[0].index = 0;
|
|
@@ -764,6 +772,10 @@ static int cdce925_probe(struct i2c_client *client,
|
|
for (i = 1; i < data->chip_info->num_outputs; ++i) {
|
|
init.name = kasprintf(GFP_KERNEL, "%pOFn.Y%d",
|
|
client->dev.of_node, i+1);
|
|
+ if (!init.name) {
|
|
+ err = -ENOMEM;
|
|
+ goto error;
|
|
+ }
|
|
data->clk[i].chip = data;
|
|
data->clk[i].hw.init = &init;
|
|
data->clk[i].index = i;
|
|
diff --git a/drivers/clk/keystone/sci-clk.c b/drivers/clk/keystone/sci-clk.c
|
|
index 64ea895f1a7df..8e28e3489ded3 100644
|
|
--- a/drivers/clk/keystone/sci-clk.c
|
|
+++ b/drivers/clk/keystone/sci-clk.c
|
|
@@ -287,6 +287,8 @@ static int _sci_clk_build(struct sci_clk_provider *provider,
|
|
|
|
name = kasprintf(GFP_KERNEL, "clk:%d:%d", sci_clk->dev_id,
|
|
sci_clk->clk_id);
|
|
+ if (!name)
|
|
+ return -ENOMEM;
|
|
|
|
init.name = name;
|
|
|
|
diff --git a/drivers/clk/tegra/clk-emc.c b/drivers/clk/tegra/clk-emc.c
|
|
index 0c1b83bedb73d..eb2411a4cd783 100644
|
|
--- a/drivers/clk/tegra/clk-emc.c
|
|
+++ b/drivers/clk/tegra/clk-emc.c
|
|
@@ -459,6 +459,7 @@ static int load_timings_from_dt(struct tegra_clk_emc *tegra,
|
|
err = load_one_timing_from_dt(tegra, timing, child);
|
|
if (err) {
|
|
of_node_put(child);
|
|
+ kfree(tegra->timings);
|
|
return err;
|
|
}
|
|
|
|
@@ -510,6 +511,7 @@ struct clk *tegra_clk_register_emc(void __iomem *base, struct device_node *np,
|
|
err = load_timings_from_dt(tegra, node, node_ram_code);
|
|
if (err) {
|
|
of_node_put(node);
|
|
+ kfree(tegra);
|
|
return ERR_PTR(err);
|
|
}
|
|
}
|
|
diff --git a/drivers/clocksource/timer-cadence-ttc.c b/drivers/clocksource/timer-cadence-ttc.c
|
|
index 160bc6597de5b..bd49385178d0f 100644
|
|
--- a/drivers/clocksource/timer-cadence-ttc.c
|
|
+++ b/drivers/clocksource/timer-cadence-ttc.c
|
|
@@ -15,6 +15,8 @@
|
|
#include <linux/of_irq.h>
|
|
#include <linux/slab.h>
|
|
#include <linux/sched_clock.h>
|
|
+#include <linux/module.h>
|
|
+#include <linux/of_platform.h>
|
|
|
|
/*
|
|
* This driver configures the 2 16/32-bit count-up timers as follows:
|
|
@@ -464,13 +466,7 @@ out_kfree:
|
|
return err;
|
|
}
|
|
|
|
-/**
|
|
- * ttc_timer_init - Initialize the timer
|
|
- *
|
|
- * Initializes the timer hardware and register the clock source and clock event
|
|
- * timers with Linux kernal timer framework
|
|
- */
|
|
-static int __init ttc_timer_init(struct device_node *timer)
|
|
+static int __init ttc_timer_probe(struct platform_device *pdev)
|
|
{
|
|
unsigned int irq;
|
|
void __iomem *timer_baseaddr;
|
|
@@ -478,6 +474,7 @@ static int __init ttc_timer_init(struct device_node *timer)
|
|
static int initialized;
|
|
int clksel, ret;
|
|
u32 timer_width = 16;
|
|
+ struct device_node *timer = pdev->dev.of_node;
|
|
|
|
if (initialized)
|
|
return 0;
|
|
@@ -489,10 +486,10 @@ static int __init ttc_timer_init(struct device_node *timer)
|
|
* and use it. Note that the event timer uses the interrupt and it's the
|
|
* 2nd TTC hence the irq_of_parse_and_map(,1)
|
|
*/
|
|
- timer_baseaddr = of_iomap(timer, 0);
|
|
- if (!timer_baseaddr) {
|
|
+ timer_baseaddr = devm_of_iomap(&pdev->dev, timer, 0, NULL);
|
|
+ if (IS_ERR(timer_baseaddr)) {
|
|
pr_err("ERROR: invalid timer base address\n");
|
|
- return -ENXIO;
|
|
+ return PTR_ERR(timer_baseaddr);
|
|
}
|
|
|
|
irq = irq_of_parse_and_map(timer, 1);
|
|
@@ -516,20 +513,40 @@ static int __init ttc_timer_init(struct device_node *timer)
|
|
clk_ce = of_clk_get(timer, clksel);
|
|
if (IS_ERR(clk_ce)) {
|
|
pr_err("ERROR: timer input clock not found\n");
|
|
- return PTR_ERR(clk_ce);
|
|
+ ret = PTR_ERR(clk_ce);
|
|
+ goto put_clk_cs;
|
|
}
|
|
|
|
ret = ttc_setup_clocksource(clk_cs, timer_baseaddr, timer_width);
|
|
if (ret)
|
|
- return ret;
|
|
+ goto put_clk_ce;
|
|
|
|
ret = ttc_setup_clockevent(clk_ce, timer_baseaddr + 4, irq);
|
|
if (ret)
|
|
- return ret;
|
|
+ goto put_clk_ce;
|
|
|
|
pr_info("%pOFn #0 at %p, irq=%d\n", timer, timer_baseaddr, irq);
|
|
|
|
return 0;
|
|
+
|
|
+put_clk_ce:
|
|
+ clk_put(clk_ce);
|
|
+put_clk_cs:
|
|
+ clk_put(clk_cs);
|
|
+ return ret;
|
|
}
|
|
|
|
-TIMER_OF_DECLARE(ttc, "cdns,ttc", ttc_timer_init);
|
|
+static const struct of_device_id ttc_timer_of_match[] = {
|
|
+ {.compatible = "cdns,ttc"},
|
|
+ {},
|
|
+};
|
|
+
|
|
+MODULE_DEVICE_TABLE(of, ttc_timer_of_match);
|
|
+
|
|
+static struct platform_driver ttc_timer_driver = {
|
|
+ .driver = {
|
|
+ .name = "cdns_ttc_timer",
|
|
+ .of_match_table = ttc_timer_of_match,
|
|
+ },
|
|
+};
|
|
+builtin_platform_driver_probe(ttc_timer_driver, ttc_timer_probe);
|
|
diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
|
|
index 708dc63b2f099..c7d433d1cd99d 100644
|
|
--- a/drivers/crypto/marvell/cipher.c
|
|
+++ b/drivers/crypto/marvell/cipher.c
|
|
@@ -287,7 +287,7 @@ static int mv_cesa_des_setkey(struct crypto_skcipher *cipher, const u8 *key,
|
|
static int mv_cesa_des3_ede_setkey(struct crypto_skcipher *cipher,
|
|
const u8 *key, unsigned int len)
|
|
{
|
|
- struct mv_cesa_des_ctx *ctx = crypto_skcipher_ctx(cipher);
|
|
+ struct mv_cesa_des3_ctx *ctx = crypto_skcipher_ctx(cipher);
|
|
int err;
|
|
|
|
err = verify_skcipher_des3_key(cipher, key);
|
|
diff --git a/drivers/crypto/nx/Makefile b/drivers/crypto/nx/Makefile
|
|
index 015155da59c29..76139865d7fa1 100644
|
|
--- a/drivers/crypto/nx/Makefile
|
|
+++ b/drivers/crypto/nx/Makefile
|
|
@@ -1,7 +1,6 @@
|
|
# SPDX-License-Identifier: GPL-2.0
|
|
obj-$(CONFIG_CRYPTO_DEV_NX_ENCRYPT) += nx-crypto.o
|
|
nx-crypto-objs := nx.o \
|
|
- nx_debugfs.o \
|
|
nx-aes-cbc.o \
|
|
nx-aes-ecb.o \
|
|
nx-aes-gcm.o \
|
|
@@ -11,6 +10,7 @@ nx-crypto-objs := nx.o \
|
|
nx-sha256.o \
|
|
nx-sha512.o
|
|
|
|
+nx-crypto-$(CONFIG_DEBUG_FS) += nx_debugfs.o
|
|
obj-$(CONFIG_CRYPTO_DEV_NX_COMPRESS_PSERIES) += nx-compress-pseries.o nx-compress.o
|
|
obj-$(CONFIG_CRYPTO_DEV_NX_COMPRESS_POWERNV) += nx-compress-powernv.o nx-compress.o
|
|
nx-compress-objs := nx-842.o
|
|
diff --git a/drivers/crypto/nx/nx.h b/drivers/crypto/nx/nx.h
|
|
index 7ecca168f8c48..5c77aba450cf8 100644
|
|
--- a/drivers/crypto/nx/nx.h
|
|
+++ b/drivers/crypto/nx/nx.h
|
|
@@ -169,8 +169,8 @@ struct nx_sg *nx_walk_and_build(struct nx_sg *, unsigned int,
|
|
void nx_debugfs_init(struct nx_crypto_driver *);
|
|
void nx_debugfs_fini(struct nx_crypto_driver *);
|
|
#else
|
|
-#define NX_DEBUGFS_INIT(drv) (0)
|
|
-#define NX_DEBUGFS_FINI(drv) (0)
|
|
+#define NX_DEBUGFS_INIT(drv) do {} while (0)
|
|
+#define NX_DEBUGFS_FINI(drv) do {} while (0)
|
|
#endif
|
|
|
|
#define NX_PAGE_NUM(x) ((u64)(x) & 0xfffffffffffff000ULL)
|
|
diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
|
|
index 6b905c3d30f4f..12f9ae2aac113 100644
|
|
--- a/drivers/extcon/extcon.c
|
|
+++ b/drivers/extcon/extcon.c
|
|
@@ -196,6 +196,14 @@ static const struct __extcon_info {
|
|
* @attr_name: "name" sysfs entry
|
|
* @attr_state: "state" sysfs entry
|
|
* @attrs: the array pointing to attr_name and attr_state for attr_g
|
|
+ * @usb_propval: the array of USB connector properties
|
|
+ * @chg_propval: the array of charger connector properties
|
|
+ * @jack_propval: the array of jack connector properties
|
|
+ * @disp_propval: the array of display connector properties
|
|
+ * @usb_bits: the bit array of the USB connector property capabilities
|
|
+ * @chg_bits: the bit array of the charger connector property capabilities
|
|
+ * @jack_bits: the bit array of the jack connector property capabilities
|
|
+ * @disp_bits: the bit array of the display connector property capabilities
|
|
*/
|
|
struct extcon_cable {
|
|
struct extcon_dev *edev;
|
|
diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
|
|
index 7122bc6ea796b..2da2aa79c87e2 100644
|
|
--- a/drivers/firmware/stratix10-svc.c
|
|
+++ b/drivers/firmware/stratix10-svc.c
|
|
@@ -615,7 +615,7 @@ svc_create_memory_pool(struct platform_device *pdev,
|
|
end = rounddown(sh_memory->addr + sh_memory->size, PAGE_SIZE);
|
|
paddr = begin;
|
|
size = end - begin;
|
|
- va = memremap(paddr, size, MEMREMAP_WC);
|
|
+ va = devm_memremap(dev, paddr, size, MEMREMAP_WC);
|
|
if (!va) {
|
|
dev_err(dev, "fail to remap shared memory\n");
|
|
return ERR_PTR(-EINVAL);
|
|
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
|
|
index fb47ddc6f7f4e..dcf23b43f323c 100644
|
|
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
|
|
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
|
|
@@ -3076,6 +3076,10 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
|
|
struct amdgpu_fpriv *fpriv = filp->driver_priv;
|
|
int r;
|
|
|
|
+ /* No valid flags defined yet */
|
|
+ if (args->in.flags)
|
|
+ return -EINVAL;
|
|
+
|
|
switch (args->in.op) {
|
|
case AMDGPU_VM_OP_RESERVE_VMID:
|
|
/* current, we only have requirement to reserve vmid from gfxhub */
|
|
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
|
|
index d3380c5bdbdea..d978fcac26651 100644
|
|
--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
|
|
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
|
|
@@ -101,18 +101,19 @@ static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
|
|
&(mqd_mem_obj->gtt_mem),
|
|
&(mqd_mem_obj->gpu_addr),
|
|
(void *)&(mqd_mem_obj->cpu_ptr), true);
|
|
+
|
|
+ if (retval) {
|
|
+ kfree(mqd_mem_obj);
|
|
+ return NULL;
|
|
+ }
|
|
} else {
|
|
retval = kfd_gtt_sa_allocate(kfd, sizeof(struct v9_mqd),
|
|
&mqd_mem_obj);
|
|
- }
|
|
-
|
|
- if (retval) {
|
|
- kfree(mqd_mem_obj);
|
|
- return NULL;
|
|
+ if (retval)
|
|
+ return NULL;
|
|
}
|
|
|
|
return mqd_mem_obj;
|
|
-
|
|
}
|
|
|
|
static void init_mqd(struct mqd_manager *mm, void **mqd,
|
|
diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
|
|
index 2fc0f221fb4e2..47649186fed70 100644
|
|
--- a/drivers/gpu/drm/drm_atomic.c
|
|
+++ b/drivers/gpu/drm/drm_atomic.c
|
|
@@ -97,6 +97,12 @@ drm_atomic_state_init(struct drm_device *dev, struct drm_atomic_state *state)
|
|
if (!state->planes)
|
|
goto fail;
|
|
|
|
+ /*
|
|
+ * Because drm_atomic_state can be committed asynchronously we need our
|
|
+	 * own reference and cannot rely on the one implied by drm_file in the
|
|
+ * ioctl call.
|
|
+ */
|
|
+ drm_dev_get(dev);
|
|
state->dev = dev;
|
|
|
|
DRM_DEBUG_ATOMIC("Allocated atomic state %p\n", state);
|
|
@@ -256,7 +262,8 @@ EXPORT_SYMBOL(drm_atomic_state_clear);
|
|
void __drm_atomic_state_free(struct kref *ref)
|
|
{
|
|
struct drm_atomic_state *state = container_of(ref, typeof(*state), ref);
|
|
- struct drm_mode_config *config = &state->dev->mode_config;
|
|
+ struct drm_device *dev = state->dev;
|
|
+ struct drm_mode_config *config = &dev->mode_config;
|
|
|
|
drm_atomic_state_clear(state);
|
|
|
|
@@ -268,6 +275,8 @@ void __drm_atomic_state_free(struct kref *ref)
|
|
drm_atomic_state_default_release(state);
|
|
kfree(state);
|
|
}
|
|
+
|
|
+ drm_dev_put(dev);
|
|
}
|
|
EXPORT_SYMBOL(__drm_atomic_state_free);
|
|
|
|
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
|
|
index 62b77f3a950b8..d91d6c063a1d2 100644
|
|
--- a/drivers/gpu/drm/drm_atomic_helper.c
|
|
+++ b/drivers/gpu/drm/drm_atomic_helper.c
|
|
@@ -1086,7 +1086,16 @@ disable_outputs(struct drm_device *dev, struct drm_atomic_state *old_state)
|
|
continue;
|
|
|
|
ret = drm_crtc_vblank_get(crtc);
|
|
- WARN_ONCE(ret != -EINVAL, "driver forgot to call drm_crtc_vblank_off()\n");
|
|
+ /*
|
|
+ * Self-refresh is not a true "disable"; ensure vblank remains
|
|
+ * enabled.
|
|
+ */
|
|
+ if (new_crtc_state->self_refresh_active)
|
|
+ WARN_ONCE(ret != 0,
|
|
+ "driver disabled vblank in self-refresh\n");
|
|
+ else
|
|
+ WARN_ONCE(ret != -EINVAL,
|
|
+ "driver forgot to call drm_crtc_vblank_off()\n");
|
|
if (ret == 0)
|
|
drm_crtc_vblank_put(crtc);
|
|
}
|
|
diff --git a/drivers/gpu/drm/drm_client_modeset.c b/drivers/gpu/drm/drm_client_modeset.c
|
|
index bf1bdb0aac19b..10769efaf7cb3 100644
|
|
--- a/drivers/gpu/drm/drm_client_modeset.c
|
|
+++ b/drivers/gpu/drm/drm_client_modeset.c
|
|
@@ -281,6 +281,9 @@ static bool drm_client_target_cloned(struct drm_device *dev,
|
|
can_clone = true;
|
|
dmt_mode = drm_mode_find_dmt(dev, 1024, 768, 60, false);
|
|
|
|
+ if (!dmt_mode)
|
|
+ goto fail;
|
|
+
|
|
for (i = 0; i < connector_count; i++) {
|
|
if (!enabled[i])
|
|
continue;
|
|
@@ -296,11 +299,13 @@ static bool drm_client_target_cloned(struct drm_device *dev,
|
|
if (!modes[i])
|
|
can_clone = false;
|
|
}
|
|
+ kfree(dmt_mode);
|
|
|
|
if (can_clone) {
|
|
DRM_DEBUG_KMS("can clone using 1024x768\n");
|
|
return true;
|
|
}
|
|
+fail:
|
|
DRM_INFO("kms: can't enable cloning when we probably wanted to.\n");
|
|
return false;
|
|
}
|
|
@@ -785,6 +790,7 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width,
|
|
break;
|
|
}
|
|
|
|
+ kfree(modeset->mode);
|
|
modeset->mode = drm_mode_duplicate(dev, mode);
|
|
drm_connector_get(connector);
|
|
modeset->connectors[modeset->num_connectors++] = connector;
|
|
diff --git a/drivers/gpu/drm/drm_panel.c b/drivers/gpu/drm/drm_panel.c
|
|
index 6b0bf42039cfa..ed7985c0535a2 100644
|
|
--- a/drivers/gpu/drm/drm_panel.c
|
|
+++ b/drivers/gpu/drm/drm_panel.c
|
|
@@ -44,13 +44,21 @@ static LIST_HEAD(panel_list);
|
|
/**
|
|
* drm_panel_init - initialize a panel
|
|
* @panel: DRM panel
|
|
+ * @dev: parent device of the panel
|
|
+ * @funcs: panel operations
|
|
+ * @connector_type: the connector type (DRM_MODE_CONNECTOR_*) corresponding to
|
|
+ * the panel interface
|
|
*
|
|
- * Sets up internal fields of the panel so that it can subsequently be added
|
|
- * to the registry.
|
|
+ * Initialize the panel structure for subsequent registration with
|
|
+ * drm_panel_add().
|
|
*/
|
|
-void drm_panel_init(struct drm_panel *panel)
|
|
+void drm_panel_init(struct drm_panel *panel, struct device *dev,
|
|
+ const struct drm_panel_funcs *funcs, int connector_type)
|
|
{
|
|
INIT_LIST_HEAD(&panel->list);
|
|
+ panel->dev = dev;
|
|
+ panel->funcs = funcs;
|
|
+ panel->connector_type = connector_type;
|
|
}
|
|
EXPORT_SYMBOL(drm_panel_init);
|
|
|
|
diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
|
|
index dae6b33fc4c4a..dc5483b31c1ba 100644
|
|
--- a/drivers/gpu/drm/i915/intel_uncore.c
|
|
+++ b/drivers/gpu/drm/i915/intel_uncore.c
|
|
@@ -1926,13 +1926,14 @@ int __intel_wait_for_register_fw(struct intel_uncore *uncore,
|
|
unsigned int slow_timeout_ms,
|
|
u32 *out_value)
|
|
{
|
|
- u32 reg_value;
|
|
+ u32 reg_value = 0;
|
|
#define done (((reg_value = intel_uncore_read_fw(uncore, reg)) & mask) == value)
|
|
int ret;
|
|
|
|
/* Catch any overuse of this function */
|
|
might_sleep_if(slow_timeout_ms);
|
|
GEM_BUG_ON(fast_timeout_us > 20000);
|
|
+ GEM_BUG_ON(!fast_timeout_us && !slow_timeout_ms);
|
|
|
|
ret = -ETIMEDOUT;
|
|
if (fast_timeout_us && fast_timeout_us <= 20000)
|
|
diff --git a/drivers/gpu/drm/panel/panel-arm-versatile.c b/drivers/gpu/drm/panel/panel-arm-versatile.c
|
|
index 5f72c922a04b1..a0574dc03e16f 100644
|
|
--- a/drivers/gpu/drm/panel/panel-arm-versatile.c
|
|
+++ b/drivers/gpu/drm/panel/panel-arm-versatile.c
|
|
@@ -350,9 +350,8 @@ static int versatile_panel_probe(struct platform_device *pdev)
|
|
dev_info(dev, "panel mounted on IB2 daughterboard\n");
|
|
}
|
|
|
|
- drm_panel_init(&vpanel->panel);
|
|
- vpanel->panel.dev = dev;
|
|
- vpanel->panel.funcs = &versatile_panel_drm_funcs;
|
|
+ drm_panel_init(&vpanel->panel, dev, &versatile_panel_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
return drm_panel_add(&vpanel->panel);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c b/drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c
|
|
index dabf59e0f56fa..98f184b811873 100644
|
|
--- a/drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c
|
|
+++ b/drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c
|
|
@@ -204,9 +204,8 @@ static int feiyang_dsi_probe(struct mipi_dsi_device *dsi)
|
|
mipi_dsi_set_drvdata(dsi, ctx);
|
|
ctx->dsi = dsi;
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = &dsi->dev;
|
|
- ctx->panel.funcs = &feiyang_funcs;
|
|
+ drm_panel_init(&ctx->panel, &dsi->dev, &feiyang_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
ctx->dvdd = devm_regulator_get(&dsi->dev, "dvdd");
|
|
if (IS_ERR(ctx->dvdd)) {
|
|
diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9322.c b/drivers/gpu/drm/panel/panel-ilitek-ili9322.c
|
|
index 3c58f63adbf7e..24955bec1958b 100644
|
|
--- a/drivers/gpu/drm/panel/panel-ilitek-ili9322.c
|
|
+++ b/drivers/gpu/drm/panel/panel-ilitek-ili9322.c
|
|
@@ -895,9 +895,8 @@ static int ili9322_probe(struct spi_device *spi)
|
|
ili->input = ili->conf->input;
|
|
}
|
|
|
|
- drm_panel_init(&ili->panel);
|
|
- ili->panel.dev = dev;
|
|
- ili->panel.funcs = &ili9322_drm_funcs;
|
|
+ drm_panel_init(&ili->panel, dev, &ili9322_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
return drm_panel_add(&ili->panel);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c b/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
|
|
index 3ad4a46c4e945..e8789e460a169 100644
|
|
--- a/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
|
|
+++ b/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
|
|
@@ -433,9 +433,8 @@ static int ili9881c_dsi_probe(struct mipi_dsi_device *dsi)
|
|
mipi_dsi_set_drvdata(dsi, ctx);
|
|
ctx->dsi = dsi;
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = &dsi->dev;
|
|
- ctx->panel.funcs = &ili9881c_funcs;
|
|
+ drm_panel_init(&ctx->panel, &dsi->dev, &ili9881c_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
ctx->power = devm_regulator_get(&dsi->dev, "power");
|
|
if (IS_ERR(ctx->power)) {
|
|
diff --git a/drivers/gpu/drm/panel/panel-innolux-p079zca.c b/drivers/gpu/drm/panel/panel-innolux-p079zca.c
|
|
index df90b66079816..327fca97977ee 100644
|
|
--- a/drivers/gpu/drm/panel/panel-innolux-p079zca.c
|
|
+++ b/drivers/gpu/drm/panel/panel-innolux-p079zca.c
|
|
@@ -487,9 +487,8 @@ static int innolux_panel_add(struct mipi_dsi_device *dsi,
|
|
if (IS_ERR(innolux->backlight))
|
|
return PTR_ERR(innolux->backlight);
|
|
|
|
- drm_panel_init(&innolux->base);
|
|
- innolux->base.funcs = &innolux_panel_funcs;
|
|
- innolux->base.dev = dev;
|
|
+ drm_panel_init(&innolux->base, dev, &innolux_panel_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
err = drm_panel_add(&innolux->base);
|
|
if (err < 0)
|
|
diff --git a/drivers/gpu/drm/panel/panel-jdi-lt070me05000.c b/drivers/gpu/drm/panel/panel-jdi-lt070me05000.c
|
|
index ff3e89e61e3fc..56364a93f0b81 100644
|
|
--- a/drivers/gpu/drm/panel/panel-jdi-lt070me05000.c
|
|
+++ b/drivers/gpu/drm/panel/panel-jdi-lt070me05000.c
|
|
@@ -437,9 +437,8 @@ static int jdi_panel_add(struct jdi_panel *jdi)
|
|
return ret;
|
|
}
|
|
|
|
- drm_panel_init(&jdi->base);
|
|
- jdi->base.funcs = &jdi_panel_funcs;
|
|
- jdi->base.dev = &jdi->dsi->dev;
|
|
+ drm_panel_init(&jdi->base, &jdi->dsi->dev, &jdi_panel_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
ret = drm_panel_add(&jdi->base);
|
|
|
|
diff --git a/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c b/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
|
|
index 1e7fecab72a9f..2c576e7eee72f 100644
|
|
--- a/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
|
|
+++ b/drivers/gpu/drm/panel/panel-kingdisplay-kd097d04.c
|
|
@@ -391,9 +391,8 @@ static int kingdisplay_panel_add(struct kingdisplay_panel *kingdisplay)
|
|
if (IS_ERR(kingdisplay->backlight))
|
|
return PTR_ERR(kingdisplay->backlight);
|
|
|
|
- drm_panel_init(&kingdisplay->base);
|
|
- kingdisplay->base.funcs = &kingdisplay_panel_funcs;
|
|
- kingdisplay->base.dev = &kingdisplay->link->dev;
|
|
+ drm_panel_init(&kingdisplay->base, &kingdisplay->link->dev,
|
|
+ &kingdisplay_panel_funcs, DRM_MODE_CONNECTOR_DSI);
|
|
|
|
return drm_panel_add(&kingdisplay->base);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-lg-lb035q02.c b/drivers/gpu/drm/panel/panel-lg-lb035q02.c
|
|
index ee4379729a5b8..7a1385e834f0e 100644
|
|
--- a/drivers/gpu/drm/panel/panel-lg-lb035q02.c
|
|
+++ b/drivers/gpu/drm/panel/panel-lg-lb035q02.c
|
|
@@ -196,9 +196,8 @@ static int lb035q02_probe(struct spi_device *spi)
|
|
if (ret < 0)
|
|
return ret;
|
|
|
|
- drm_panel_init(&lcd->panel);
|
|
- lcd->panel.dev = &lcd->spi->dev;
|
|
- lcd->panel.funcs = &lb035q02_funcs;
|
|
+ drm_panel_init(&lcd->panel, &lcd->spi->dev, &lb035q02_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
return drm_panel_add(&lcd->panel);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-lg-lg4573.c b/drivers/gpu/drm/panel/panel-lg-lg4573.c
|
|
index 41bf02d122a1f..db4865a4c2b98 100644
|
|
--- a/drivers/gpu/drm/panel/panel-lg-lg4573.c
|
|
+++ b/drivers/gpu/drm/panel/panel-lg-lg4573.c
|
|
@@ -259,9 +259,8 @@ static int lg4573_probe(struct spi_device *spi)
|
|
return ret;
|
|
}
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = &spi->dev;
|
|
- ctx->panel.funcs = &lg4573_drm_funcs;
|
|
+ drm_panel_init(&ctx->panel, &spi->dev, &lg4573_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
return drm_panel_add(&ctx->panel);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-lvds.c b/drivers/gpu/drm/panel/panel-lvds.c
|
|
index bf5fcc3e53791..2405f26e5d31f 100644
|
|
--- a/drivers/gpu/drm/panel/panel-lvds.c
|
|
+++ b/drivers/gpu/drm/panel/panel-lvds.c
|
|
@@ -254,9 +254,8 @@ static int panel_lvds_probe(struct platform_device *pdev)
|
|
*/
|
|
|
|
/* Register the panel. */
|
|
- drm_panel_init(&lvds->panel);
|
|
- lvds->panel.dev = lvds->dev;
|
|
- lvds->panel.funcs = &panel_lvds_funcs;
|
|
+ drm_panel_init(&lvds->panel, lvds->dev, &panel_lvds_funcs,
|
|
+ DRM_MODE_CONNECTOR_LVDS);
|
|
|
|
ret = drm_panel_add(&lvds->panel);
|
|
if (ret < 0)
|
|
diff --git a/drivers/gpu/drm/panel/panel-nec-nl8048hl11.c b/drivers/gpu/drm/panel/panel-nec-nl8048hl11.c
|
|
index 20f17e46e65da..fd593532ab23c 100644
|
|
--- a/drivers/gpu/drm/panel/panel-nec-nl8048hl11.c
|
|
+++ b/drivers/gpu/drm/panel/panel-nec-nl8048hl11.c
|
|
@@ -205,9 +205,8 @@ static int nl8048_probe(struct spi_device *spi)
|
|
if (ret < 0)
|
|
return ret;
|
|
|
|
- drm_panel_init(&lcd->panel);
|
|
- lcd->panel.dev = &lcd->spi->dev;
|
|
- lcd->panel.funcs = &nl8048_funcs;
|
|
+ drm_panel_init(&lcd->panel, &lcd->spi->dev, &nl8048_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
return drm_panel_add(&lcd->panel);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-novatek-nt39016.c b/drivers/gpu/drm/panel/panel-novatek-nt39016.c
|
|
index 2ad1063b068d5..60ccedce530c2 100644
|
|
--- a/drivers/gpu/drm/panel/panel-novatek-nt39016.c
|
|
+++ b/drivers/gpu/drm/panel/panel-novatek-nt39016.c
|
|
@@ -292,9 +292,8 @@ static int nt39016_probe(struct spi_device *spi)
|
|
return err;
|
|
}
|
|
|
|
- drm_panel_init(&panel->drm_panel);
|
|
- panel->drm_panel.dev = dev;
|
|
- panel->drm_panel.funcs = &nt39016_funcs;
|
|
+ drm_panel_init(&panel->drm_panel, dev, &nt39016_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
err = drm_panel_add(&panel->drm_panel);
|
|
if (err < 0) {
|
|
diff --git a/drivers/gpu/drm/panel/panel-olimex-lcd-olinuxino.c b/drivers/gpu/drm/panel/panel-olimex-lcd-olinuxino.c
|
|
index 2bae1db3ff344..f2a72ee6ee07d 100644
|
|
--- a/drivers/gpu/drm/panel/panel-olimex-lcd-olinuxino.c
|
|
+++ b/drivers/gpu/drm/panel/panel-olimex-lcd-olinuxino.c
|
|
@@ -288,9 +288,8 @@ static int lcd_olinuxino_probe(struct i2c_client *client,
|
|
if (IS_ERR(lcd->backlight))
|
|
return PTR_ERR(lcd->backlight);
|
|
|
|
- drm_panel_init(&lcd->panel);
|
|
- lcd->panel.dev = dev;
|
|
- lcd->panel.funcs = &lcd_olinuxino_funcs;
|
|
+ drm_panel_init(&lcd->panel, dev, &lcd_olinuxino_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
return drm_panel_add(&lcd->panel);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
|
|
index 3ee265f1755f4..938826f326658 100644
|
|
--- a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
|
|
+++ b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
|
|
@@ -455,9 +455,8 @@ static int otm8009a_probe(struct mipi_dsi_device *dsi)
|
|
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
|
|
MIPI_DSI_MODE_LPM;
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = dev;
|
|
- ctx->panel.funcs = &otm8009a_drm_funcs;
|
|
+ drm_panel_init(&ctx->panel, dev, &otm8009a_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
ctx->bl_dev = devm_backlight_device_register(dev, dev_name(dev),
|
|
dev, ctx,
|
|
diff --git a/drivers/gpu/drm/panel/panel-osd-osd101t2587-53ts.c b/drivers/gpu/drm/panel/panel-osd-osd101t2587-53ts.c
|
|
index e0e20ecff916d..2b40913899d88 100644
|
|
--- a/drivers/gpu/drm/panel/panel-osd-osd101t2587-53ts.c
|
|
+++ b/drivers/gpu/drm/panel/panel-osd-osd101t2587-53ts.c
|
|
@@ -166,9 +166,8 @@ static int osd101t2587_panel_add(struct osd101t2587_panel *osd101t2587)
|
|
if (IS_ERR(osd101t2587->backlight))
|
|
return PTR_ERR(osd101t2587->backlight);
|
|
|
|
- drm_panel_init(&osd101t2587->base);
|
|
- osd101t2587->base.funcs = &osd101t2587_panel_funcs;
|
|
- osd101t2587->base.dev = &osd101t2587->dsi->dev;
|
|
+ drm_panel_init(&osd101t2587->base, &osd101t2587->dsi->dev,
|
|
+ &osd101t2587_panel_funcs, DRM_MODE_CONNECTOR_DSI);
|
|
|
|
return drm_panel_add(&osd101t2587->base);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-panasonic-vvx10f034n00.c b/drivers/gpu/drm/panel/panel-panasonic-vvx10f034n00.c
|
|
index 3dff0b3f73c23..664605071d342 100644
|
|
--- a/drivers/gpu/drm/panel/panel-panasonic-vvx10f034n00.c
|
|
+++ b/drivers/gpu/drm/panel/panel-panasonic-vvx10f034n00.c
|
|
@@ -223,9 +223,8 @@ static int wuxga_nt_panel_add(struct wuxga_nt_panel *wuxga_nt)
|
|
return -EPROBE_DEFER;
|
|
}
|
|
|
|
- drm_panel_init(&wuxga_nt->base);
|
|
- wuxga_nt->base.funcs = &wuxga_nt_panel_funcs;
|
|
- wuxga_nt->base.dev = &wuxga_nt->dsi->dev;
|
|
+ drm_panel_init(&wuxga_nt->base, &wuxga_nt->dsi->dev,
|
|
+ &wuxga_nt_panel_funcs, DRM_MODE_CONNECTOR_DSI);
|
|
|
|
ret = drm_panel_add(&wuxga_nt->base);
|
|
if (ret < 0)
|
|
diff --git a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
|
|
index a621dd28ff70d..2ccb74debc8ab 100644
|
|
--- a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
|
|
+++ b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
|
|
@@ -433,9 +433,8 @@ static int rpi_touchscreen_probe(struct i2c_client *i2c,
|
|
return PTR_ERR(ts->dsi);
|
|
}
|
|
|
|
- drm_panel_init(&ts->base);
|
|
- ts->base.dev = dev;
|
|
- ts->base.funcs = &rpi_touchscreen_funcs;
|
|
+ drm_panel_init(&ts->base, dev, &rpi_touchscreen_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
/* This appears last, as it's what will unblock the DSI host
|
|
* driver's component bind function.
|
|
diff --git a/drivers/gpu/drm/panel/panel-raydium-rm67191.c b/drivers/gpu/drm/panel/panel-raydium-rm67191.c
|
|
index 6a5d37006103e..fd67fc6185c4f 100644
|
|
--- a/drivers/gpu/drm/panel/panel-raydium-rm67191.c
|
|
+++ b/drivers/gpu/drm/panel/panel-raydium-rm67191.c
|
|
@@ -606,9 +606,8 @@ static int rad_panel_probe(struct mipi_dsi_device *dsi)
|
|
if (ret)
|
|
return ret;
|
|
|
|
- drm_panel_init(&panel->panel);
|
|
- panel->panel.funcs = &rad_panel_funcs;
|
|
- panel->panel.dev = dev;
|
|
+ drm_panel_init(&panel->panel, dev, &rad_panel_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
dev_set_drvdata(dev, panel);
|
|
|
|
ret = drm_panel_add(&panel->panel);
|
|
diff --git a/drivers/gpu/drm/panel/panel-raydium-rm68200.c b/drivers/gpu/drm/panel/panel-raydium-rm68200.c
|
|
index ba889625ad435..994e855721f4b 100644
|
|
--- a/drivers/gpu/drm/panel/panel-raydium-rm68200.c
|
|
+++ b/drivers/gpu/drm/panel/panel-raydium-rm68200.c
|
|
@@ -404,9 +404,8 @@ static int rm68200_probe(struct mipi_dsi_device *dsi)
|
|
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
|
|
MIPI_DSI_MODE_LPM;
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = dev;
|
|
- ctx->panel.funcs = &rm68200_drm_funcs;
|
|
+ drm_panel_init(&ctx->panel, dev, &rm68200_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
drm_panel_add(&ctx->panel);
|
|
|
|
diff --git a/drivers/gpu/drm/panel/panel-rocktech-jh057n00900.c b/drivers/gpu/drm/panel/panel-rocktech-jh057n00900.c
|
|
index b9109922397ff..31234b79d3b1a 100644
|
|
--- a/drivers/gpu/drm/panel/panel-rocktech-jh057n00900.c
|
|
+++ b/drivers/gpu/drm/panel/panel-rocktech-jh057n00900.c
|
|
@@ -343,9 +343,8 @@ static int jh057n_probe(struct mipi_dsi_device *dsi)
|
|
return ret;
|
|
}
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = dev;
|
|
- ctx->panel.funcs = &jh057n_drm_funcs;
|
|
+ drm_panel_init(&ctx->panel, dev, &jh057n_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
drm_panel_add(&ctx->panel);
|
|
|
|
diff --git a/drivers/gpu/drm/panel/panel-ronbo-rb070d30.c b/drivers/gpu/drm/panel/panel-ronbo-rb070d30.c
|
|
index 3c15764f0c039..170a5cda21b93 100644
|
|
--- a/drivers/gpu/drm/panel/panel-ronbo-rb070d30.c
|
|
+++ b/drivers/gpu/drm/panel/panel-ronbo-rb070d30.c
|
|
@@ -173,9 +173,8 @@ static int rb070d30_panel_dsi_probe(struct mipi_dsi_device *dsi)
|
|
mipi_dsi_set_drvdata(dsi, ctx);
|
|
ctx->dsi = dsi;
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = &dsi->dev;
|
|
- ctx->panel.funcs = &rb070d30_panel_funcs;
|
|
+ drm_panel_init(&ctx->panel, &dsi->dev, &rb070d30_panel_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
ctx->gpios.reset = devm_gpiod_get(&dsi->dev, "reset", GPIOD_OUT_LOW);
|
|
if (IS_ERR(ctx->gpios.reset)) {
|
|
diff --git a/drivers/gpu/drm/panel/panel-samsung-ld9040.c b/drivers/gpu/drm/panel/panel-samsung-ld9040.c
|
|
index 3be902dcedc02..250809ba37c7e 100644
|
|
--- a/drivers/gpu/drm/panel/panel-samsung-ld9040.c
|
|
+++ b/drivers/gpu/drm/panel/panel-samsung-ld9040.c
|
|
@@ -351,9 +351,8 @@ static int ld9040_probe(struct spi_device *spi)
|
|
return ret;
|
|
}
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = dev;
|
|
- ctx->panel.funcs = &ld9040_drm_funcs;
|
|
+ drm_panel_init(&ctx->panel, dev, &ld9040_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
return drm_panel_add(&ctx->panel);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c b/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
|
|
index f75bef24e0504..e3a0397e953ee 100644
|
|
--- a/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
|
|
+++ b/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
|
|
@@ -215,9 +215,8 @@ static int s6d16d0_probe(struct mipi_dsi_device *dsi)
|
|
return ret;
|
|
}
|
|
|
|
- drm_panel_init(&s6->panel);
|
|
- s6->panel.dev = dev;
|
|
- s6->panel.funcs = &s6d16d0_drm_funcs;
|
|
+ drm_panel_init(&s6->panel, dev, &s6d16d0_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
ret = drm_panel_add(&s6->panel);
|
|
if (ret < 0)
|
|
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c b/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c
|
|
index b923de23ed654..938ab72c55404 100644
|
|
--- a/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c
|
|
+++ b/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c
|
|
@@ -732,9 +732,8 @@ static int s6e3ha2_probe(struct mipi_dsi_device *dsi)
|
|
ctx->bl_dev->props.brightness = S6E3HA2_DEFAULT_BRIGHTNESS;
|
|
ctx->bl_dev->props.power = FB_BLANK_POWERDOWN;
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = dev;
|
|
- ctx->panel.funcs = &s6e3ha2_drm_funcs;
|
|
+ drm_panel_init(&ctx->panel, dev, &s6e3ha2_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
ret = drm_panel_add(&ctx->panel);
|
|
if (ret < 0)
|
|
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c b/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c
|
|
index cd90fa700c493..a60635e9226da 100644
|
|
--- a/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c
|
|
+++ b/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c
|
|
@@ -466,9 +466,8 @@ static int s6e63j0x03_probe(struct mipi_dsi_device *dsi)
|
|
return PTR_ERR(ctx->reset_gpio);
|
|
}
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = dev;
|
|
- ctx->panel.funcs = &s6e63j0x03_funcs;
|
|
+ drm_panel_init(&ctx->panel, dev, &s6e63j0x03_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
ctx->bl_dev = backlight_device_register("s6e63j0x03", dev, ctx,
|
|
&s6e63j0x03_bl_ops, NULL);
|
|
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e63m0.c b/drivers/gpu/drm/panel/panel-samsung-s6e63m0.c
|
|
index 142d395ea5129..ba01af0b14fd3 100644
|
|
--- a/drivers/gpu/drm/panel/panel-samsung-s6e63m0.c
|
|
+++ b/drivers/gpu/drm/panel/panel-samsung-s6e63m0.c
|
|
@@ -473,9 +473,8 @@ static int s6e63m0_probe(struct spi_device *spi)
|
|
return ret;
|
|
}
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = dev;
|
|
- ctx->panel.funcs = &s6e63m0_drm_funcs;
|
|
+ drm_panel_init(&ctx->panel, dev, &s6e63m0_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
ret = s6e63m0_backlight_register(ctx);
|
|
if (ret < 0)
|
|
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
|
|
index 81858267723ad..dbced65012045 100644
|
|
--- a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
|
|
+++ b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
|
|
@@ -1017,9 +1017,8 @@ static int s6e8aa0_probe(struct mipi_dsi_device *dsi)
|
|
|
|
ctx->brightness = GAMMA_LEVEL_NUM - 1;
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = dev;
|
|
- ctx->panel.funcs = &s6e8aa0_drm_funcs;
|
|
+ drm_panel_init(&ctx->panel, dev, &s6e8aa0_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
ret = drm_panel_add(&ctx->panel);
|
|
if (ret < 0)
|
|
diff --git a/drivers/gpu/drm/panel/panel-seiko-43wvf1g.c b/drivers/gpu/drm/panel/panel-seiko-43wvf1g.c
|
|
index 18b22b1294fbc..b3619ba443bd2 100644
|
|
--- a/drivers/gpu/drm/panel/panel-seiko-43wvf1g.c
|
|
+++ b/drivers/gpu/drm/panel/panel-seiko-43wvf1g.c
|
|
@@ -274,9 +274,8 @@ static int seiko_panel_probe(struct device *dev,
|
|
return -EPROBE_DEFER;
|
|
}
|
|
|
|
- drm_panel_init(&panel->base);
|
|
- panel->base.dev = dev;
|
|
- panel->base.funcs = &seiko_panel_funcs;
|
|
+ drm_panel_init(&panel->base, dev, &seiko_panel_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
err = drm_panel_add(&panel->base);
|
|
if (err < 0)
|
|
diff --git a/drivers/gpu/drm/panel/panel-sharp-lq101r1sx01.c b/drivers/gpu/drm/panel/panel-sharp-lq101r1sx01.c
|
|
index e910b4ad13104..5e136c3ba1850 100644
|
|
--- a/drivers/gpu/drm/panel/panel-sharp-lq101r1sx01.c
|
|
+++ b/drivers/gpu/drm/panel/panel-sharp-lq101r1sx01.c
|
|
@@ -329,9 +329,8 @@ static int sharp_panel_add(struct sharp_panel *sharp)
|
|
if (IS_ERR(sharp->backlight))
|
|
return PTR_ERR(sharp->backlight);
|
|
|
|
- drm_panel_init(&sharp->base);
|
|
- sharp->base.funcs = &sharp_panel_funcs;
|
|
- sharp->base.dev = &sharp->link1->dev;
|
|
+ drm_panel_init(&sharp->base, &sharp->link1->dev, &sharp_panel_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
return drm_panel_add(&sharp->base);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-sharp-ls037v7dw01.c b/drivers/gpu/drm/panel/panel-sharp-ls037v7dw01.c
|
|
index 46cd9a2501298..eeab7998c7de4 100644
|
|
--- a/drivers/gpu/drm/panel/panel-sharp-ls037v7dw01.c
|
|
+++ b/drivers/gpu/drm/panel/panel-sharp-ls037v7dw01.c
|
|
@@ -185,9 +185,8 @@ static int ls037v7dw01_probe(struct platform_device *pdev)
|
|
return PTR_ERR(lcd->ud_gpio);
|
|
}
|
|
|
|
- drm_panel_init(&lcd->panel);
|
|
- lcd->panel.dev = &pdev->dev;
|
|
- lcd->panel.funcs = &ls037v7dw01_funcs;
|
|
+ drm_panel_init(&lcd->panel, &pdev->dev, &ls037v7dw01_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
return drm_panel_add(&lcd->panel);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
|
|
index c39abde9f9f10..b963ba4ab5898 100644
|
|
--- a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
|
|
+++ b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
|
|
@@ -264,9 +264,8 @@ static int sharp_nt_panel_add(struct sharp_nt_panel *sharp_nt)
|
|
if (IS_ERR(sharp_nt->backlight))
|
|
return PTR_ERR(sharp_nt->backlight);
|
|
|
|
- drm_panel_init(&sharp_nt->base);
|
|
- sharp_nt->base.funcs = &sharp_nt_panel_funcs;
|
|
- sharp_nt->base.dev = &sharp_nt->dsi->dev;
|
|
+ drm_panel_init(&sharp_nt->base, &sharp_nt->dsi->dev,
|
|
+ &sharp_nt_panel_funcs, DRM_MODE_CONNECTOR_DSI);
|
|
|
|
return drm_panel_add(&sharp_nt->base);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
|
|
index 312a3c4e23318..a87b79c8d76f7 100644
|
|
--- a/drivers/gpu/drm/panel/panel-simple.c
|
|
+++ b/drivers/gpu/drm/panel/panel-simple.c
|
|
@@ -94,6 +94,7 @@ struct panel_desc {
|
|
|
|
u32 bus_format;
|
|
u32 bus_flags;
|
|
+ int connector_type;
|
|
};
|
|
|
|
struct panel_simple {
|
|
@@ -464,9 +465,8 @@ static int panel_simple_probe(struct device *dev, const struct panel_desc *desc)
|
|
if (!of_get_display_timing(dev->of_node, "panel-timing", &dt))
|
|
panel_simple_parse_panel_timing_node(dev, panel, &dt);
|
|
|
|
- drm_panel_init(&panel->base);
|
|
- panel->base.dev = dev;
|
|
- panel->base.funcs = &panel_simple_funcs;
|
|
+ drm_panel_init(&panel->base, dev, &panel_simple_funcs,
|
|
+ desc->connector_type);
|
|
|
|
err = drm_panel_add(&panel->base);
|
|
if (err < 0)
|
|
@@ -531,8 +531,8 @@ static const struct panel_desc ampire_am_480272h3tmqw_t01h = {
|
|
.num_modes = 1,
|
|
.bpc = 8,
|
|
.size = {
|
|
- .width = 105,
|
|
- .height = 67,
|
|
+ .width = 99,
|
|
+ .height = 58,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
|
|
};
|
|
@@ -833,6 +833,7 @@ static const struct panel_desc auo_g133han01 = {
|
|
.unprepare = 1000,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct display_timing auo_g185han01_timings = {
|
|
@@ -862,6 +863,7 @@ static const struct panel_desc auo_g185han01 = {
|
|
.unprepare = 1000,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct display_timing auo_p320hvn03_timings = {
|
|
@@ -890,6 +892,7 @@ static const struct panel_desc auo_p320hvn03 = {
|
|
.unprepare = 500,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct drm_display_mode auo_t215hvn01_mode = {
|
|
@@ -1205,6 +1208,7 @@ static const struct panel_desc dlc_dlc0700yzg_1 = {
|
|
.disable = 200,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct display_timing dlc_dlc1010gig_timing = {
|
|
@@ -1235,6 +1239,7 @@ static const struct panel_desc dlc_dlc1010gig = {
|
|
.unprepare = 60,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct drm_display_mode edt_et035012dm6_mode = {
|
|
@@ -1501,6 +1506,7 @@ static const struct panel_desc hannstar_hsd070pww1 = {
|
|
.height = 94,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct display_timing hannstar_hsd100pxn1_timing = {
|
|
@@ -1525,6 +1531,7 @@ static const struct panel_desc hannstar_hsd100pxn1 = {
|
|
.height = 152,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct drm_display_mode hitachi_tx23d38vm0caa_mode = {
|
|
@@ -1577,6 +1584,7 @@ static const struct panel_desc innolux_at043tn24 = {
|
|
.height = 54,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_DPI,
|
|
.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE,
|
|
};
|
|
|
|
@@ -1631,6 +1639,7 @@ static const struct panel_desc innolux_g070y2_l01 = {
|
|
.unprepare = 800,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct display_timing innolux_g101ice_l01_timing = {
|
|
@@ -1659,6 +1668,7 @@ static const struct panel_desc innolux_g101ice_l01 = {
|
|
.disable = 200,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct display_timing innolux_g121i1_l01_timing = {
|
|
@@ -1686,6 +1696,7 @@ static const struct panel_desc innolux_g121i1_l01 = {
|
|
.disable = 20,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct drm_display_mode innolux_g121x1_l03_mode = {
|
|
@@ -1869,6 +1880,7 @@ static const struct panel_desc koe_tx31d200vm0baa = {
|
|
.height = 109,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct display_timing kyo_tcg121xglp_timing = {
|
|
@@ -1893,6 +1905,7 @@ static const struct panel_desc kyo_tcg121xglp = {
|
|
.height = 184,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct drm_display_mode lemaker_bl035_rgb_002_mode = {
|
|
@@ -1941,6 +1954,7 @@ static const struct panel_desc lg_lb070wv8 = {
|
|
.height = 91,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct drm_display_mode lg_lp079qx1_sp0v_mode = {
|
|
@@ -2097,6 +2111,7 @@ static const struct panel_desc mitsubishi_aa070mc01 = {
|
|
.disable = 400,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
.bus_flags = DRM_BUS_FLAG_DE_HIGH,
|
|
};
|
|
|
|
@@ -2125,6 +2140,7 @@ static const struct panel_desc nec_nl12880bc20_05 = {
|
|
.disable = 50,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct drm_display_mode nec_nl4827hc19_05b_mode = {
|
|
@@ -2227,6 +2243,7 @@ static const struct panel_desc nlt_nl192108ac18_02d = {
|
|
.unprepare = 500,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct drm_display_mode nvd_9128_mode = {
|
|
@@ -2250,6 +2267,7 @@ static const struct panel_desc nvd_9128 = {
|
|
.height = 88,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct display_timing okaya_rs800480t_7x0gp_timing = {
|
|
@@ -2662,6 +2680,7 @@ static const struct panel_desc sharp_lq101k1ly04 = {
|
|
.height = 136,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct display_timing sharp_lq123p1jx31_timing = {
|
|
@@ -2841,6 +2860,7 @@ static const struct panel_desc tianma_tm070jdhg30 = {
|
|
.height = 95,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct display_timing tianma_tm070rvhg71_timing = {
|
|
@@ -2865,6 +2885,7 @@ static const struct panel_desc tianma_tm070rvhg71 = {
|
|
.height = 86,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct drm_display_mode ti_nspire_cx_lcd_mode[] = {
|
|
@@ -2947,6 +2968,7 @@ static const struct panel_desc toshiba_lt089ac29000 = {
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
|
|
.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct drm_display_mode tpk_f07a_0102_mode = {
|
|
@@ -3017,6 +3039,7 @@ static const struct panel_desc urt_umsh_8596md_lvds = {
|
|
.height = 91,
|
|
},
|
|
.bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
|
|
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
|
|
};
|
|
|
|
static const struct panel_desc urt_umsh_8596md_parallel = {
|
|
diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7701.c b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
|
|
index 638f605acb2db..1d2fd6cc66740 100644
|
|
--- a/drivers/gpu/drm/panel/panel-sitronix-st7701.c
|
|
+++ b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
|
|
@@ -369,7 +369,8 @@ static int st7701_dsi_probe(struct mipi_dsi_device *dsi)
|
|
if (IS_ERR(st7701->backlight))
|
|
return PTR_ERR(st7701->backlight);
|
|
|
|
- drm_panel_init(&st7701->panel);
|
|
+ drm_panel_init(&st7701->panel, &dsi->dev, &st7701_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
|
|
/**
|
|
* Once sleep out has been issued, ST7701 IC required to wait 120ms
|
|
@@ -381,8 +382,6 @@ static int st7701_dsi_probe(struct mipi_dsi_device *dsi)
|
|
* ts8550b and there is no valid documentation for that.
|
|
*/
|
|
st7701->sleep_delay = 120 + desc->panel_sleep_delay;
|
|
- st7701->panel.funcs = &st7701_funcs;
|
|
- st7701->panel.dev = &dsi->dev;
|
|
|
|
ret = drm_panel_add(&st7701->panel);
|
|
if (ret < 0)
|
|
diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7789v.c b/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
|
|
index 3b2612ae931e8..108a85bb66672 100644
|
|
--- a/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
|
|
+++ b/drivers/gpu/drm/panel/panel-sitronix-st7789v.c
|
|
@@ -381,9 +381,8 @@ static int st7789v_probe(struct spi_device *spi)
|
|
spi_set_drvdata(spi, ctx);
|
|
ctx->spi = spi;
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = &spi->dev;
|
|
- ctx->panel.funcs = &st7789v_drm_funcs;
|
|
+ drm_panel_init(&ctx->panel, &spi->dev, &st7789v_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
ctx->power = devm_regulator_get(&spi->dev, "power");
|
|
if (IS_ERR(ctx->power))
|
|
diff --git a/drivers/gpu/drm/panel/panel-sony-acx565akm.c b/drivers/gpu/drm/panel/panel-sony-acx565akm.c
|
|
index 3d5b9c4f68d98..d6387d8f88a3f 100644
|
|
--- a/drivers/gpu/drm/panel/panel-sony-acx565akm.c
|
|
+++ b/drivers/gpu/drm/panel/panel-sony-acx565akm.c
|
|
@@ -648,9 +648,8 @@ static int acx565akm_probe(struct spi_device *spi)
|
|
return ret;
|
|
}
|
|
|
|
- drm_panel_init(&lcd->panel);
|
|
- lcd->panel.dev = &lcd->spi->dev;
|
|
- lcd->panel.funcs = &acx565akm_funcs;
|
|
+ drm_panel_init(&lcd->panel, &lcd->spi->dev, &acx565akm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
ret = drm_panel_add(&lcd->panel);
|
|
if (ret < 0) {
|
|
diff --git a/drivers/gpu/drm/panel/panel-tpo-td028ttec1.c b/drivers/gpu/drm/panel/panel-tpo-td028ttec1.c
|
|
index f2baff827f507..c44d6a65c0aa2 100644
|
|
--- a/drivers/gpu/drm/panel/panel-tpo-td028ttec1.c
|
|
+++ b/drivers/gpu/drm/panel/panel-tpo-td028ttec1.c
|
|
@@ -347,9 +347,8 @@ static int td028ttec1_probe(struct spi_device *spi)
|
|
return ret;
|
|
}
|
|
|
|
- drm_panel_init(&lcd->panel);
|
|
- lcd->panel.dev = &lcd->spi->dev;
|
|
- lcd->panel.funcs = &td028ttec1_funcs;
|
|
+ drm_panel_init(&lcd->panel, &lcd->spi->dev, &td028ttec1_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
return drm_panel_add(&lcd->panel);
|
|
}
|
|
diff --git a/drivers/gpu/drm/panel/panel-tpo-td043mtea1.c b/drivers/gpu/drm/panel/panel-tpo-td043mtea1.c
|
|
index ba163c779084c..621b65feec070 100644
|
|
--- a/drivers/gpu/drm/panel/panel-tpo-td043mtea1.c
|
|
+++ b/drivers/gpu/drm/panel/panel-tpo-td043mtea1.c
|
|
@@ -458,9 +458,8 @@ static int td043mtea1_probe(struct spi_device *spi)
|
|
return ret;
|
|
}
|
|
|
|
- drm_panel_init(&lcd->panel);
|
|
- lcd->panel.dev = &lcd->spi->dev;
|
|
- lcd->panel.funcs = &td043mtea1_funcs;
|
|
+ drm_panel_init(&lcd->panel, &lcd->spi->dev, &td043mtea1_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
|
|
ret = drm_panel_add(&lcd->panel);
|
|
if (ret < 0) {
|
|
diff --git a/drivers/gpu/drm/panel/panel-tpo-tpg110.c b/drivers/gpu/drm/panel/panel-tpo-tpg110.c
|
|
index 71591e5f59383..1a5418ae2ccf3 100644
|
|
--- a/drivers/gpu/drm/panel/panel-tpo-tpg110.c
|
|
+++ b/drivers/gpu/drm/panel/panel-tpo-tpg110.c
|
|
@@ -457,9 +457,8 @@ static int tpg110_probe(struct spi_device *spi)
|
|
if (ret)
|
|
return ret;
|
|
|
|
- drm_panel_init(&tpg->panel);
|
|
- tpg->panel.dev = dev;
|
|
- tpg->panel.funcs = &tpg110_drm_funcs;
|
|
+ drm_panel_init(&tpg->panel, dev, &tpg110_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DPI);
|
|
spi_set_drvdata(spi, tpg);
|
|
|
|
return drm_panel_add(&tpg->panel);
|
|
diff --git a/drivers/gpu/drm/panel/panel-truly-nt35597.c b/drivers/gpu/drm/panel/panel-truly-nt35597.c
|
|
index 77e1311b7c692..0feea2456e14b 100644
|
|
--- a/drivers/gpu/drm/panel/panel-truly-nt35597.c
|
|
+++ b/drivers/gpu/drm/panel/panel-truly-nt35597.c
|
|
@@ -518,9 +518,8 @@ static int truly_nt35597_panel_add(struct truly_nt35597 *ctx)
|
|
/* dual port */
|
|
gpiod_set_value(ctx->mode_gpio, 0);
|
|
|
|
- drm_panel_init(&ctx->panel);
|
|
- ctx->panel.dev = dev;
|
|
- ctx->panel.funcs = &truly_nt35597_drm_funcs;
|
|
+ drm_panel_init(&ctx->panel, dev, &truly_nt35597_drm_funcs,
|
|
+ DRM_MODE_CONNECTOR_DSI);
|
|
drm_panel_add(&ctx->panel);
|
|
|
|
return 0;
|
|
diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c
|
|
index 1e62e7bbf1b1d..5403f4c902b64 100644
|
|
--- a/drivers/gpu/drm/radeon/ci_dpm.c
|
|
+++ b/drivers/gpu/drm/radeon/ci_dpm.c
|
|
@@ -5556,6 +5556,7 @@ static int ci_parse_power_table(struct radeon_device *rdev)
|
|
u8 frev, crev;
|
|
u8 *power_state_offset;
|
|
struct ci_ps *ps;
|
|
+ int ret;
|
|
|
|
if (!atom_parse_data_header(mode_info->atom_context, index, NULL,
|
|
&frev, &crev, &data_offset))
|
|
@@ -5585,11 +5586,15 @@ static int ci_parse_power_table(struct radeon_device *rdev)
|
|
non_clock_array_index = power_state->v2.nonClockInfoIndex;
|
|
non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
|
|
&non_clock_info_array->nonClockInfo[non_clock_array_index];
|
|
- if (!rdev->pm.power_state[i].clock_info)
|
|
- return -EINVAL;
|
|
+ if (!rdev->pm.power_state[i].clock_info) {
|
|
+ ret = -EINVAL;
|
|
+ goto err_free_ps;
|
|
+ }
|
|
ps = kzalloc(sizeof(struct ci_ps), GFP_KERNEL);
|
|
- if (ps == NULL)
|
|
- return -ENOMEM;
|
|
+ if (ps == NULL) {
|
|
+ ret = -ENOMEM;
|
|
+ goto err_free_ps;
|
|
+ }
|
|
rdev->pm.dpm.ps[i].ps_priv = ps;
|
|
ci_parse_pplib_non_clock_info(rdev, &rdev->pm.dpm.ps[i],
|
|
non_clock_info,
|
|
@@ -5629,6 +5634,12 @@ static int ci_parse_power_table(struct radeon_device *rdev)
|
|
}
|
|
|
|
return 0;
|
|
+
|
|
+err_free_ps:
|
|
+ for (i = 0; i < rdev->pm.dpm.num_ps; i++)
|
|
+ kfree(rdev->pm.dpm.ps[i].ps_priv);
|
|
+ kfree(rdev->pm.dpm.ps);
|
|
+ return ret;
|
|
}
|
|
|
|
static int ci_get_vbios_boot_values(struct radeon_device *rdev,
|
|
@@ -5717,25 +5728,26 @@ int ci_dpm_init(struct radeon_device *rdev)
|
|
|
|
ret = ci_get_vbios_boot_values(rdev, &pi->vbios_boot_state);
|
|
if (ret) {
|
|
- ci_dpm_fini(rdev);
|
|
+ kfree(rdev->pm.dpm.priv);
|
|
return ret;
|
|
}
|
|
|
|
ret = r600_get_platform_caps(rdev);
|
|
if (ret) {
|
|
- ci_dpm_fini(rdev);
|
|
+ kfree(rdev->pm.dpm.priv);
|
|
return ret;
|
|
}
|
|
|
|
ret = r600_parse_extended_power_table(rdev);
|
|
if (ret) {
|
|
- ci_dpm_fini(rdev);
|
|
+ kfree(rdev->pm.dpm.priv);
|
|
return ret;
|
|
}
|
|
|
|
ret = ci_parse_power_table(rdev);
|
|
if (ret) {
|
|
- ci_dpm_fini(rdev);
|
|
+ kfree(rdev->pm.dpm.priv);
|
|
+ r600_free_extended_power_table(rdev);
|
|
return ret;
|
|
}
|
|
|
|
diff --git a/drivers/gpu/drm/radeon/cypress_dpm.c b/drivers/gpu/drm/radeon/cypress_dpm.c
|
|
index 32ed60f1048bd..b31d65a6752f1 100644
|
|
--- a/drivers/gpu/drm/radeon/cypress_dpm.c
|
|
+++ b/drivers/gpu/drm/radeon/cypress_dpm.c
|
|
@@ -559,8 +559,12 @@ static int cypress_populate_mclk_value(struct radeon_device *rdev,
|
|
ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
|
|
u32 reference_clock = rdev->clock.mpll.reference_freq;
|
|
u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
|
|
- u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
|
|
- u32 clk_v = ss.percentage *
|
|
+ u32 clk_s, clk_v;
|
|
+
|
|
+ if (!decoded_ref)
|
|
+ return -EINVAL;
|
|
+ clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
|
|
+ clk_v = ss.percentage *
|
|
(0x4000 * dividers.whole_fb_div + 0x800 * dividers.frac_fb_div) / (clk_s * 625);
|
|
|
|
mpll_ss1 &= ~CLKV_MASK;
|
|
diff --git a/drivers/gpu/drm/radeon/ni_dpm.c b/drivers/gpu/drm/radeon/ni_dpm.c
|
|
index 288ec3039bc2c..cad7a73a551f7 100644
|
|
--- a/drivers/gpu/drm/radeon/ni_dpm.c
|
|
+++ b/drivers/gpu/drm/radeon/ni_dpm.c
|
|
@@ -2241,8 +2241,12 @@ static int ni_populate_mclk_value(struct radeon_device *rdev,
|
|
ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
|
|
u32 reference_clock = rdev->clock.mpll.reference_freq;
|
|
u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
|
|
- u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
|
|
- u32 clk_v = ss.percentage *
|
|
+ u32 clk_s, clk_v;
|
|
+
|
|
+ if (!decoded_ref)
|
|
+ return -EINVAL;
|
|
+ clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
|
|
+ clk_v = ss.percentage *
|
|
(0x4000 * dividers.whole_fb_div + 0x800 * dividers.frac_fb_div) / (clk_s * 625);
|
|
|
|
mpll_ss1 &= ~CLKV_MASK;
|
|
diff --git a/drivers/gpu/drm/radeon/rv740_dpm.c b/drivers/gpu/drm/radeon/rv740_dpm.c
|
|
index 327d65a76e1f4..79b2de65e905e 100644
|
|
--- a/drivers/gpu/drm/radeon/rv740_dpm.c
|
|
+++ b/drivers/gpu/drm/radeon/rv740_dpm.c
|
|
@@ -250,8 +250,12 @@ int rv740_populate_mclk_value(struct radeon_device *rdev,
|
|
ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
|
|
u32 reference_clock = rdev->clock.mpll.reference_freq;
|
|
u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
|
|
- u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
|
|
- u32 clk_v = 0x40000 * ss.percentage *
|
|
+ u32 clk_s, clk_v;
|
|
+
|
|
+ if (!decoded_ref)
|
|
+ return -EINVAL;
|
|
+ clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
|
|
+ clk_v = 0x40000 * ss.percentage *
|
|
(dividers.whole_fb_div + (dividers.frac_fb_div / 8)) / (clk_s * 10000);
|
|
|
|
mpll_ss1 &= ~CLKV_MASK;
|
|
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
|
|
index 57e0396662c34..1795adbd81d38 100644
|
|
--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
|
|
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
|
|
@@ -654,13 +654,13 @@ static void vop_crtc_atomic_disable(struct drm_crtc *crtc,
|
|
if (crtc->state->self_refresh_active)
|
|
rockchip_drm_set_win_enabled(crtc, false);
|
|
|
|
+ if (crtc->state->self_refresh_active)
|
|
+ goto out;
|
|
+
|
|
mutex_lock(&vop->vop_lock);
|
|
|
|
drm_crtc_vblank_off(crtc);
|
|
|
|
- if (crtc->state->self_refresh_active)
|
|
- goto out;
|
|
-
|
|
/*
|
|
* Vop standby will take effect at end of current frame,
|
|
* if dsp hold valid irq happen, it means standby complete.
|
|
@@ -692,9 +692,9 @@ static void vop_crtc_atomic_disable(struct drm_crtc *crtc,
|
|
vop_core_clks_disable(vop);
|
|
pm_runtime_put(vop->dev);
|
|
|
|
-out:
|
|
mutex_unlock(&vop->vop_lock);
|
|
|
|
+out:
|
|
if (crtc->state->event && !crtc->state->active) {
|
|
spin_lock_irq(&crtc->dev->event_lock);
|
|
drm_crtc_send_vblank_event(crtc, crtc->state->event);
|
|
diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c
|
|
index eb3b2350687fb..193c7f979bcaa 100644
|
|
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
|
|
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
|
|
@@ -753,21 +753,19 @@ static irqreturn_t sun4i_tcon_handler(int irq, void *private)
|
|
static int sun4i_tcon_init_clocks(struct device *dev,
|
|
struct sun4i_tcon *tcon)
|
|
{
|
|
- tcon->clk = devm_clk_get(dev, "ahb");
|
|
+ tcon->clk = devm_clk_get_enabled(dev, "ahb");
|
|
if (IS_ERR(tcon->clk)) {
|
|
dev_err(dev, "Couldn't get the TCON bus clock\n");
|
|
return PTR_ERR(tcon->clk);
|
|
}
|
|
- clk_prepare_enable(tcon->clk);
|
|
|
|
if (tcon->quirks->has_channel_0) {
|
|
- tcon->sclk0 = devm_clk_get(dev, "tcon-ch0");
|
|
+ tcon->sclk0 = devm_clk_get_enabled(dev, "tcon-ch0");
|
|
if (IS_ERR(tcon->sclk0)) {
|
|
dev_err(dev, "Couldn't get the TCON channel 0 clock\n");
|
|
return PTR_ERR(tcon->sclk0);
|
|
}
|
|
}
|
|
- clk_prepare_enable(tcon->sclk0);
|
|
|
|
if (tcon->quirks->has_channel_1) {
|
|
tcon->sclk1 = devm_clk_get(dev, "tcon-ch1");
|
|
@@ -780,12 +778,6 @@ static int sun4i_tcon_init_clocks(struct device *dev,
|
|
return 0;
|
|
}
|
|
|
|
-static void sun4i_tcon_free_clocks(struct sun4i_tcon *tcon)
|
|
-{
|
|
- clk_disable_unprepare(tcon->sclk0);
|
|
- clk_disable_unprepare(tcon->clk);
|
|
-}
|
|
-
|
|
static int sun4i_tcon_init_irq(struct device *dev,
|
|
struct sun4i_tcon *tcon)
|
|
{
|
|
@@ -1202,14 +1194,14 @@ static int sun4i_tcon_bind(struct device *dev, struct device *master,
|
|
ret = sun4i_tcon_init_regmap(dev, tcon);
|
|
if (ret) {
|
|
dev_err(dev, "Couldn't init our TCON regmap\n");
|
|
- goto err_free_clocks;
|
|
+ goto err_assert_reset;
|
|
}
|
|
|
|
if (tcon->quirks->has_channel_0) {
|
|
ret = sun4i_dclk_create(dev, tcon);
|
|
if (ret) {
|
|
dev_err(dev, "Couldn't create our TCON dot clock\n");
|
|
- goto err_free_clocks;
|
|
+ goto err_assert_reset;
|
|
}
|
|
}
|
|
|
|
@@ -1272,8 +1264,6 @@ static int sun4i_tcon_bind(struct device *dev, struct device *master,
|
|
err_free_dotclock:
|
|
if (tcon->quirks->has_channel_0)
|
|
sun4i_dclk_free(tcon);
|
|
-err_free_clocks:
|
|
- sun4i_tcon_free_clocks(tcon);
|
|
err_assert_reset:
|
|
reset_control_assert(tcon->lcd_rst);
|
|
return ret;
|
|
@@ -1287,7 +1277,6 @@ static void sun4i_tcon_unbind(struct device *dev, struct device *master,
|
|
list_del(&tcon->list);
|
|
if (tcon->quirks->has_channel_0)
|
|
sun4i_dclk_free(tcon);
|
|
- sun4i_tcon_free_clocks(tcon);
|
|
}
|
|
|
|
static const struct component_ops sun4i_tcon_ops = {
|
|
diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
|
|
index 75761939f02bd..28da9b4087c3b 100644
|
|
--- a/drivers/hid/wacom_wac.c
|
|
+++ b/drivers/hid/wacom_wac.c
|
|
@@ -1307,7 +1307,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
|
|
struct input_dev *pen_input = wacom->pen_input;
|
|
unsigned char *data = wacom->data;
|
|
int number_of_valid_frames = 0;
|
|
- int time_interval = 15000000;
|
|
+ ktime_t time_interval = 15000000;
|
|
ktime_t time_packet_received = ktime_get();
|
|
int i;
|
|
|
|
@@ -1341,7 +1341,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
|
|
if (number_of_valid_frames) {
|
|
if (wacom->hid_data.time_delayed)
|
|
time_interval = ktime_get() - wacom->hid_data.time_delayed;
|
|
- time_interval /= number_of_valid_frames;
|
|
+ time_interval = div_u64(time_interval, number_of_valid_frames);
|
|
wacom->hid_data.time_delayed = time_packet_received;
|
|
}
|
|
|
|
@@ -1352,7 +1352,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
|
|
bool range = frame[0] & 0x20;
|
|
bool invert = frame[0] & 0x10;
|
|
int frames_number_reversed = number_of_valid_frames - i - 1;
|
|
- int event_timestamp = time_packet_received - frames_number_reversed * time_interval;
|
|
+ ktime_t event_timestamp = time_packet_received - frames_number_reversed * time_interval;
|
|
|
|
if (!valid)
|
|
continue;
|
|
diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
|
|
index 88badfbae999c..166731292c359 100644
|
|
--- a/drivers/hid/wacom_wac.h
|
|
+++ b/drivers/hid/wacom_wac.h
|
|
@@ -320,7 +320,7 @@ struct hid_data {
|
|
int bat_connected;
|
|
int ps_connected;
|
|
bool pad_input_event_flag;
|
|
- int time_delayed;
|
|
+ ktime_t time_delayed;
|
|
};
|
|
|
|
struct wacom_remote_data {
|
|
diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
|
|
index c92ea6990ec69..6bcb46cc28cdf 100644
|
|
--- a/drivers/i2c/busses/i2c-xiic.c
|
|
+++ b/drivers/i2c/busses/i2c-xiic.c
|
|
@@ -353,6 +353,9 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
|
|
struct xiic_i2c *i2c = dev_id;
|
|
u32 pend, isr, ier;
|
|
u32 clr = 0;
|
|
+ int xfer_more = 0;
|
|
+ int wakeup_req = 0;
|
|
+ int wakeup_code = 0;
|
|
|
|
/* Get the interrupt Status from the IPIF. There is no clearing of
|
|
* interrupts in the IPIF. Interrupts must be cleared at the source.
|
|
@@ -389,10 +392,16 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
|
|
*/
|
|
xiic_reinit(i2c);
|
|
|
|
- if (i2c->rx_msg)
|
|
- xiic_wakeup(i2c, STATE_ERROR);
|
|
- if (i2c->tx_msg)
|
|
- xiic_wakeup(i2c, STATE_ERROR);
|
|
+ if (i2c->rx_msg) {
|
|
+ wakeup_req = 1;
|
|
+ wakeup_code = STATE_ERROR;
|
|
+ }
|
|
+ if (i2c->tx_msg) {
|
|
+ wakeup_req = 1;
|
|
+ wakeup_code = STATE_ERROR;
|
|
+ }
|
|
+ /* don't try to handle other events */
|
|
+ goto out;
|
|
}
|
|
if (pend & XIIC_INTR_RX_FULL_MASK) {
|
|
/* Receive register/FIFO is full */
|
|
@@ -426,8 +435,7 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
|
|
i2c->tx_msg++;
|
|
dev_dbg(i2c->adap.dev.parent,
|
|
"%s will start next...\n", __func__);
|
|
-
|
|
- __xiic_start_xfer(i2c);
|
|
+ xfer_more = 1;
|
|
}
|
|
}
|
|
}
|
|
@@ -441,11 +449,13 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
|
|
if (!i2c->tx_msg)
|
|
goto out;
|
|
|
|
- if ((i2c->nmsgs == 1) && !i2c->rx_msg &&
|
|
- xiic_tx_space(i2c) == 0)
|
|
- xiic_wakeup(i2c, STATE_DONE);
|
|
+ wakeup_req = 1;
|
|
+
|
|
+ if (i2c->nmsgs == 1 && !i2c->rx_msg &&
|
|
+ xiic_tx_space(i2c) == 0)
|
|
+ wakeup_code = STATE_DONE;
|
|
else
|
|
- xiic_wakeup(i2c, STATE_ERROR);
|
|
+ wakeup_code = STATE_ERROR;
|
|
}
|
|
if (pend & (XIIC_INTR_TX_EMPTY_MASK | XIIC_INTR_TX_HALF_MASK)) {
|
|
/* Transmit register/FIFO is empty or ½ empty */
|
|
@@ -469,7 +479,7 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
|
|
if (i2c->nmsgs > 1) {
|
|
i2c->nmsgs--;
|
|
i2c->tx_msg++;
|
|
- __xiic_start_xfer(i2c);
|
|
+ xfer_more = 1;
|
|
} else {
|
|
xiic_irq_dis(i2c, XIIC_INTR_TX_HALF_MASK);
|
|
|
|
@@ -487,6 +497,13 @@ out:
|
|
dev_dbg(i2c->adap.dev.parent, "%s clr: 0x%x\n", __func__, clr);
|
|
|
|
xiic_setreg32(i2c, XIIC_IISR_OFFSET, clr);
|
|
+ if (xfer_more)
|
|
+ __xiic_start_xfer(i2c);
|
|
+ if (wakeup_req)
|
|
+ xiic_wakeup(i2c, wakeup_code);
|
|
+
|
|
+ WARN_ON(xfer_more && wakeup_req);
|
|
+
|
|
mutex_unlock(&i2c->lock);
|
|
return IRQ_HANDLED;
|
|
}
|
|
diff --git a/drivers/iio/adc/meson_saradc.c b/drivers/iio/adc/meson_saradc.c
|
|
index 7b27306330a35..1a82be03624cb 100644
|
|
--- a/drivers/iio/adc/meson_saradc.c
|
|
+++ b/drivers/iio/adc/meson_saradc.c
|
|
@@ -71,7 +71,7 @@
|
|
#define MESON_SAR_ADC_REG3_PANEL_DETECT_COUNT_MASK GENMASK(20, 18)
|
|
#define MESON_SAR_ADC_REG3_PANEL_DETECT_FILTER_TB_MASK GENMASK(17, 16)
|
|
#define MESON_SAR_ADC_REG3_ADC_CLK_DIV_SHIFT 10
|
|
- #define MESON_SAR_ADC_REG3_ADC_CLK_DIV_WIDTH 5
|
|
+ #define MESON_SAR_ADC_REG3_ADC_CLK_DIV_WIDTH 6
|
|
#define MESON_SAR_ADC_REG3_BLOCK_DLY_SEL_MASK GENMASK(9, 8)
|
|
#define MESON_SAR_ADC_REG3_BLOCK_DLY_MASK GENMASK(7, 0)
|
|
|
|
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
|
|
index 5fc5ab7813c0f..18b579c8a8c55 100644
|
|
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
|
|
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
|
|
@@ -2606,11 +2606,8 @@ static int bnxt_qplib_cq_process_terminal(struct bnxt_qplib_cq *cq,
|
|
|
|
qp = (struct bnxt_qplib_qp *)((unsigned long)
|
|
le64_to_cpu(hwcqe->qp_handle));
|
|
- if (!qp) {
|
|
- dev_err(&cq->hwq.pdev->dev,
|
|
- "FP: CQ Process terminal qp is NULL\n");
|
|
+ if (!qp)
|
|
return -EINVAL;
|
|
- }
|
|
|
|
/* Must block new posting of SQ and RQ */
|
|
qp->state = CMDQ_MODIFY_QP_NEW_STATE_ERR;
|
|
diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
|
|
index 2a684fc6056e1..057c9ffcd02e1 100644
|
|
--- a/drivers/infiniband/hw/hfi1/sdma.c
|
|
+++ b/drivers/infiniband/hw/hfi1/sdma.c
|
|
@@ -3203,8 +3203,7 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
|
|
{
|
|
int rval = 0;
|
|
|
|
- tx->num_desc++;
|
|
- if ((unlikely(tx->num_desc == tx->desc_limit))) {
|
|
+ if ((unlikely(tx->num_desc + 1 == tx->desc_limit))) {
|
|
rval = _extend_sdma_tx_descs(dd, tx);
|
|
if (rval) {
|
|
__sdma_txclean(dd, tx);
|
|
@@ -3217,6 +3216,7 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
|
|
SDMA_MAP_NONE,
|
|
dd->sdma_pad_phys,
|
|
sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)));
|
|
+ tx->num_desc++;
|
|
_sdma_close_tx(dd, tx);
|
|
return rval;
|
|
}
|
|
diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
|
|
index 1e2e40f79cb20..6ac00755848db 100644
|
|
--- a/drivers/infiniband/hw/hfi1/sdma.h
|
|
+++ b/drivers/infiniband/hw/hfi1/sdma.h
|
|
@@ -672,14 +672,13 @@ static inline void sdma_txclean(struct hfi1_devdata *dd, struct sdma_txreq *tx)
|
|
static inline void _sdma_close_tx(struct hfi1_devdata *dd,
|
|
struct sdma_txreq *tx)
|
|
{
|
|
- tx->descp[tx->num_desc].qw[0] |=
|
|
- SDMA_DESC0_LAST_DESC_FLAG;
|
|
- tx->descp[tx->num_desc].qw[1] |=
|
|
- dd->default_desc1;
|
|
+ u16 last_desc = tx->num_desc - 1;
|
|
+
|
|
+ tx->descp[last_desc].qw[0] |= SDMA_DESC0_LAST_DESC_FLAG;
|
|
+ tx->descp[last_desc].qw[1] |= dd->default_desc1;
|
|
if (tx->flags & SDMA_TXREQ_F_URGENT)
|
|
- tx->descp[tx->num_desc].qw[1] |=
|
|
- (SDMA_DESC1_HEAD_TO_HOST_FLAG |
|
|
- SDMA_DESC1_INT_REQ_FLAG);
|
|
+ tx->descp[last_desc].qw[1] |= (SDMA_DESC1_HEAD_TO_HOST_FLAG |
|
|
+ SDMA_DESC1_INT_REQ_FLAG);
|
|
}
|
|
|
|
static inline int _sdma_txadd_daddr(
|
|
@@ -696,6 +695,7 @@ static inline int _sdma_txadd_daddr(
|
|
type,
|
|
addr, len);
|
|
WARN_ON(len > tx->tlen);
|
|
+ tx->num_desc++;
|
|
tx->tlen -= len;
|
|
/* special cases for last */
|
|
if (!tx->tlen) {
|
|
@@ -707,7 +707,6 @@ static inline int _sdma_txadd_daddr(
|
|
_sdma_close_tx(dd, tx);
|
|
}
|
|
}
|
|
- tx->num_desc++;
|
|
return rval;
|
|
}
|
|
|
|
diff --git a/drivers/input/misc/adxl34x.c b/drivers/input/misc/adxl34x.c
|
|
index 4cc4e8ff42b33..ad035c342cd3b 100644
|
|
--- a/drivers/input/misc/adxl34x.c
|
|
+++ b/drivers/input/misc/adxl34x.c
|
|
@@ -811,8 +811,7 @@ struct adxl34x *adxl34x_probe(struct device *dev, int irq,
|
|
AC_WRITE(ac, POWER_CTL, 0);
|
|
|
|
err = request_threaded_irq(ac->irq, NULL, adxl34x_irq,
|
|
- IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
|
|
- dev_name(dev), ac);
|
|
+ IRQF_ONESHOT, dev_name(dev), ac);
|
|
if (err) {
|
|
dev_err(dev, "irq %d busy?\n", ac->irq);
|
|
goto err_free_mem;
|
|
diff --git a/drivers/input/misc/drv260x.c b/drivers/input/misc/drv260x.c
|
|
index 79d7fa710a714..54002d1a446b7 100644
|
|
--- a/drivers/input/misc/drv260x.c
|
|
+++ b/drivers/input/misc/drv260x.c
|
|
@@ -435,6 +435,7 @@ static int drv260x_init(struct drv260x_data *haptics)
|
|
}
|
|
|
|
do {
|
|
+ usleep_range(15000, 15500);
|
|
error = regmap_read(haptics->regmap, DRV260X_GO, &cal_buf);
|
|
if (error) {
|
|
dev_err(&haptics->client->dev,
|
|
diff --git a/drivers/irqchip/irq-jcore-aic.c b/drivers/irqchip/irq-jcore-aic.c
|
|
index 033bccb41455c..b9dcc8e78c750 100644
|
|
--- a/drivers/irqchip/irq-jcore-aic.c
|
|
+++ b/drivers/irqchip/irq-jcore-aic.c
|
|
@@ -68,6 +68,7 @@ static int __init aic_irq_of_init(struct device_node *node,
|
|
unsigned min_irq = JCORE_AIC2_MIN_HWIRQ;
|
|
unsigned dom_sz = JCORE_AIC_MAX_HWIRQ+1;
|
|
struct irq_domain *domain;
|
|
+ int ret;
|
|
|
|
pr_info("Initializing J-Core AIC\n");
|
|
|
|
@@ -100,11 +101,17 @@ static int __init aic_irq_of_init(struct device_node *node,
|
|
jcore_aic.irq_unmask = noop;
|
|
jcore_aic.name = "AIC";
|
|
|
|
- domain = irq_domain_add_linear(node, dom_sz, &jcore_aic_irqdomain_ops,
|
|
+ ret = irq_alloc_descs(-1, min_irq, dom_sz - min_irq,
|
|
+ of_node_to_nid(node));
|
|
+
|
|
+ if (ret < 0)
|
|
+ return ret;
|
|
+
|
|
+ domain = irq_domain_add_legacy(node, dom_sz - min_irq, min_irq, min_irq,
|
|
+ &jcore_aic_irqdomain_ops,
|
|
&jcore_aic);
|
|
if (!domain)
|
|
return -ENOMEM;
|
|
- irq_create_strict_mappings(domain, min_irq, min_irq, dom_sz - min_irq);
|
|
|
|
return 0;
|
|
}
|
|
diff --git a/drivers/mailbox/ti-msgmgr.c b/drivers/mailbox/ti-msgmgr.c
|
|
index 88047d835211c..75f14b624ca22 100644
|
|
--- a/drivers/mailbox/ti-msgmgr.c
|
|
+++ b/drivers/mailbox/ti-msgmgr.c
|
|
@@ -385,14 +385,20 @@ static int ti_msgmgr_send_data(struct mbox_chan *chan, void *data)
|
|
/* Ensure all unused data is 0 */
|
|
data_trail &= 0xFFFFFFFF >> (8 * (sizeof(u32) - trail_bytes));
|
|
writel(data_trail, data_reg);
|
|
- data_reg++;
|
|
+ data_reg += sizeof(u32);
|
|
}
|
|
+
|
|
/*
|
|
* 'data_reg' indicates next register to write. If we did not already
|
|
* write on tx complete reg(last reg), we must do so for transmit
|
|
+ * In addition, we also need to make sure all intermediate data
|
|
+ * registers(if any required), are reset to 0 for TISCI backward
|
|
+ * compatibility to be maintained.
|
|
*/
|
|
- if (data_reg <= qinst->queue_buff_end)
|
|
- writel(0, qinst->queue_buff_end);
|
|
+ while (data_reg <= qinst->queue_buff_end) {
|
|
+ writel(0, data_reg);
|
|
+ data_reg += sizeof(u32);
|
|
+ }
|
|
|
|
return 0;
|
|
}
|
|
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
|
|
index 5a33910aea788..b7fea84d19ad9 100644
|
|
--- a/drivers/md/bcache/btree.c
|
|
+++ b/drivers/md/bcache/btree.c
|
|
@@ -1186,7 +1186,7 @@ static struct btree *btree_node_alloc_replacement(struct btree *b,
|
|
{
|
|
struct btree *n = bch_btree_node_alloc(b->c, op, b->level, b->parent);
|
|
|
|
- if (!IS_ERR_OR_NULL(n)) {
|
|
+ if (!IS_ERR(n)) {
|
|
mutex_lock(&n->write_lock);
|
|
bch_btree_sort_into(&b->keys, &n->keys, &b->c->sort);
|
|
bkey_copy_key(&n->key, &b->key);
|
|
@@ -1389,7 +1389,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
|
|
memset(new_nodes, 0, sizeof(new_nodes));
|
|
closure_init_stack(&cl);
|
|
|
|
- while (nodes < GC_MERGE_NODES && !IS_ERR_OR_NULL(r[nodes].b))
|
|
+ while (nodes < GC_MERGE_NODES && !IS_ERR(r[nodes].b))
|
|
keys += r[nodes++].keys;
|
|
|
|
blocks = btree_default_blocks(b->c) * 2 / 3;
|
|
@@ -1401,7 +1401,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
|
|
|
|
for (i = 0; i < nodes; i++) {
|
|
new_nodes[i] = btree_node_alloc_replacement(r[i].b, NULL);
|
|
- if (IS_ERR_OR_NULL(new_nodes[i]))
|
|
+ if (IS_ERR(new_nodes[i]))
|
|
goto out_nocoalesce;
|
|
}
|
|
|
|
@@ -1536,7 +1536,7 @@ out_nocoalesce:
|
|
bch_keylist_free(&keylist);
|
|
|
|
for (i = 0; i < nodes; i++)
|
|
- if (!IS_ERR_OR_NULL(new_nodes[i])) {
|
|
+ if (!IS_ERR(new_nodes[i])) {
|
|
btree_node_free(new_nodes[i]);
|
|
rw_unlock(true, new_nodes[i]);
|
|
}
|
|
@@ -1718,7 +1718,7 @@ static int bch_btree_gc_root(struct btree *b, struct btree_op *op,
|
|
if (should_rewrite) {
|
|
n = btree_node_alloc_replacement(b, NULL);
|
|
|
|
- if (!IS_ERR_OR_NULL(n)) {
|
|
+ if (!IS_ERR(n)) {
|
|
bch_btree_node_write_sync(n);
|
|
|
|
bch_btree_set_root(n);
|
|
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
|
|
index efdf6ce0443ea..70e46e0d2f1ac 100644
|
|
--- a/drivers/md/bcache/super.c
|
|
+++ b/drivers/md/bcache/super.c
|
|
@@ -1633,7 +1633,7 @@ static void cache_set_flush(struct closure *cl)
|
|
if (!IS_ERR_OR_NULL(c->gc_thread))
|
|
kthread_stop(c->gc_thread);
|
|
|
|
- if (!IS_ERR_OR_NULL(c->root))
|
|
+ if (!IS_ERR(c->root))
|
|
list_add(&c->root->list, &c->btree_cache);
|
|
|
|
/*
|
|
@@ -2000,7 +2000,7 @@ static int run_cache_set(struct cache_set *c)
|
|
|
|
err = "cannot allocate new btree root";
|
|
c->root = __bch_btree_node_alloc(c, NULL, 0, true, NULL);
|
|
- if (IS_ERR_OR_NULL(c->root))
|
|
+ if (IS_ERR(c->root))
|
|
goto err;
|
|
|
|
mutex_lock(&c->root->write_lock);
|
|
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
|
|
index 0545cdccf6369..bea8265ce9b8e 100644
|
|
--- a/drivers/md/md-bitmap.c
|
|
+++ b/drivers/md/md-bitmap.c
|
|
@@ -54,14 +54,7 @@ __acquires(bitmap->lock)
|
|
{
|
|
unsigned char *mappage;
|
|
|
|
- if (page >= bitmap->pages) {
|
|
- /* This can happen if bitmap_start_sync goes beyond
|
|
- * End-of-device while looking for a whole page.
|
|
- * It is harmless.
|
|
- */
|
|
- return -EINVAL;
|
|
- }
|
|
-
|
|
+ WARN_ON_ONCE(page >= bitmap->pages);
|
|
if (bitmap->bp[page].hijacked) /* it's hijacked, don't try to alloc */
|
|
return 0;
|
|
|
|
@@ -1369,6 +1362,14 @@ __acquires(bitmap->lock)
|
|
sector_t csize;
|
|
int err;
|
|
|
|
+ if (page >= bitmap->pages) {
|
|
+ /*
|
|
+ * This can happen if bitmap_start_sync goes beyond
|
|
+ * End-of-device while looking for a whole page or
|
|
+ * user set a huge number to sysfs bitmap_set_bits.
|
|
+ */
|
|
+ return NULL;
|
|
+ }
|
|
err = md_bitmap_checkpage(bitmap, page, create, 0);
|
|
|
|
if (bitmap->bp[page].hijacked ||
|
|
diff --git a/drivers/md/md.c b/drivers/md/md.c
|
|
index 64558991ce0a0..a006f3a9554bf 100644
|
|
--- a/drivers/md/md.c
|
|
+++ b/drivers/md/md.c
|
|
@@ -3766,8 +3766,9 @@ int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale)
|
|
static ssize_t
|
|
safe_delay_show(struct mddev *mddev, char *page)
|
|
{
|
|
- int msec = (mddev->safemode_delay*1000)/HZ;
|
|
- return sprintf(page, "%d.%03d\n", msec/1000, msec%1000);
|
|
+ unsigned int msec = ((unsigned long)mddev->safemode_delay*1000)/HZ;
|
|
+
|
|
+ return sprintf(page, "%u.%03u\n", msec/1000, msec%1000);
|
|
}
|
|
static ssize_t
|
|
safe_delay_store(struct mddev *mddev, const char *cbuf, size_t len)
|
|
@@ -3779,7 +3780,7 @@ safe_delay_store(struct mddev *mddev, const char *cbuf, size_t len)
|
|
return -EINVAL;
|
|
}
|
|
|
|
- if (strict_strtoul_scaled(cbuf, &msec, 3) < 0)
|
|
+ if (strict_strtoul_scaled(cbuf, &msec, 3) < 0 || msec > UINT_MAX / HZ)
|
|
return -EINVAL;
|
|
if (msec == 0)
|
|
mddev->safemode_delay = 0;
|
|
@@ -4440,6 +4441,8 @@ max_corrected_read_errors_store(struct mddev *mddev, const char *buf, size_t len
|
|
rv = kstrtouint(buf, 10, &n);
|
|
if (rv < 0)
|
|
return rv;
|
|
+ if (n > INT_MAX)
|
|
+ return -EINVAL;
|
|
atomic_set(&mddev->max_corr_read_errors, n);
|
|
return len;
|
|
}
|
|
@@ -4740,11 +4743,21 @@ action_store(struct mddev *mddev, const char *page, size_t len)
|
|
return -EINVAL;
|
|
err = mddev_lock(mddev);
|
|
if (!err) {
|
|
- if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
|
|
+ if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
|
|
err = -EBUSY;
|
|
- else {
|
|
+ } else if (mddev->reshape_position == MaxSector ||
|
|
+ mddev->pers->check_reshape == NULL ||
|
|
+ mddev->pers->check_reshape(mddev)) {
|
|
clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
|
|
err = mddev->pers->start_reshape(mddev);
|
|
+ } else {
|
|
+ /*
|
|
+ * If reshape is still in progress, and
|
|
+ * md_check_recovery() can continue to reshape,
|
|
+ * don't restart reshape because data can be
|
|
+ * corrupted for raid456.
|
|
+ */
|
|
+ clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
|
|
}
|
|
mddev_unlock(mddev);
|
|
}
|
|
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
|
|
index 8cbaa99e5b98e..7f80e86459b19 100644
|
|
--- a/drivers/md/raid0.c
|
|
+++ b/drivers/md/raid0.c
|
|
@@ -289,6 +289,18 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
|
|
goto abort;
|
|
}
|
|
|
|
+ if (conf->layout == RAID0_ORIG_LAYOUT) {
|
|
+ for (i = 1; i < conf->nr_strip_zones; i++) {
|
|
+ sector_t first_sector = conf->strip_zone[i-1].zone_end;
|
|
+
|
|
+ sector_div(first_sector, mddev->chunk_sectors);
|
|
+ zone = conf->strip_zone + i;
|
|
+ /* disk_shift is first disk index used in the zone */
|
|
+ zone->disk_shift = sector_div(first_sector,
|
|
+ zone->nb_dev);
|
|
+ }
|
|
+ }
|
|
+
|
|
pr_debug("md/raid0:%s: done.\n", mdname(mddev));
|
|
*private_conf = conf;
|
|
|
|
@@ -475,6 +487,20 @@ static inline int is_io_in_chunk_boundary(struct mddev *mddev,
|
|
}
|
|
}
|
|
|
|
+/*
|
|
+ * Convert disk_index to the disk order in which it is read/written.
|
|
+ * For example, if we have 4 disks, they are numbered 0,1,2,3. If we
|
|
+ * write the disks starting at disk 3, then the read/write order would
|
|
+ * be disk 3, then 0, then 1, and then disk 2 and we want map_disk_shift()
|
|
+ * to map the disks as follows 0,1,2,3 => 1,2,3,0. So disk 0 would map
|
|
+ * to 1, 1 to 2, 2 to 3, and 3 to 0. That way we can compare disks in
|
|
+ * that 'output' space to understand the read/write disk ordering.
|
|
+ */
|
|
+static int map_disk_shift(int disk_index, int num_disks, int disk_shift)
|
|
+{
|
|
+ return ((disk_index + num_disks - disk_shift) % num_disks);
|
|
+}
|
|
+
|
|
static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
|
|
{
|
|
struct r0conf *conf = mddev->private;
|
|
@@ -488,7 +514,9 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
|
|
sector_t end_disk_offset;
|
|
unsigned int end_disk_index;
|
|
unsigned int disk;
|
|
+ sector_t orig_start, orig_end;
|
|
|
|
+ orig_start = start;
|
|
zone = find_zone(conf, &start);
|
|
|
|
if (bio_end_sector(bio) > zone->zone_end) {
|
|
@@ -502,6 +530,7 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
|
|
} else
|
|
end = bio_end_sector(bio);
|
|
|
|
+ orig_end = end;
|
|
if (zone != conf->strip_zone)
|
|
end = end - zone[-1].zone_end;
|
|
|
|
@@ -513,13 +542,26 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
|
|
last_stripe_index = end;
|
|
sector_div(last_stripe_index, stripe_size);
|
|
|
|
- start_disk_index = (int)(start - first_stripe_index * stripe_size) /
|
|
- mddev->chunk_sectors;
|
|
+ /* In the first zone the original and alternate layouts are the same */
|
|
+ if ((conf->layout == RAID0_ORIG_LAYOUT) && (zone != conf->strip_zone)) {
|
|
+ sector_div(orig_start, mddev->chunk_sectors);
|
|
+ start_disk_index = sector_div(orig_start, zone->nb_dev);
|
|
+ start_disk_index = map_disk_shift(start_disk_index,
|
|
+ zone->nb_dev,
|
|
+ zone->disk_shift);
|
|
+ sector_div(orig_end, mddev->chunk_sectors);
|
|
+ end_disk_index = sector_div(orig_end, zone->nb_dev);
|
|
+ end_disk_index = map_disk_shift(end_disk_index,
|
|
+ zone->nb_dev, zone->disk_shift);
|
|
+ } else {
|
|
+ start_disk_index = (int)(start - first_stripe_index * stripe_size) /
|
|
+ mddev->chunk_sectors;
|
|
+ end_disk_index = (int)(end - last_stripe_index * stripe_size) /
|
|
+ mddev->chunk_sectors;
|
|
+ }
|
|
start_disk_offset = ((int)(start - first_stripe_index * stripe_size) %
|
|
mddev->chunk_sectors) +
|
|
first_stripe_index * mddev->chunk_sectors;
|
|
- end_disk_index = (int)(end - last_stripe_index * stripe_size) /
|
|
- mddev->chunk_sectors;
|
|
end_disk_offset = ((int)(end - last_stripe_index * stripe_size) %
|
|
mddev->chunk_sectors) +
|
|
last_stripe_index * mddev->chunk_sectors;
|
|
@@ -528,18 +570,22 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
|
|
sector_t dev_start, dev_end;
|
|
struct bio *discard_bio = NULL;
|
|
struct md_rdev *rdev;
|
|
+ int compare_disk;
|
|
+
|
|
+ compare_disk = map_disk_shift(disk, zone->nb_dev,
|
|
+ zone->disk_shift);
|
|
|
|
- if (disk < start_disk_index)
|
|
+ if (compare_disk < start_disk_index)
|
|
dev_start = (first_stripe_index + 1) *
|
|
mddev->chunk_sectors;
|
|
- else if (disk > start_disk_index)
|
|
+ else if (compare_disk > start_disk_index)
|
|
dev_start = first_stripe_index * mddev->chunk_sectors;
|
|
else
|
|
dev_start = start_disk_offset;
|
|
|
|
- if (disk < end_disk_index)
|
|
+ if (compare_disk < end_disk_index)
|
|
dev_end = (last_stripe_index + 1) * mddev->chunk_sectors;
|
|
- else if (disk > end_disk_index)
|
|
+ else if (compare_disk > end_disk_index)
|
|
dev_end = last_stripe_index * mddev->chunk_sectors;
|
|
else
|
|
dev_end = end_disk_offset;
|
|
diff --git a/drivers/md/raid0.h b/drivers/md/raid0.h
|
|
index 3816e5477db1e..8cc761ca74230 100644
|
|
--- a/drivers/md/raid0.h
|
|
+++ b/drivers/md/raid0.h
|
|
@@ -6,6 +6,7 @@ struct strip_zone {
|
|
sector_t zone_end; /* Start of the next zone (in sectors) */
|
|
sector_t dev_start; /* Zone offset in real dev (in sectors) */
|
|
int nb_dev; /* # of devices attached to the zone */
|
|
+ int disk_shift; /* start disk for the original layout */
|
|
};
|
|
|
|
/* Linux 3.14 (20d0189b101) made an unintended change to
|
|
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
|
|
index aee429ab114a5..3983d5c8b5cd2 100644
|
|
--- a/drivers/md/raid10.c
|
|
+++ b/drivers/md/raid10.c
|
|
@@ -751,8 +751,16 @@ static struct md_rdev *read_balance(struct r10conf *conf,
|
|
disk = r10_bio->devs[slot].devnum;
|
|
rdev = rcu_dereference(conf->mirrors[disk].replacement);
|
|
if (rdev == NULL || test_bit(Faulty, &rdev->flags) ||
|
|
- r10_bio->devs[slot].addr + sectors > rdev->recovery_offset)
|
|
+ r10_bio->devs[slot].addr + sectors >
|
|
+ rdev->recovery_offset) {
|
|
+ /*
|
|
+ * Read replacement first to prevent reading both rdev
|
|
+ * and replacement as NULL during replacement replace
|
|
+ * rdev.
|
|
+ */
|
|
+ smp_mb();
|
|
rdev = rcu_dereference(conf->mirrors[disk].rdev);
|
|
+ }
|
|
if (rdev == NULL ||
|
|
test_bit(Faulty, &rdev->flags))
|
|
continue;
|
|
@@ -919,6 +927,7 @@ static void flush_pending_writes(struct r10conf *conf)
|
|
else
|
|
generic_make_request(bio);
|
|
bio = next;
|
|
+ cond_resched();
|
|
}
|
|
blk_finish_plug(&plug);
|
|
} else
|
|
@@ -1104,6 +1113,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
|
|
else
|
|
generic_make_request(bio);
|
|
bio = next;
|
|
+ cond_resched();
|
|
}
|
|
kfree(plug);
|
|
}
|
|
@@ -1363,9 +1373,15 @@ retry_write:
|
|
|
|
for (i = 0; i < conf->copies; i++) {
|
|
int d = r10_bio->devs[i].devnum;
|
|
- struct md_rdev *rdev = rcu_dereference(conf->mirrors[d].rdev);
|
|
- struct md_rdev *rrdev = rcu_dereference(
|
|
- conf->mirrors[d].replacement);
|
|
+ struct md_rdev *rdev, *rrdev;
|
|
+
|
|
+ rrdev = rcu_dereference(conf->mirrors[d].replacement);
|
|
+ /*
|
|
+ * Read replacement first to prevent reading both rdev and
|
|
+ * replacement as NULL during replacement replace rdev.
|
|
+ */
|
|
+ smp_mb();
|
|
+ rdev = rcu_dereference(conf->mirrors[d].rdev);
|
|
if (rdev == rrdev)
|
|
rrdev = NULL;
|
|
if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
|
|
@@ -3054,7 +3070,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
|
|
int must_sync;
|
|
int any_working;
|
|
int need_recover = 0;
|
|
- int need_replace = 0;
|
|
struct raid10_info *mirror = &conf->mirrors[i];
|
|
struct md_rdev *mrdev, *mreplace;
|
|
|
|
@@ -3066,11 +3081,10 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
|
|
!test_bit(Faulty, &mrdev->flags) &&
|
|
!test_bit(In_sync, &mrdev->flags))
|
|
need_recover = 1;
|
|
- if (mreplace != NULL &&
|
|
- !test_bit(Faulty, &mreplace->flags))
|
|
- need_replace = 1;
|
|
+ if (mreplace && test_bit(Faulty, &mreplace->flags))
|
|
+ mreplace = NULL;
|
|
|
|
- if (!need_recover && !need_replace) {
|
|
+ if (!need_recover && !mreplace) {
|
|
rcu_read_unlock();
|
|
continue;
|
|
}
|
|
@@ -3086,8 +3100,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
|
|
rcu_read_unlock();
|
|
continue;
|
|
}
|
|
- if (mreplace && test_bit(Faulty, &mreplace->flags))
|
|
- mreplace = NULL;
|
|
/* Unless we are doing a full sync, or a replacement
|
|
* we only need to recover the block if it is set in
|
|
* the bitmap
|
|
@@ -3210,11 +3222,11 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
|
|
bio = r10_bio->devs[1].repl_bio;
|
|
if (bio)
|
|
bio->bi_end_io = NULL;
|
|
- /* Note: if need_replace, then bio
|
|
+ /* Note: if replace is not NULL, then bio
|
|
* cannot be NULL as r10buf_pool_alloc will
|
|
* have allocated it.
|
|
*/
|
|
- if (!need_replace)
|
|
+ if (!mreplace)
|
|
break;
|
|
bio->bi_next = biolist;
|
|
biolist = bio;
|
|
diff --git a/drivers/media/usb/dvb-usb-v2/az6007.c b/drivers/media/usb/dvb-usb-v2/az6007.c
|
|
index 62ee09f28a0bc..7524c90f5da61 100644
|
|
--- a/drivers/media/usb/dvb-usb-v2/az6007.c
|
|
+++ b/drivers/media/usb/dvb-usb-v2/az6007.c
|
|
@@ -202,7 +202,8 @@ static int az6007_rc_query(struct dvb_usb_device *d)
|
|
unsigned code;
|
|
enum rc_proto proto;
|
|
|
|
- az6007_read(d, AZ6007_READ_IR, 0, 0, st->data, 10);
|
|
+ if (az6007_read(d, AZ6007_READ_IR, 0, 0, st->data, 10) < 0)
|
|
+ return -EIO;
|
|
|
|
if (st->data[1] == 0x44)
|
|
return 0;
|
|
diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c
|
|
index 1db232a1063b9..0358cd1043877 100644
|
|
--- a/drivers/media/usb/siano/smsusb.c
|
|
+++ b/drivers/media/usb/siano/smsusb.c
|
|
@@ -179,7 +179,8 @@ static void smsusb_stop_streaming(struct smsusb_device_t *dev)
|
|
|
|
for (i = 0; i < MAX_URBS; i++) {
|
|
usb_kill_urb(&dev->surbs[i].urb);
|
|
- cancel_work_sync(&dev->surbs[i].wq);
|
|
+ if (dev->surbs[i].wq.func)
|
|
+ cancel_work_sync(&dev->surbs[i].wq);
|
|
|
|
if (dev->surbs[i].cb) {
|
|
smscore_putbuffer(dev->coredev, dev->surbs[i].cb);
|
|
diff --git a/drivers/memory/brcmstb_dpfe.c b/drivers/memory/brcmstb_dpfe.c
|
|
index 6827ed4847507..127a9bffdbca8 100644
|
|
--- a/drivers/memory/brcmstb_dpfe.c
|
|
+++ b/drivers/memory/brcmstb_dpfe.c
|
|
@@ -398,15 +398,17 @@ static void __finalize_command(struct private_data *priv)
|
|
static int __send_command(struct private_data *priv, unsigned int cmd,
|
|
u32 result[])
|
|
{
|
|
- const u32 *msg = priv->dpfe_api->command[cmd];
|
|
void __iomem *regs = priv->regs;
|
|
unsigned int i, chksum, chksum_idx;
|
|
+ const u32 *msg;
|
|
int ret = 0;
|
|
u32 resp;
|
|
|
|
if (cmd >= DPFE_CMD_MAX)
|
|
return -1;
|
|
|
|
+ msg = priv->dpfe_api->command[cmd];
|
|
+
|
|
mutex_lock(&priv->lock);
|
|
|
|
/* Wait for DCPU to become ready */
|
|
diff --git a/drivers/memstick/host/r592.c b/drivers/memstick/host/r592.c
|
|
index dd06c18495eb6..0e37c6a5ee36c 100644
|
|
--- a/drivers/memstick/host/r592.c
|
|
+++ b/drivers/memstick/host/r592.c
|
|
@@ -44,12 +44,10 @@ static const char *tpc_names[] = {
|
|
* memstick_debug_get_tpc_name - debug helper that returns string for
|
|
* a TPC number
|
|
*/
|
|
-const char *memstick_debug_get_tpc_name(int tpc)
|
|
+static __maybe_unused const char *memstick_debug_get_tpc_name(int tpc)
|
|
{
|
|
return tpc_names[tpc-1];
|
|
}
|
|
-EXPORT_SYMBOL(memstick_debug_get_tpc_name);
|
|
-
|
|
|
|
/* Read a register*/
|
|
static inline u32 r592_read_reg(struct r592_device *dev, int address)
|
|
diff --git a/drivers/mfd/intel-lpss-acpi.c b/drivers/mfd/intel-lpss-acpi.c
|
|
index 045cbf0cbe53a..993e305a232c5 100644
|
|
--- a/drivers/mfd/intel-lpss-acpi.c
|
|
+++ b/drivers/mfd/intel-lpss-acpi.c
|
|
@@ -114,6 +114,9 @@ static int intel_lpss_acpi_probe(struct platform_device *pdev)
|
|
return -ENOMEM;
|
|
|
|
info->mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
|
+ if (!info->mem)
|
|
+ return -ENODEV;
|
|
+
|
|
info->irq = platform_get_irq(pdev, 0);
|
|
|
|
ret = intel_lpss_probe(&pdev->dev, info);
|
|
diff --git a/drivers/mfd/rt5033.c b/drivers/mfd/rt5033.c
|
|
index 48381d9bf7403..302115dabff4b 100644
|
|
--- a/drivers/mfd/rt5033.c
|
|
+++ b/drivers/mfd/rt5033.c
|
|
@@ -41,9 +41,6 @@ static const struct mfd_cell rt5033_devs[] = {
|
|
{
|
|
.name = "rt5033-charger",
|
|
.of_compatible = "richtek,rt5033-charger",
|
|
- }, {
|
|
- .name = "rt5033-battery",
|
|
- .of_compatible = "richtek,rt5033-battery",
|
|
}, {
|
|
.name = "rt5033-led",
|
|
.of_compatible = "richtek,rt5033-led",
|
|
diff --git a/drivers/mfd/stmfx.c b/drivers/mfd/stmfx.c
|
|
index 711979afd90a0..887c92342b7f1 100644
|
|
--- a/drivers/mfd/stmfx.c
|
|
+++ b/drivers/mfd/stmfx.c
|
|
@@ -389,7 +389,7 @@ static int stmfx_chip_init(struct i2c_client *client)
|
|
|
|
err:
|
|
if (stmfx->vdd)
|
|
- return regulator_disable(stmfx->vdd);
|
|
+ regulator_disable(stmfx->vdd);
|
|
|
|
return ret;
|
|
}
|
|
diff --git a/drivers/mfd/stmpe.c b/drivers/mfd/stmpe.c
|
|
index 508349399f8af..7f758fb60c1fa 100644
|
|
--- a/drivers/mfd/stmpe.c
|
|
+++ b/drivers/mfd/stmpe.c
|
|
@@ -1494,9 +1494,9 @@ int stmpe_probe(struct stmpe_client_info *ci, enum stmpe_partnum partnum)
|
|
|
|
int stmpe_remove(struct stmpe *stmpe)
|
|
{
|
|
- if (!IS_ERR(stmpe->vio))
|
|
+ if (!IS_ERR(stmpe->vio) && regulator_is_enabled(stmpe->vio))
|
|
regulator_disable(stmpe->vio);
|
|
- if (!IS_ERR(stmpe->vcc))
|
|
+ if (!IS_ERR(stmpe->vcc) && regulator_is_enabled(stmpe->vcc))
|
|
regulator_disable(stmpe->vcc);
|
|
|
|
__stmpe_disable(stmpe, STMPE_BLOCK_ADC);
|
|
diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
|
|
index 10fec109bbd33..9bbbeec4cd02c 100644
|
|
--- a/drivers/misc/fastrpc.c
|
|
+++ b/drivers/misc/fastrpc.c
|
|
@@ -1074,7 +1074,7 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl,
|
|
|
|
sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE, 4, 0);
|
|
if (init.attrs)
|
|
- sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_ATTR, 6, 0);
|
|
+ sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_ATTR, 4, 0);
|
|
|
|
err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE,
|
|
sc, args);
|
|
diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
|
|
index 1154f0435b0ac..478d6118550e5 100644
|
|
--- a/drivers/misc/pci_endpoint_test.c
|
|
+++ b/drivers/misc/pci_endpoint_test.c
|
|
@@ -590,6 +590,10 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
|
|
struct pci_dev *pdev = test->pdev;
|
|
|
|
mutex_lock(&test->mutex);
|
|
+
|
|
+ reinit_completion(&test->irq_raised);
|
|
+ test->last_irq = -ENODATA;
|
|
+
|
|
switch (cmd) {
|
|
case PCITEST_BAR:
|
|
bar = arg;
|
|
@@ -774,6 +778,9 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
|
|
if (id < 0)
|
|
return;
|
|
|
|
+ pci_endpoint_test_release_irq(test);
|
|
+ pci_endpoint_test_free_irq_vectors(test);
|
|
+
|
|
misc_deregister(&test->miscdev);
|
|
kfree(misc_device->name);
|
|
ida_simple_remove(&pci_endpoint_test_ida, id);
|
|
@@ -782,9 +789,6 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
|
|
pci_iounmap(pdev, test->bar[bar]);
|
|
}
|
|
|
|
- pci_endpoint_test_release_irq(test);
|
|
- pci_endpoint_test_free_irq_vectors(test);
|
|
-
|
|
pci_release_regions(pdev);
|
|
pci_disable_device(pdev);
|
|
}
|
|
diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
|
|
index 3dba15bccce25..9a253324c95a1 100644
|
|
--- a/drivers/mmc/core/quirks.h
|
|
+++ b/drivers/mmc/core/quirks.h
|
|
@@ -90,6 +90,20 @@ static const struct mmc_fixup mmc_blk_fixups[] = {
|
|
MMC_FIXUP("VZL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc,
|
|
MMC_QUIRK_SEC_ERASE_TRIM_BROKEN),
|
|
|
|
+ /*
|
|
+ * Kingston EMMC04G-M627 advertises TRIM but it does not seems to
|
|
+ * support being used to offload WRITE_ZEROES.
|
|
+ */
|
|
+ MMC_FIXUP("M62704", CID_MANFID_KINGSTON, 0x0100, add_quirk_mmc,
|
|
+ MMC_QUIRK_TRIM_BROKEN),
|
|
+
|
|
+ /*
|
|
+ * Micron MTFC4GACAJCN-1M advertises TRIM but it does not seems to
|
|
+ * support being used to offload WRITE_ZEROES.
|
|
+ */
|
|
+ MMC_FIXUP("Q2J54A", CID_MANFID_MICRON, 0x014e, add_quirk_mmc,
|
|
+ MMC_QUIRK_TRIM_BROKEN),
|
|
+
|
|
/*
|
|
* On Some Kingston eMMCs, performing trim can result in
|
|
* unrecoverable data conrruption occasionally due to a firmware bug.
|
|
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
|
|
index ae3cbf792d7b1..8d97451bbd289 100644
|
|
--- a/drivers/mmc/host/sdhci.c
|
|
+++ b/drivers/mmc/host/sdhci.c
|
|
@@ -1104,6 +1104,8 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
|
|
}
|
|
}
|
|
|
|
+ sdhci_config_dma(host);
|
|
+
|
|
if (host->flags & SDHCI_REQ_USE_DMA) {
|
|
int sg_cnt = sdhci_pre_dma_transfer(host, data, COOKIE_MAPPED);
|
|
|
|
@@ -1123,8 +1125,6 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
|
|
}
|
|
}
|
|
|
|
- sdhci_config_dma(host);
|
|
-
|
|
if (!(host->flags & SDHCI_REQ_USE_DMA)) {
|
|
int flags;
|
|
|
|
diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
|
|
index 312738124ea10..8339c020c1a13 100644
|
|
--- a/drivers/mtd/nand/raw/meson_nand.c
|
|
+++ b/drivers/mtd/nand/raw/meson_nand.c
|
|
@@ -72,6 +72,7 @@
|
|
#define GENCMDIADDRH(aih, addr) ((aih) | (((addr) >> 16) & 0xffff))
|
|
|
|
#define DMA_DIR(dir) ((dir) ? NFC_CMD_N2M : NFC_CMD_M2N)
|
|
+#define DMA_ADDR_ALIGN 8
|
|
|
|
#define ECC_CHECK_RETURN_FF (-1)
|
|
|
|
@@ -838,6 +839,9 @@ static int meson_nfc_read_oob(struct nand_chip *nand, int page)
|
|
|
|
static bool meson_nfc_is_buffer_dma_safe(const void *buffer)
|
|
{
|
|
+ if ((uintptr_t)buffer % DMA_ADDR_ALIGN)
|
|
+ return false;
|
|
+
|
|
if (virt_addr_valid(buffer) && (!object_is_on_stack(buffer)))
|
|
return true;
|
|
return false;
|
|
diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
|
|
index 193722334d931..89a63fdbe0e39 100644
|
|
--- a/drivers/net/ethernet/broadcom/bgmac.c
|
|
+++ b/drivers/net/ethernet/broadcom/bgmac.c
|
|
@@ -890,13 +890,13 @@ static void bgmac_chip_reset_idm_config(struct bgmac *bgmac)
|
|
|
|
if (iost & BGMAC_BCMA_IOST_ATTACHED) {
|
|
flags = BGMAC_BCMA_IOCTL_SW_CLKEN;
|
|
- if (!bgmac->has_robosw)
|
|
+ if (bgmac->in_init || !bgmac->has_robosw)
|
|
flags |= BGMAC_BCMA_IOCTL_SW_RESET;
|
|
}
|
|
bgmac_clk_enable(bgmac, flags);
|
|
}
|
|
|
|
- if (iost & BGMAC_BCMA_IOST_ATTACHED && !bgmac->has_robosw)
|
|
+ if (iost & BGMAC_BCMA_IOST_ATTACHED && (bgmac->in_init || !bgmac->has_robosw))
|
|
bgmac_idm_write(bgmac, BCMA_IOCTL,
|
|
bgmac_idm_read(bgmac, BCMA_IOCTL) &
|
|
~BGMAC_BCMA_IOCTL_SW_RESET);
|
|
@@ -1489,6 +1489,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
|
|
struct net_device *net_dev = bgmac->net_dev;
|
|
int err;
|
|
|
|
+ bgmac->in_init = true;
|
|
+
|
|
bgmac_chip_intrs_off(bgmac);
|
|
|
|
net_dev->irq = bgmac->irq;
|
|
@@ -1538,6 +1540,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
|
|
net_dev->hw_features = net_dev->features;
|
|
net_dev->vlan_features = net_dev->features;
|
|
|
|
+ bgmac->in_init = false;
|
|
+
|
|
err = register_netdev(bgmac->net_dev);
|
|
if (err) {
|
|
dev_err(bgmac->dev, "Cannot register net device\n");
|
|
diff --git a/drivers/net/ethernet/broadcom/bgmac.h b/drivers/net/ethernet/broadcom/bgmac.h
|
|
index 40d02fec27472..76930b8353d60 100644
|
|
--- a/drivers/net/ethernet/broadcom/bgmac.h
|
|
+++ b/drivers/net/ethernet/broadcom/bgmac.h
|
|
@@ -511,6 +511,8 @@ struct bgmac {
|
|
int irq;
|
|
u32 int_mask;
|
|
|
|
+ bool in_init;
|
|
+
|
|
/* Current MAC state */
|
|
int mac_speed;
|
|
int mac_duplex;
|
|
diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
|
|
index ce569b7d3b353..53495d39cc9c5 100644
|
|
--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
|
|
+++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
|
|
@@ -618,5 +618,7 @@ void bcmgenet_mii_exit(struct net_device *dev)
|
|
if (of_phy_is_fixed_link(dn))
|
|
of_phy_deregister_fixed_link(dn);
|
|
of_node_put(priv->phy_dn);
|
|
+ clk_prepare_enable(priv->clk);
|
|
platform_device_unregister(priv->mii_pdev);
|
|
+ clk_disable_unprepare(priv->clk);
|
|
}
|
|
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
|
|
index d0cd86af29d9f..b16517d162cfd 100644
|
|
--- a/drivers/net/ethernet/broadcom/tg3.c
|
|
+++ b/drivers/net/ethernet/broadcom/tg3.c
|
|
@@ -230,6 +230,7 @@ MODULE_DESCRIPTION("Broadcom Tigon3 ethernet driver");
|
|
MODULE_LICENSE("GPL");
|
|
MODULE_VERSION(DRV_MODULE_VERSION);
|
|
MODULE_FIRMWARE(FIRMWARE_TG3);
|
|
+MODULE_FIRMWARE(FIRMWARE_TG357766);
|
|
MODULE_FIRMWARE(FIRMWARE_TG3TSO);
|
|
MODULE_FIRMWARE(FIRMWARE_TG3TSO5);
|
|
|
|
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
|
|
index 838cd7881f2f7..9cf556fedc704 100644
|
|
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
|
|
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
|
|
@@ -1389,19 +1389,16 @@ static int iavf_alloc_q_vectors(struct iavf_adapter *adapter)
|
|
static void iavf_free_q_vectors(struct iavf_adapter *adapter)
|
|
{
|
|
int q_idx, num_q_vectors;
|
|
- int napi_vectors;
|
|
|
|
if (!adapter->q_vectors)
|
|
return;
|
|
|
|
num_q_vectors = adapter->num_msix_vectors - NONQ_VECS;
|
|
- napi_vectors = adapter->num_active_queues;
|
|
|
|
for (q_idx = 0; q_idx < num_q_vectors; q_idx++) {
|
|
struct iavf_q_vector *q_vector = &adapter->q_vectors[q_idx];
|
|
|
|
- if (q_idx < napi_vectors)
|
|
- netif_napi_del(&q_vector->napi);
|
|
+ netif_napi_del(&q_vector->napi);
|
|
}
|
|
kfree(adapter->q_vectors);
|
|
adapter->q_vectors = NULL;
|
|
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
|
|
index 00d66a6e5c6e5..8c6c0d9c7f766 100644
|
|
--- a/drivers/net/ethernet/intel/igb/igb_main.c
|
|
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
|
|
@@ -9028,6 +9028,11 @@ static pci_ers_result_t igb_io_error_detected(struct pci_dev *pdev,
|
|
struct net_device *netdev = pci_get_drvdata(pdev);
|
|
struct igb_adapter *adapter = netdev_priv(netdev);
|
|
|
|
+ if (state == pci_channel_io_normal) {
|
|
+ dev_warn(&pdev->dev, "Non-correctable non-fatal error reported.\n");
|
|
+ return PCI_ERS_RESULT_CAN_RECOVER;
|
|
+ }
|
|
+
|
|
netif_device_detach(netdev);
|
|
|
|
if (state == pci_channel_io_perm_failure)
|
|
diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c
|
|
index cbcb8611ab50d..0a4e7f5f292ac 100644
|
|
--- a/drivers/net/ethernet/intel/igc/igc_ethtool.c
|
|
+++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c
|
|
@@ -1668,6 +1668,8 @@ static int igc_get_link_ksettings(struct net_device *netdev,
|
|
/* twisted pair */
|
|
cmd->base.port = PORT_TP;
|
|
cmd->base.phy_address = hw->phy.addr;
|
|
+ ethtool_link_ksettings_add_link_mode(cmd, supported, TP);
|
|
+ ethtool_link_ksettings_add_link_mode(cmd, advertising, TP);
|
|
|
|
/* advertising link modes */
|
|
if (hw->phy.autoneg_advertised & ADVERTISE_10_HALF)
|
|
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
|
|
index b8297a63a7fd2..3839ca8bdf6dd 100644
|
|
--- a/drivers/net/ethernet/intel/igc/igc_main.c
|
|
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
|
|
@@ -610,7 +610,6 @@ static void igc_configure_tx_ring(struct igc_adapter *adapter,
|
|
/* disable the queue */
|
|
wr32(IGC_TXDCTL(reg_idx), 0);
|
|
wrfl();
|
|
- mdelay(10);
|
|
|
|
wr32(IGC_TDLEN(reg_idx),
|
|
ring->count * sizeof(union igc_adv_tx_desc));
|
|
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
|
|
index 977c2961aa2c2..110221a16bf6d 100644
|
|
--- a/drivers/net/ethernet/marvell/mvneta.c
|
|
+++ b/drivers/net/ethernet/marvell/mvneta.c
|
|
@@ -1422,7 +1422,7 @@ static void mvneta_defaults_set(struct mvneta_port *pp)
|
|
*/
|
|
if (txq_number == 1)
|
|
txq_map = (cpu == pp->rxq_def) ?
|
|
- MVNETA_CPU_TXQ_ACCESS(1) : 0;
|
|
+ MVNETA_CPU_TXQ_ACCESS(0) : 0;
|
|
|
|
} else {
|
|
txq_map = MVNETA_CPU_TXQ_ACCESS_ALL_MASK;
|
|
@@ -3762,7 +3762,7 @@ static void mvneta_percpu_elect(struct mvneta_port *pp)
|
|
*/
|
|
if (txq_number == 1)
|
|
txq_map = (cpu == elected_cpu) ?
|
|
- MVNETA_CPU_TXQ_ACCESS(1) : 0;
|
|
+ MVNETA_CPU_TXQ_ACCESS(0) : 0;
|
|
else
|
|
txq_map = mvreg_read(pp, MVNETA_CPU_MAP(cpu)) &
|
|
MVNETA_CPU_TXQ_ACCESS_ALL_MASK;
|
|
diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
|
|
index c69ffcfe61689..6458dbd6c631a 100644
|
|
--- a/drivers/net/ethernet/microchip/lan743x_main.c
|
|
+++ b/drivers/net/ethernet/microchip/lan743x_main.c
|
|
@@ -80,6 +80,18 @@ static int lan743x_csr_light_reset(struct lan743x_adapter *adapter)
|
|
!(data & HW_CFG_LRST_), 100000, 10000000);
|
|
}
|
|
|
|
+static int lan743x_csr_wait_for_bit_atomic(struct lan743x_adapter *adapter,
|
|
+ int offset, u32 bit_mask,
|
|
+ int target_value, int udelay_min,
|
|
+ int udelay_max, int count)
|
|
+{
|
|
+ u32 data;
|
|
+
|
|
+ return readx_poll_timeout_atomic(LAN743X_CSR_READ_OP, offset, data,
|
|
+ target_value == !!(data & bit_mask),
|
|
+ udelay_max, udelay_min * count);
|
|
+}
|
|
+
|
|
static int lan743x_csr_wait_for_bit(struct lan743x_adapter *adapter,
|
|
int offset, u32 bit_mask,
|
|
int target_value, int usleep_min,
|
|
@@ -675,8 +687,8 @@ static int lan743x_dp_write(struct lan743x_adapter *adapter,
|
|
u32 dp_sel;
|
|
int i;
|
|
|
|
- if (lan743x_csr_wait_for_bit(adapter, DP_SEL, DP_SEL_DPRDY_,
|
|
- 1, 40, 100, 100))
|
|
+ if (lan743x_csr_wait_for_bit_atomic(adapter, DP_SEL, DP_SEL_DPRDY_,
|
|
+ 1, 40, 100, 100))
|
|
return -EIO;
|
|
dp_sel = lan743x_csr_read(adapter, DP_SEL);
|
|
dp_sel &= ~DP_SEL_MASK_;
|
|
@@ -687,8 +699,9 @@ static int lan743x_dp_write(struct lan743x_adapter *adapter,
|
|
lan743x_csr_write(adapter, DP_ADDR, addr + i);
|
|
lan743x_csr_write(adapter, DP_DATA_0, buf[i]);
|
|
lan743x_csr_write(adapter, DP_CMD, DP_CMD_WRITE_);
|
|
- if (lan743x_csr_wait_for_bit(adapter, DP_SEL, DP_SEL_DPRDY_,
|
|
- 1, 40, 100, 100))
|
|
+ if (lan743x_csr_wait_for_bit_atomic(adapter, DP_SEL,
|
|
+ DP_SEL_DPRDY_,
|
|
+ 1, 40, 100, 100))
|
|
return -EIO;
|
|
}
|
|
|
|
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
|
|
index d0841836cf705..d718c1a6d5fc7 100644
|
|
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
|
|
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
|
|
@@ -167,10 +167,10 @@ static int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr)
|
|
return 0;
|
|
}
|
|
|
|
-static void ionic_intr_free(struct ionic_lif *lif, int index)
|
|
+static void ionic_intr_free(struct ionic *ionic, int index)
|
|
{
|
|
- if (index != INTR_INDEX_NOT_ASSIGNED && index < lif->ionic->nintrs)
|
|
- clear_bit(index, lif->ionic->intrs);
|
|
+ if (index != INTR_INDEX_NOT_ASSIGNED && index < ionic->nintrs)
|
|
+ clear_bit(index, ionic->intrs);
|
|
}
|
|
|
|
static int ionic_qcq_enable(struct ionic_qcq *qcq)
|
|
@@ -256,7 +256,6 @@ static int ionic_qcq_disable(struct ionic_qcq *qcq)
|
|
static void ionic_lif_qcq_deinit(struct ionic_lif *lif, struct ionic_qcq *qcq)
|
|
{
|
|
struct ionic_dev *idev = &lif->ionic->idev;
|
|
- struct device *dev = lif->ionic->dev;
|
|
|
|
if (!qcq)
|
|
return;
|
|
@@ -269,7 +268,6 @@ static void ionic_lif_qcq_deinit(struct ionic_lif *lif, struct ionic_qcq *qcq)
|
|
if (qcq->flags & IONIC_QCQ_F_INTR) {
|
|
ionic_intr_mask(idev->intr_ctrl, qcq->intr.index,
|
|
IONIC_INTR_MASK_SET);
|
|
- devm_free_irq(dev, qcq->intr.vector, &qcq->napi);
|
|
netif_napi_del(&qcq->napi);
|
|
}
|
|
|
|
@@ -287,8 +285,12 @@ static void ionic_qcq_free(struct ionic_lif *lif, struct ionic_qcq *qcq)
|
|
qcq->base = NULL;
|
|
qcq->base_pa = 0;
|
|
|
|
- if (qcq->flags & IONIC_QCQ_F_INTR)
|
|
- ionic_intr_free(lif, qcq->intr.index);
|
|
+ if (qcq->flags & IONIC_QCQ_F_INTR) {
|
|
+ irq_set_affinity_hint(qcq->intr.vector, NULL);
|
|
+ devm_free_irq(dev, qcq->intr.vector, &qcq->napi);
|
|
+ qcq->intr.vector = 0;
|
|
+ ionic_intr_free(lif->ionic, qcq->intr.index);
|
|
+ }
|
|
|
|
devm_kfree(dev, qcq->cq.info);
|
|
qcq->cq.info = NULL;
|
|
@@ -330,11 +332,6 @@ static void ionic_qcqs_free(struct ionic_lif *lif)
|
|
static void ionic_link_qcq_interrupts(struct ionic_qcq *src_qcq,
|
|
struct ionic_qcq *n_qcq)
|
|
{
|
|
- if (WARN_ON(n_qcq->flags & IONIC_QCQ_F_INTR)) {
|
|
- ionic_intr_free(n_qcq->cq.lif, n_qcq->intr.index);
|
|
- n_qcq->flags &= ~IONIC_QCQ_F_INTR;
|
|
- }
|
|
-
|
|
n_qcq->intr.vector = src_qcq->intr.vector;
|
|
n_qcq->intr.index = src_qcq->intr.index;
|
|
}
|
|
@@ -418,8 +415,15 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type,
|
|
ionic_intr_mask_assert(idev->intr_ctrl, new->intr.index,
|
|
IONIC_INTR_MASK_SET);
|
|
|
|
- new->intr.cpu = new->intr.index % num_online_cpus();
|
|
- if (cpu_online(new->intr.cpu))
|
|
+ err = ionic_request_irq(lif, new);
|
|
+ if (err) {
|
|
+ netdev_warn(lif->netdev, "irq request failed %d\n", err);
|
|
+ goto err_out_free_intr;
|
|
+ }
|
|
+
|
|
+ new->intr.cpu = cpumask_local_spread(new->intr.index,
|
|
+ dev_to_node(dev));
|
|
+ if (new->intr.cpu != -1)
|
|
cpumask_set_cpu(new->intr.cpu,
|
|
&new->intr.affinity_mask);
|
|
} else {
|
|
@@ -431,13 +435,13 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type,
|
|
if (!new->cq.info) {
|
|
netdev_err(lif->netdev, "Cannot allocate completion queue info\n");
|
|
err = -ENOMEM;
|
|
- goto err_out_free_intr;
|
|
+ goto err_out_free_irq;
|
|
}
|
|
|
|
err = ionic_cq_init(lif, &new->cq, &new->intr, num_descs, cq_desc_size);
|
|
if (err) {
|
|
netdev_err(lif->netdev, "Cannot initialize completion queue\n");
|
|
- goto err_out_free_intr;
|
|
+ goto err_out_free_irq;
|
|
}
|
|
|
|
new->base = dma_alloc_coherent(dev, total_size, &new->base_pa,
|
|
@@ -445,7 +449,7 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type,
|
|
if (!new->base) {
|
|
netdev_err(lif->netdev, "Cannot allocate queue DMA memory\n");
|
|
err = -ENOMEM;
|
|
- goto err_out_free_intr;
|
|
+ goto err_out_free_irq;
|
|
}
|
|
|
|
new->total_size = total_size;
|
|
@@ -471,8 +475,12 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type,
|
|
|
|
return 0;
|
|
|
|
+err_out_free_irq:
|
|
+ if (flags & IONIC_QCQ_F_INTR)
|
|
+ devm_free_irq(dev, new->intr.vector, &new->napi);
|
|
err_out_free_intr:
|
|
- ionic_intr_free(lif, new->intr.index);
|
|
+ if (flags & IONIC_QCQ_F_INTR)
|
|
+ ionic_intr_free(lif->ionic, new->intr.index);
|
|
err_out:
|
|
dev_err(dev, "qcq alloc of %s%d failed %d\n", name, index, err);
|
|
return err;
|
|
@@ -647,12 +655,6 @@ static int ionic_lif_rxq_init(struct ionic_lif *lif, struct ionic_qcq *qcq)
|
|
netif_napi_add(lif->netdev, &qcq->napi, ionic_rx_napi,
|
|
NAPI_POLL_WEIGHT);
|
|
|
|
- err = ionic_request_irq(lif, qcq);
|
|
- if (err) {
|
|
- netif_napi_del(&qcq->napi);
|
|
- return err;
|
|
- }
|
|
-
|
|
qcq->flags |= IONIC_QCQ_F_INITED;
|
|
|
|
ionic_debugfs_add_qcq(lif, qcq);
|
|
@@ -1870,13 +1872,6 @@ static int ionic_lif_adminq_init(struct ionic_lif *lif)
|
|
netif_napi_add(lif->netdev, &qcq->napi, ionic_adminq_napi,
|
|
NAPI_POLL_WEIGHT);
|
|
|
|
- err = ionic_request_irq(lif, qcq);
|
|
- if (err) {
|
|
- netdev_warn(lif->netdev, "adminq irq request failed %d\n", err);
|
|
- netif_napi_del(&qcq->napi);
|
|
- return err;
|
|
- }
|
|
-
|
|
napi_enable(&qcq->napi);
|
|
|
|
if (qcq->flags & IONIC_QCQ_F_INTR)
|
|
diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c
|
|
index e7c24396933e9..f17619c545ae5 100644
|
|
--- a/drivers/net/ethernet/ti/cpsw_ale.c
|
|
+++ b/drivers/net/ethernet/ti/cpsw_ale.c
|
|
@@ -60,23 +60,37 @@
|
|
|
|
static inline int cpsw_ale_get_field(u32 *ale_entry, u32 start, u32 bits)
|
|
{
|
|
- int idx;
|
|
+ int idx, idx2;
|
|
+ u32 hi_val = 0;
|
|
|
|
idx = start / 32;
|
|
+ idx2 = (start + bits - 1) / 32;
|
|
+ /* Check if bits to be fetched exceed a word */
|
|
+ if (idx != idx2) {
|
|
+ idx2 = 2 - idx2; /* flip */
|
|
+ hi_val = ale_entry[idx2] << ((idx2 * 32) - start);
|
|
+ }
|
|
start -= idx * 32;
|
|
idx = 2 - idx; /* flip */
|
|
- return (ale_entry[idx] >> start) & BITMASK(bits);
|
|
+ return (hi_val + (ale_entry[idx] >> start)) & BITMASK(bits);
|
|
}
|
|
|
|
static inline void cpsw_ale_set_field(u32 *ale_entry, u32 start, u32 bits,
|
|
u32 value)
|
|
{
|
|
- int idx;
|
|
+ int idx, idx2;
|
|
|
|
value &= BITMASK(bits);
|
|
- idx = start / 32;
|
|
+ idx = start / 32;
|
|
+ idx2 = (start + bits - 1) / 32;
|
|
+ /* Check if bits to be set exceed a word */
|
|
+ if (idx != idx2) {
|
|
+ idx2 = 2 - idx2; /* flip */
|
|
+ ale_entry[idx2] &= ~(BITMASK(bits + start - (idx2 * 32)));
|
|
+ ale_entry[idx2] |= (value >> ((idx2 * 32) - start));
|
|
+ }
|
|
start -= idx * 32;
|
|
- idx = 2 - idx; /* flip */
|
|
+ idx = 2 - idx; /* flip */
|
|
ale_entry[idx] &= ~(BITMASK(bits) << start);
|
|
ale_entry[idx] |= (value << start);
|
|
}
|
|
diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
|
|
index d0653babab923..0409afe9a53d6 100644
|
|
--- a/drivers/net/gtp.c
|
|
+++ b/drivers/net/gtp.c
|
|
@@ -297,7 +297,9 @@ static void __gtp_encap_destroy(struct sock *sk)
|
|
gtp->sk1u = NULL;
|
|
udp_sk(sk)->encap_type = 0;
|
|
rcu_assign_sk_user_data(sk, NULL);
|
|
+ release_sock(sk);
|
|
sock_put(sk);
|
|
+ return;
|
|
}
|
|
release_sock(sk);
|
|
}
|
|
diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
|
|
index 0a5b5ff597c6f..ab09d110760ec 100644
|
|
--- a/drivers/net/ipvlan/ipvlan_core.c
|
|
+++ b/drivers/net/ipvlan/ipvlan_core.c
|
|
@@ -586,7 +586,8 @@ static int ipvlan_xmit_mode_l3(struct sk_buff *skb, struct net_device *dev)
|
|
consume_skb(skb);
|
|
return NET_XMIT_DROP;
|
|
}
|
|
- return ipvlan_rcv_frame(addr, &skb, true);
|
|
+ ipvlan_rcv_frame(addr, &skb, true);
|
|
+ return NET_XMIT_SUCCESS;
|
|
}
|
|
}
|
|
out:
|
|
@@ -612,7 +613,8 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
|
|
consume_skb(skb);
|
|
return NET_XMIT_DROP;
|
|
}
|
|
- return ipvlan_rcv_frame(addr, &skb, true);
|
|
+ ipvlan_rcv_frame(addr, &skb, true);
|
|
+ return NET_XMIT_SUCCESS;
|
|
}
|
|
}
|
|
skb = skb_share_check(skb, GFP_ATOMIC);
|
|
@@ -624,7 +626,8 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
|
|
* the skb for the main-dev. At the RX side we just return
|
|
* RX_PASS for it to be processed further on the stack.
|
|
*/
|
|
- return dev_forward_skb(ipvlan->phy_dev, skb);
|
|
+ dev_forward_skb(ipvlan->phy_dev, skb);
|
|
+ return NET_XMIT_SUCCESS;
|
|
|
|
} else if (is_multicast_ether_addr(eth->h_dest)) {
|
|
skb_reset_mac_header(skb);
|
|
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_hw.c b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
|
|
index 2fe12b0de5b4f..dea8a998fb622 100644
|
|
--- a/drivers/net/wireless/ath/ath9k/ar9003_hw.c
|
|
+++ b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
|
|
@@ -1099,17 +1099,22 @@ static bool ath9k_hw_verify_hang(struct ath_hw *ah, unsigned int queue)
|
|
{
|
|
u32 dma_dbg_chain, dma_dbg_complete;
|
|
u8 dcu_chain_state, dcu_complete_state;
|
|
+ unsigned int dbg_reg, reg_offset;
|
|
int i;
|
|
|
|
- for (i = 0; i < NUM_STATUS_READS; i++) {
|
|
- if (queue < 6)
|
|
- dma_dbg_chain = REG_READ(ah, AR_DMADBG_4);
|
|
- else
|
|
- dma_dbg_chain = REG_READ(ah, AR_DMADBG_5);
|
|
+ if (queue < 6) {
|
|
+ dbg_reg = AR_DMADBG_4;
|
|
+ reg_offset = queue * 5;
|
|
+ } else {
|
|
+ dbg_reg = AR_DMADBG_5;
|
|
+ reg_offset = (queue - 6) * 5;
|
|
+ }
|
|
|
|
+ for (i = 0; i < NUM_STATUS_READS; i++) {
|
|
+ dma_dbg_chain = REG_READ(ah, dbg_reg);
|
|
dma_dbg_complete = REG_READ(ah, AR_DMADBG_6);
|
|
|
|
- dcu_chain_state = (dma_dbg_chain >> (5 * queue)) & 0x1f;
|
|
+ dcu_chain_state = (dma_dbg_chain >> reg_offset) & 0x1f;
|
|
dcu_complete_state = dma_dbg_complete & 0x3;
|
|
|
|
if ((dcu_chain_state != 0x6) || (dcu_complete_state != 0x1))
|
|
@@ -1128,6 +1133,7 @@ static bool ar9003_hw_detect_mac_hang(struct ath_hw *ah)
|
|
u8 dcu_chain_state, dcu_complete_state;
|
|
bool dcu_wait_frdone = false;
|
|
unsigned long chk_dcu = 0;
|
|
+ unsigned int reg_offset;
|
|
unsigned int i = 0;
|
|
|
|
dma_dbg_4 = REG_READ(ah, AR_DMADBG_4);
|
|
@@ -1139,12 +1145,15 @@ static bool ar9003_hw_detect_mac_hang(struct ath_hw *ah)
|
|
goto exit;
|
|
|
|
for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) {
|
|
- if (i < 6)
|
|
+ if (i < 6) {
|
|
chk_dbg = dma_dbg_4;
|
|
- else
|
|
+ reg_offset = i * 5;
|
|
+ } else {
|
|
chk_dbg = dma_dbg_5;
|
|
+ reg_offset = (i - 6) * 5;
|
|
+ }
|
|
|
|
- dcu_chain_state = (chk_dbg >> (5 * i)) & 0x1f;
|
|
+ dcu_chain_state = (chk_dbg >> reg_offset) & 0x1f;
|
|
if (dcu_chain_state == 0x6) {
|
|
dcu_wait_frdone = true;
|
|
chk_dcu |= BIT(i);
|
|
diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
|
|
index fe62ff668f757..99667aba289df 100644
|
|
--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
|
|
+++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
|
|
@@ -114,7 +114,13 @@ static void htc_process_conn_rsp(struct htc_target *target,
|
|
|
|
if (svc_rspmsg->status == HTC_SERVICE_SUCCESS) {
|
|
epid = svc_rspmsg->endpoint_id;
|
|
- if (epid < 0 || epid >= ENDPOINT_MAX)
|
|
+
|
|
+ /* Check that the received epid for the endpoint to attach
|
|
+ * a new service is valid. ENDPOINT0 can't be used here as it
|
|
+ * is already reserved for HTC_CTRL_RSVD_SVC service and thus
|
|
+ * should not be modified.
|
|
+ */
|
|
+ if (epid <= ENDPOINT0 || epid >= ENDPOINT_MAX)
|
|
return;
|
|
|
|
service_id = be16_to_cpu(svc_rspmsg->service_id);
|
|
diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
|
|
index eb5751a45f266..5968fcec11737 100644
|
|
--- a/drivers/net/wireless/ath/ath9k/main.c
|
|
+++ b/drivers/net/wireless/ath/ath9k/main.c
|
|
@@ -200,7 +200,7 @@ void ath_cancel_work(struct ath_softc *sc)
|
|
void ath_restart_work(struct ath_softc *sc)
|
|
{
|
|
ieee80211_queue_delayed_work(sc->hw, &sc->hw_check_work,
|
|
- ATH_HW_CHECK_POLL_INT);
|
|
+ msecs_to_jiffies(ATH_HW_CHECK_POLL_INT));
|
|
|
|
if (AR_SREV_9340(sc->sc_ah) || AR_SREV_9330(sc->sc_ah))
|
|
ieee80211_queue_delayed_work(sc->hw, &sc->hw_pll_work,
|
|
@@ -847,7 +847,7 @@ static bool ath9k_txq_list_has_key(struct list_head *txq_list, u32 keyix)
|
|
static bool ath9k_txq_has_key(struct ath_softc *sc, u32 keyix)
|
|
{
|
|
struct ath_hw *ah = sc->sc_ah;
|
|
- int i;
|
|
+ int i, j;
|
|
struct ath_txq *txq;
|
|
bool key_in_use = false;
|
|
|
|
@@ -865,8 +865,9 @@ static bool ath9k_txq_has_key(struct ath_softc *sc, u32 keyix)
|
|
if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) {
|
|
int idx = txq->txq_tailidx;
|
|
|
|
- while (!key_in_use &&
|
|
- !list_empty(&txq->txq_fifo[idx])) {
|
|
+ for (j = 0; !key_in_use &&
|
|
+ !list_empty(&txq->txq_fifo[idx]) &&
|
|
+ j < ATH_TXFIFO_DEPTH; j++) {
|
|
key_in_use = ath9k_txq_list_has_key(
|
|
&txq->txq_fifo[idx], keyix);
|
|
INCR(idx, ATH_TXFIFO_DEPTH);
|
|
@@ -2227,7 +2228,7 @@ void __ath9k_flush(struct ieee80211_hw *hw, u32 queues, bool drop,
|
|
}
|
|
|
|
ieee80211_queue_delayed_work(hw, &sc->hw_check_work,
|
|
- ATH_HW_CHECK_POLL_INT);
|
|
+ msecs_to_jiffies(ATH_HW_CHECK_POLL_INT));
|
|
}
|
|
|
|
static bool ath9k_tx_frames_pending(struct ieee80211_hw *hw)
|
|
diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
|
|
index deb22b8c2065f..ef861b19fd477 100644
|
|
--- a/drivers/net/wireless/ath/ath9k/wmi.c
|
|
+++ b/drivers/net/wireless/ath/ath9k/wmi.c
|
|
@@ -218,6 +218,10 @@ static void ath9k_wmi_ctrl_rx(void *priv, struct sk_buff *skb,
|
|
if (unlikely(wmi->stopped))
|
|
goto free_skb;
|
|
|
|
+ /* Validate the obtained SKB. */
|
|
+ if (unlikely(skb->len < sizeof(struct wmi_cmd_hdr)))
|
|
+ goto free_skb;
|
|
+
|
|
hdr = (struct wmi_cmd_hdr *) skb->data;
|
|
cmd_id = be16_to_cpu(hdr->command_id);
|
|
|
|
diff --git a/drivers/net/wireless/atmel/atmel_cs.c b/drivers/net/wireless/atmel/atmel_cs.c
|
|
index 7afc9c5329fb1..f5fa1a95b0c15 100644
|
|
--- a/drivers/net/wireless/atmel/atmel_cs.c
|
|
+++ b/drivers/net/wireless/atmel/atmel_cs.c
|
|
@@ -73,6 +73,7 @@ struct local_info {
|
|
static int atmel_probe(struct pcmcia_device *p_dev)
|
|
{
|
|
struct local_info *local;
|
|
+ int ret;
|
|
|
|
dev_dbg(&p_dev->dev, "atmel_attach()\n");
|
|
|
|
@@ -83,8 +84,16 @@ static int atmel_probe(struct pcmcia_device *p_dev)
|
|
|
|
p_dev->priv = local;
|
|
|
|
- return atmel_config(p_dev);
|
|
-} /* atmel_attach */
|
|
+ ret = atmel_config(p_dev);
|
|
+ if (ret)
|
|
+ goto err_free_priv;
|
|
+
|
|
+ return 0;
|
|
+
|
|
+err_free_priv:
|
|
+ kfree(p_dev->priv);
|
|
+ return ret;
|
|
+}
|
|
|
|
static void atmel_detach(struct pcmcia_device *link)
|
|
{
|
|
diff --git a/drivers/net/wireless/cisco/airo.c b/drivers/net/wireless/cisco/airo.c
|
|
index da0d3834b5f01..ebf0d3072290e 100644
|
|
--- a/drivers/net/wireless/cisco/airo.c
|
|
+++ b/drivers/net/wireless/cisco/airo.c
|
|
@@ -6104,8 +6104,11 @@ static int airo_get_rate(struct net_device *dev,
|
|
{
|
|
struct airo_info *local = dev->ml_priv;
|
|
StatusRid status_rid; /* Card status info */
|
|
+ int ret;
|
|
|
|
- readStatusRid(local, &status_rid, 1);
|
|
+ ret = readStatusRid(local, &status_rid, 1);
|
|
+ if (ret)
|
|
+ return -EBUSY;
|
|
|
|
vwrq->value = le16_to_cpu(status_rid.currentXmitRate) * 500000;
|
|
/* If more than one rate, set auto */
|
|
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
|
|
index 5973eecbc0378..18c5975d7c037 100644
|
|
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
|
|
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
|
|
@@ -1167,8 +1167,11 @@ static void iwl_mvm_queue_state_change(struct iwl_op_mode *op_mode,
|
|
mvmtxq = iwl_mvm_txq_from_mac80211(txq);
|
|
mvmtxq->stopped = !start;
|
|
|
|
- if (start && mvmsta->sta_state != IEEE80211_STA_NOTEXIST)
|
|
+ if (start && mvmsta->sta_state != IEEE80211_STA_NOTEXIST) {
|
|
+ local_bh_disable();
|
|
iwl_mvm_mac_itxq_xmit(mvm->hw, txq);
|
|
+ local_bh_enable();
|
|
+ }
|
|
}
|
|
|
|
out:
|
|
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
|
|
index a3255100e3fee..7befb92b5159c 100644
|
|
--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
|
|
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
|
|
@@ -2557,7 +2557,7 @@ int iwl_mvm_sta_rx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
|
|
}
|
|
|
|
if (iwl_mvm_has_new_rx_api(mvm) && start) {
|
|
- u16 reorder_buf_size = buf_size * sizeof(baid_data->entries[0]);
|
|
+ u32 reorder_buf_size = buf_size * sizeof(baid_data->entries[0]);
|
|
|
|
/* sparse doesn't like the __align() so don't check */
|
|
#ifndef __CHECKER__
|
|
diff --git a/drivers/net/wireless/intersil/orinoco/orinoco_cs.c b/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
|
|
index a956f965a1e5e..03bfd2482656c 100644
|
|
--- a/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
|
|
+++ b/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
|
|
@@ -96,6 +96,7 @@ orinoco_cs_probe(struct pcmcia_device *link)
|
|
{
|
|
struct orinoco_private *priv;
|
|
struct orinoco_pccard *card;
|
|
+ int ret;
|
|
|
|
priv = alloc_orinocodev(sizeof(*card), &link->dev,
|
|
orinoco_cs_hard_reset, NULL);
|
|
@@ -107,8 +108,16 @@ orinoco_cs_probe(struct pcmcia_device *link)
|
|
card->p_dev = link;
|
|
link->priv = priv;
|
|
|
|
- return orinoco_cs_config(link);
|
|
-} /* orinoco_cs_attach */
|
|
+ ret = orinoco_cs_config(link);
|
|
+ if (ret)
|
|
+ goto err_free_orinocodev;
|
|
+
|
|
+ return 0;
|
|
+
|
|
+err_free_orinocodev:
|
|
+ free_orinocodev(priv);
|
|
+ return ret;
|
|
+}
|
|
|
|
static void orinoco_cs_detach(struct pcmcia_device *link)
|
|
{
|
|
diff --git a/drivers/net/wireless/intersil/orinoco/spectrum_cs.c b/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
|
|
index b60048c95e0a8..011c86e55923e 100644
|
|
--- a/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
|
|
+++ b/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
|
|
@@ -157,6 +157,7 @@ spectrum_cs_probe(struct pcmcia_device *link)
|
|
{
|
|
struct orinoco_private *priv;
|
|
struct orinoco_pccard *card;
|
|
+ int ret;
|
|
|
|
priv = alloc_orinocodev(sizeof(*card), &link->dev,
|
|
spectrum_cs_hard_reset,
|
|
@@ -169,8 +170,16 @@ spectrum_cs_probe(struct pcmcia_device *link)
|
|
card->p_dev = link;
|
|
link->priv = priv;
|
|
|
|
- return spectrum_cs_config(link);
|
|
-} /* spectrum_cs_attach */
|
|
+ ret = spectrum_cs_config(link);
|
|
+ if (ret)
|
|
+ goto err_free_orinocodev;
|
|
+
|
|
+ return 0;
|
|
+
|
|
+err_free_orinocodev:
|
|
+ free_orinocodev(priv);
|
|
+ return ret;
|
|
+}
|
|
|
|
static void spectrum_cs_detach(struct pcmcia_device *link)
|
|
{
|
|
diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
|
|
index 629af26675cf1..1ab04adc53dcd 100644
|
|
--- a/drivers/net/wireless/marvell/mwifiex/scan.c
|
|
+++ b/drivers/net/wireless/marvell/mwifiex/scan.c
|
|
@@ -2202,9 +2202,9 @@ int mwifiex_ret_802_11_scan(struct mwifiex_private *priv,
|
|
|
|
if (nd_config) {
|
|
adapter->nd_info =
|
|
- kzalloc(sizeof(struct cfg80211_wowlan_nd_match) +
|
|
- sizeof(struct cfg80211_wowlan_nd_match *) *
|
|
- scan_rsp->number_of_sets, GFP_ATOMIC);
|
|
+ kzalloc(struct_size(adapter->nd_info, matches,
|
|
+ scan_rsp->number_of_sets),
|
|
+ GFP_ATOMIC);
|
|
|
|
if (adapter->nd_info)
|
|
adapter->nd_info->n_matches = scan_rsp->number_of_sets;
|
|
diff --git a/drivers/net/wireless/ray_cs.c b/drivers/net/wireless/ray_cs.c
|
|
index 3836d6ac53049..d9c1ac5cb5626 100644
|
|
--- a/drivers/net/wireless/ray_cs.c
|
|
+++ b/drivers/net/wireless/ray_cs.c
|
|
@@ -270,13 +270,14 @@ static int ray_probe(struct pcmcia_device *p_dev)
|
|
{
|
|
ray_dev_t *local;
|
|
struct net_device *dev;
|
|
+ int ret;
|
|
|
|
dev_dbg(&p_dev->dev, "ray_attach()\n");
|
|
|
|
/* Allocate space for private device-specific data */
|
|
dev = alloc_etherdev(sizeof(ray_dev_t));
|
|
if (!dev)
|
|
- goto fail_alloc_dev;
|
|
+ return -ENOMEM;
|
|
|
|
local = netdev_priv(dev);
|
|
local->finder = p_dev;
|
|
@@ -313,11 +314,16 @@ static int ray_probe(struct pcmcia_device *p_dev)
|
|
timer_setup(&local->timer, NULL, 0);
|
|
|
|
this_device = p_dev;
|
|
- return ray_config(p_dev);
|
|
+ ret = ray_config(p_dev);
|
|
+ if (ret)
|
|
+ goto err_free_dev;
|
|
+
|
|
+ return 0;
|
|
|
|
-fail_alloc_dev:
|
|
- return -ENOMEM;
|
|
-} /* ray_attach */
|
|
+err_free_dev:
|
|
+ free_netdev(dev);
|
|
+ return ret;
|
|
+}
|
|
|
|
static void ray_detach(struct pcmcia_device *link)
|
|
{
|
|
@@ -1641,38 +1647,34 @@ static void authenticate_timeout(struct timer_list *t)
|
|
/*===========================================================================*/
|
|
static int parse_addr(char *in_str, UCHAR *out)
|
|
{
|
|
+ int i, k;
|
|
int len;
|
|
- int i, j, k;
|
|
- int status;
|
|
|
|
if (in_str == NULL)
|
|
return 0;
|
|
- if ((len = strlen(in_str)) < 2)
|
|
+ len = strnlen(in_str, ADDRLEN * 2 + 1) - 1;
|
|
+ if (len < 1)
|
|
return 0;
|
|
memset(out, 0, ADDRLEN);
|
|
|
|
- status = 1;
|
|
- j = len - 1;
|
|
- if (j > 12)
|
|
- j = 12;
|
|
i = 5;
|
|
|
|
- while (j > 0) {
|
|
- if ((k = hex_to_bin(in_str[j--])) != -1)
|
|
+ while (len > 0) {
|
|
+ if ((k = hex_to_bin(in_str[len--])) != -1)
|
|
out[i] = k;
|
|
else
|
|
return 0;
|
|
|
|
- if (j == 0)
|
|
+ if (len == 0)
|
|
break;
|
|
- if ((k = hex_to_bin(in_str[j--])) != -1)
|
|
+ if ((k = hex_to_bin(in_str[len--])) != -1)
|
|
out[i] += k << 4;
|
|
else
|
|
return 0;
|
|
if (!i--)
|
|
break;
|
|
}
|
|
- return status;
|
|
+ return 1;
|
|
}
|
|
|
|
/*===========================================================================*/
|
|
diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
|
|
index 4fe837090cdae..22b0567ad8261 100644
|
|
--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
|
|
+++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
|
|
@@ -1479,9 +1479,6 @@ static void rsi_shutdown(struct device *dev)
|
|
if (sdev->write_fail)
|
|
rsi_dbg(INFO_ZONE, "###### Device is not ready #######\n");
|
|
|
|
- if (rsi_set_sdio_pm_caps(adapter))
|
|
- rsi_dbg(INFO_ZONE, "Setting power management caps failed\n");
|
|
-
|
|
rsi_dbg(INFO_ZONE, "***** RSI module shut down *****\n");
|
|
}
|
|
|
|
diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c
|
|
index 8638c7c72bc30..e6505624f0c28 100644
|
|
--- a/drivers/net/wireless/wl3501_cs.c
|
|
+++ b/drivers/net/wireless/wl3501_cs.c
|
|
@@ -134,8 +134,8 @@ static const struct {
|
|
|
|
/**
|
|
* iw_valid_channel - validate channel in regulatory domain
|
|
- * @reg_comain - regulatory domain
|
|
- * @channel - channel to validate
|
|
+ * @reg_domain: regulatory domain
|
|
+ * @channel: channel to validate
|
|
*
|
|
* Returns 0 if invalid in the specified regulatory domain, non-zero if valid.
|
|
*/
|
|
@@ -154,7 +154,7 @@ static int iw_valid_channel(int reg_domain, int channel)
|
|
|
|
/**
|
|
* iw_default_channel - get default channel for a regulatory domain
|
|
- * @reg_comain - regulatory domain
|
|
+ * @reg_domain: regulatory domain
|
|
*
|
|
* Returns the default channel for a regulatory domain
|
|
*/
|
|
@@ -237,6 +237,7 @@ static int wl3501_get_flash_mac_addr(struct wl3501_card *this)
|
|
|
|
/**
|
|
* wl3501_set_to_wla - Move 'size' bytes from PC to card
|
|
+ * @this: Card
|
|
* @dest: Card addressing space
|
|
* @src: PC addressing space
|
|
* @size: Bytes to move
|
|
@@ -259,6 +260,7 @@ static void wl3501_set_to_wla(struct wl3501_card *this, u16 dest, void *src,
|
|
|
|
/**
|
|
* wl3501_get_from_wla - Move 'size' bytes from card to PC
|
|
+ * @this: Card
|
|
* @src: Card addressing space
|
|
* @dest: PC addressing space
|
|
* @size: Bytes to move
|
|
@@ -455,12 +457,10 @@ out:
|
|
|
|
/**
|
|
* wl3501_send_pkt - Send a packet.
|
|
- * @this - card
|
|
- *
|
|
- * Send a packet.
|
|
- *
|
|
- * data = Ethernet raw frame. (e.g. data[0] - data[5] is Dest MAC Addr,
|
|
+ * @this: Card
|
|
+ * @data: Ethernet raw frame. (e.g. data[0] - data[5] is Dest MAC Addr,
|
|
* data[6] - data[11] is Src MAC Addr)
|
|
+ * @len: Packet length
|
|
* Ref: IEEE 802.11
|
|
*/
|
|
static int wl3501_send_pkt(struct wl3501_card *this, u8 *data, u16 len)
|
|
@@ -723,7 +723,7 @@ static void wl3501_mgmt_scan_confirm(struct wl3501_card *this, u16 addr)
|
|
|
|
/**
|
|
* wl3501_block_interrupt - Mask interrupt from SUTRO
|
|
- * @this - card
|
|
+ * @this: Card
|
|
*
|
|
* Mask interrupt from SUTRO. (i.e. SUTRO cannot interrupt the HOST)
|
|
* Return: 1 if interrupt is originally enabled
|
|
@@ -740,7 +740,7 @@ static int wl3501_block_interrupt(struct wl3501_card *this)
|
|
|
|
/**
|
|
* wl3501_unblock_interrupt - Enable interrupt from SUTRO
|
|
- * @this - card
|
|
+ * @this: Card
|
|
*
|
|
* Enable interrupt from SUTRO. (i.e. SUTRO can interrupt the HOST)
|
|
* Return: 1 if interrupt is originally enabled
|
|
@@ -1114,8 +1114,8 @@ static inline void wl3501_ack_interrupt(struct wl3501_card *this)
|
|
|
|
/**
|
|
* wl3501_interrupt - Hardware interrupt from card.
|
|
- * @irq - Interrupt number
|
|
- * @dev_id - net_device
|
|
+ * @irq: Interrupt number
|
|
+ * @dev_id: net_device
|
|
*
|
|
* We must acknowledge the interrupt as soon as possible, and block the
|
|
* interrupt from the same card immediately to prevent re-entry.
|
|
@@ -1251,7 +1251,7 @@ static int wl3501_close(struct net_device *dev)
|
|
|
|
/**
|
|
* wl3501_reset - Reset the SUTRO.
|
|
- * @dev - network device
|
|
+ * @dev: network device
|
|
*
|
|
* It is almost the same as wl3501_open(). In fact, we may just wl3501_close()
|
|
* and wl3501_open() again, but I wouldn't like to free_irq() when the driver
|
|
@@ -1414,7 +1414,7 @@ static struct iw_statistics *wl3501_get_wireless_stats(struct net_device *dev)
|
|
|
|
/**
|
|
* wl3501_detach - deletes a driver "instance"
|
|
- * @link - FILL_IN
|
|
+ * @link: FILL_IN
|
|
*
|
|
* This deletes a driver "instance". The device is de-registered with Card
|
|
* Services. If it has been released, all local data structures are freed.
|
|
@@ -1435,9 +1435,7 @@ static void wl3501_detach(struct pcmcia_device *link)
|
|
wl3501_release(link);
|
|
|
|
unregister_netdev(dev);
|
|
-
|
|
- if (link->priv)
|
|
- free_netdev(link->priv);
|
|
+ free_netdev(dev);
|
|
}
|
|
|
|
static int wl3501_get_name(struct net_device *dev, struct iw_request_info *info,
|
|
@@ -1864,6 +1862,7 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
|
|
{
|
|
struct net_device *dev;
|
|
struct wl3501_card *this;
|
|
+ int ret;
|
|
|
|
/* The io structure describes IO port mapping */
|
|
p_dev->resource[0]->end = 16;
|
|
@@ -1875,8 +1874,7 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
|
|
|
|
dev = alloc_etherdev(sizeof(struct wl3501_card));
|
|
if (!dev)
|
|
- goto out_link;
|
|
-
|
|
+ return -ENOMEM;
|
|
|
|
dev->netdev_ops = &wl3501_netdev_ops;
|
|
dev->watchdog_timeo = 5 * HZ;
|
|
@@ -1889,9 +1887,15 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
|
|
netif_stop_queue(dev);
|
|
p_dev->priv = dev;
|
|
|
|
- return wl3501_config(p_dev);
|
|
-out_link:
|
|
- return -ENOMEM;
|
|
+ ret = wl3501_config(p_dev);
|
|
+ if (ret)
|
|
+ goto out_free_etherdev;
|
|
+
|
|
+ return 0;
|
|
+
|
|
+out_free_etherdev:
|
|
+ free_netdev(dev);
|
|
+ return ret;
|
|
}
|
|
|
|
static int wl3501_config(struct pcmcia_device *link)
|
|
@@ -1947,8 +1951,7 @@ static int wl3501_config(struct pcmcia_device *link)
|
|
goto failed;
|
|
}
|
|
|
|
- for (i = 0; i < 6; i++)
|
|
- dev->dev_addr[i] = ((char *)&this->mac_addr)[i];
|
|
+ eth_hw_addr_set(dev, this->mac_addr);
|
|
|
|
/* print probe information */
|
|
printk(KERN_INFO "%s: wl3501 @ 0x%3.3x, IRQ %d, "
|
|
diff --git a/drivers/ntb/hw/amd/ntb_hw_amd.c b/drivers/ntb/hw/amd/ntb_hw_amd.c
|
|
index abb37659de343..50983d77329ea 100644
|
|
--- a/drivers/ntb/hw/amd/ntb_hw_amd.c
|
|
+++ b/drivers/ntb/hw/amd/ntb_hw_amd.c
|
|
@@ -1153,12 +1153,17 @@ static struct pci_driver amd_ntb_pci_driver = {
|
|
|
|
static int __init amd_ntb_pci_driver_init(void)
|
|
{
|
|
+ int ret;
|
|
pr_info("%s %s\n", NTB_DESC, NTB_VER);
|
|
|
|
if (debugfs_initialized())
|
|
debugfs_dir = debugfs_create_dir(KBUILD_MODNAME, NULL);
|
|
|
|
- return pci_register_driver(&amd_ntb_pci_driver);
|
|
+ ret = pci_register_driver(&amd_ntb_pci_driver);
|
|
+ if (ret)
|
|
+ debugfs_remove_recursive(debugfs_dir);
|
|
+
|
|
+ return ret;
|
|
}
|
|
module_init(amd_ntb_pci_driver_init);
|
|
|
|
diff --git a/drivers/ntb/hw/idt/ntb_hw_idt.c b/drivers/ntb/hw/idt/ntb_hw_idt.c
|
|
index dcf2346805350..a0091900b0cfb 100644
|
|
--- a/drivers/ntb/hw/idt/ntb_hw_idt.c
|
|
+++ b/drivers/ntb/hw/idt/ntb_hw_idt.c
|
|
@@ -2908,6 +2908,7 @@ static struct pci_driver idt_pci_driver = {
|
|
|
|
static int __init idt_pci_driver_init(void)
|
|
{
|
|
+ int ret;
|
|
pr_info("%s %s\n", NTB_DESC, NTB_VER);
|
|
|
|
/* Create the top DebugFS directory if the FS is initialized */
|
|
@@ -2915,7 +2916,11 @@ static int __init idt_pci_driver_init(void)
|
|
dbgfs_topdir = debugfs_create_dir(KBUILD_MODNAME, NULL);
|
|
|
|
/* Register the NTB hardware driver to handle the PCI device */
|
|
- return pci_register_driver(&idt_pci_driver);
|
|
+ ret = pci_register_driver(&idt_pci_driver);
|
|
+ if (ret)
|
|
+ debugfs_remove_recursive(dbgfs_topdir);
|
|
+
|
|
+ return ret;
|
|
}
|
|
module_init(idt_pci_driver_init);
|
|
|
|
diff --git a/drivers/ntb/hw/intel/ntb_hw_gen1.c b/drivers/ntb/hw/intel/ntb_hw_gen1.c
|
|
index bb57ec2390299..8d8739bff9f3c 100644
|
|
--- a/drivers/ntb/hw/intel/ntb_hw_gen1.c
|
|
+++ b/drivers/ntb/hw/intel/ntb_hw_gen1.c
|
|
@@ -2065,12 +2065,17 @@ static struct pci_driver intel_ntb_pci_driver = {
|
|
|
|
static int __init intel_ntb_pci_driver_init(void)
|
|
{
|
|
+ int ret;
|
|
pr_info("%s %s\n", NTB_DESC, NTB_VER);
|
|
|
|
if (debugfs_initialized())
|
|
debugfs_dir = debugfs_create_dir(KBUILD_MODNAME, NULL);
|
|
|
|
- return pci_register_driver(&intel_ntb_pci_driver);
|
|
+ ret = pci_register_driver(&intel_ntb_pci_driver);
|
|
+ if (ret)
|
|
+ debugfs_remove_recursive(debugfs_dir);
|
|
+
|
|
+ return ret;
|
|
}
|
|
module_init(intel_ntb_pci_driver_init);
|
|
|
|
diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
|
|
index 00a5d5764993c..3cc0e8ebcdd5c 100644
|
|
--- a/drivers/ntb/ntb_transport.c
|
|
+++ b/drivers/ntb/ntb_transport.c
|
|
@@ -412,7 +412,7 @@ int ntb_transport_register_client_dev(char *device_name)
|
|
|
|
rc = device_register(dev);
|
|
if (rc) {
|
|
- kfree(client_dev);
|
|
+ put_device(dev);
|
|
goto err;
|
|
}
|
|
|
|
diff --git a/drivers/ntb/test/ntb_tool.c b/drivers/ntb/test/ntb_tool.c
|
|
index 6301aa413c3b8..1f64146546221 100644
|
|
--- a/drivers/ntb/test/ntb_tool.c
|
|
+++ b/drivers/ntb/test/ntb_tool.c
|
|
@@ -998,6 +998,8 @@ static int tool_init_mws(struct tool_ctx *tc)
|
|
tc->peers[pidx].outmws =
|
|
devm_kcalloc(&tc->ntb->dev, tc->peers[pidx].outmw_cnt,
|
|
sizeof(*tc->peers[pidx].outmws), GFP_KERNEL);
|
|
+ if (tc->peers[pidx].outmws == NULL)
|
|
+ return -ENOMEM;
|
|
|
|
for (widx = 0; widx < tc->peers[pidx].outmw_cnt; widx++) {
|
|
tc->peers[pidx].outmws[widx].pidx = pidx;
|
|
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
|
|
index 17f411772f0ca..24dbb69688316 100644
|
|
--- a/drivers/pci/controller/dwc/pcie-qcom.c
|
|
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
|
|
@@ -807,6 +807,8 @@ static int qcom_pcie_get_resources_2_4_0(struct qcom_pcie *pcie)
|
|
return PTR_ERR(res->phy_ahb_reset);
|
|
}
|
|
|
|
+ dw_pcie_dbi_ro_wr_dis(pci);
|
|
+
|
|
return 0;
|
|
}
|
|
|
|
diff --git a/drivers/pci/controller/pci-ftpci100.c b/drivers/pci/controller/pci-ftpci100.c
|
|
index bf5ece5d9291f..88983fd0c1bdd 100644
|
|
--- a/drivers/pci/controller/pci-ftpci100.c
|
|
+++ b/drivers/pci/controller/pci-ftpci100.c
|
|
@@ -458,22 +458,12 @@ static int faraday_pci_probe(struct platform_device *pdev)
|
|
p->dev = dev;
|
|
|
|
/* Retrieve and enable optional clocks */
|
|
- clk = devm_clk_get(dev, "PCLK");
|
|
+ clk = devm_clk_get_enabled(dev, "PCLK");
|
|
if (IS_ERR(clk))
|
|
return PTR_ERR(clk);
|
|
- ret = clk_prepare_enable(clk);
|
|
- if (ret) {
|
|
- dev_err(dev, "could not prepare PCLK\n");
|
|
- return ret;
|
|
- }
|
|
- p->bus_clk = devm_clk_get(dev, "PCICLK");
|
|
+ p->bus_clk = devm_clk_get_enabled(dev, "PCICLK");
|
|
if (IS_ERR(p->bus_clk))
|
|
return PTR_ERR(p->bus_clk);
|
|
- ret = clk_prepare_enable(p->bus_clk);
|
|
- if (ret) {
|
|
- dev_err(dev, "could not prepare PCICLK\n");
|
|
- return ret;
|
|
- }
|
|
|
|
regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
|
p->base = devm_ioremap_resource(dev, regs);
|
|
diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
|
|
index b82edefffd15f..d7ab669b1b0d0 100644
|
|
--- a/drivers/pci/controller/pcie-rockchip-ep.c
|
|
+++ b/drivers/pci/controller/pcie-rockchip-ep.c
|
|
@@ -124,6 +124,7 @@ static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn,
|
|
static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
|
|
struct pci_epf_header *hdr)
|
|
{
|
|
+ u32 reg;
|
|
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
|
|
struct rockchip_pcie *rockchip = &ep->rockchip;
|
|
|
|
@@ -136,8 +137,9 @@ static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
|
|
PCIE_CORE_CONFIG_VENDOR);
|
|
}
|
|
|
|
- rockchip_pcie_write(rockchip, hdr->deviceid << 16,
|
|
- ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + PCI_VENDOR_ID);
|
|
+ reg = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_DID_VID);
|
|
+ reg = (reg & 0xFFFF) | (hdr->deviceid << 16);
|
|
+ rockchip_pcie_write(rockchip, reg, PCIE_EP_CONFIG_DID_VID);
|
|
|
|
rockchip_pcie_write(rockchip,
|
|
hdr->revid |
|
|
@@ -311,15 +313,15 @@ static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn,
|
|
{
|
|
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
|
|
struct rockchip_pcie *rockchip = &ep->rockchip;
|
|
- u16 flags;
|
|
+ u32 flags;
|
|
|
|
flags = rockchip_pcie_read(rockchip,
|
|
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
|
|
ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
|
|
flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK;
|
|
flags |=
|
|
- ((multi_msg_cap << 1) << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) |
|
|
- PCI_MSI_FLAGS_64BIT;
|
|
+ (multi_msg_cap << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) |
|
|
+ (PCI_MSI_FLAGS_64BIT << ROCKCHIP_PCIE_EP_MSI_FLAGS_OFFSET);
|
|
flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP;
|
|
rockchip_pcie_write(rockchip, flags,
|
|
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
|
|
@@ -331,7 +333,7 @@ static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
|
|
{
|
|
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
|
|
struct rockchip_pcie *rockchip = &ep->rockchip;
|
|
- u16 flags;
|
|
+ u32 flags;
|
|
|
|
flags = rockchip_pcie_read(rockchip,
|
|
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
|
|
@@ -344,48 +346,25 @@ static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
|
|
}
|
|
|
|
static void rockchip_pcie_ep_assert_intx(struct rockchip_pcie_ep *ep, u8 fn,
|
|
- u8 intx, bool is_asserted)
|
|
+ u8 intx, bool do_assert)
|
|
{
|
|
struct rockchip_pcie *rockchip = &ep->rockchip;
|
|
- u32 r = ep->max_regions - 1;
|
|
- u32 offset;
|
|
- u32 status;
|
|
- u8 msg_code;
|
|
-
|
|
- if (unlikely(ep->irq_pci_addr != ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR ||
|
|
- ep->irq_pci_fn != fn)) {
|
|
- rockchip_pcie_prog_ep_ob_atu(rockchip, fn, r,
|
|
- AXI_WRAPPER_NOR_MSG,
|
|
- ep->irq_phys_addr, 0, 0);
|
|
- ep->irq_pci_addr = ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR;
|
|
- ep->irq_pci_fn = fn;
|
|
- }
|
|
|
|
intx &= 3;
|
|
- if (is_asserted) {
|
|
+
|
|
+ if (do_assert) {
|
|
ep->irq_pending |= BIT(intx);
|
|
- msg_code = ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTA + intx;
|
|
+ rockchip_pcie_write(rockchip,
|
|
+ PCIE_CLIENT_INT_IN_ASSERT |
|
|
+ PCIE_CLIENT_INT_PEND_ST_PEND,
|
|
+ PCIE_CLIENT_LEGACY_INT_CTRL);
|
|
} else {
|
|
ep->irq_pending &= ~BIT(intx);
|
|
- msg_code = ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTA + intx;
|
|
+ rockchip_pcie_write(rockchip,
|
|
+ PCIE_CLIENT_INT_IN_DEASSERT |
|
|
+ PCIE_CLIENT_INT_PEND_ST_NORMAL,
|
|
+ PCIE_CLIENT_LEGACY_INT_CTRL);
|
|
}
|
|
-
|
|
- status = rockchip_pcie_read(rockchip,
|
|
- ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
|
|
- ROCKCHIP_PCIE_EP_CMD_STATUS);
|
|
- status &= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
|
|
-
|
|
- if ((status != 0) ^ (ep->irq_pending != 0)) {
|
|
- status ^= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
|
|
- rockchip_pcie_write(rockchip, status,
|
|
- ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
|
|
- ROCKCHIP_PCIE_EP_CMD_STATUS);
|
|
- }
|
|
-
|
|
- offset =
|
|
- ROCKCHIP_PCIE_MSG_ROUTING(ROCKCHIP_PCIE_MSG_ROUTING_LOCAL_INTX) |
|
|
- ROCKCHIP_PCIE_MSG_CODE(msg_code) | ROCKCHIP_PCIE_MSG_NO_DATA;
|
|
- writel(0, ep->irq_cpu_addr + offset);
|
|
}
|
|
|
|
static int rockchip_pcie_ep_send_legacy_irq(struct rockchip_pcie_ep *ep, u8 fn,
|
|
@@ -415,7 +394,7 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
|
|
u8 interrupt_num)
|
|
{
|
|
struct rockchip_pcie *rockchip = &ep->rockchip;
|
|
- u16 flags, mme, data, data_mask;
|
|
+ u32 flags, mme, data, data_mask;
|
|
u8 msi_count;
|
|
u64 pci_addr, pci_addr_mask = 0xff;
|
|
|
|
@@ -505,6 +484,7 @@ static const struct pci_epc_features rockchip_pcie_epc_features = {
|
|
.linkup_notifier = false,
|
|
.msi_capable = true,
|
|
.msix_capable = false,
|
|
+ .align = 256,
|
|
};
|
|
|
|
static const struct pci_epc_features*
|
|
@@ -630,6 +610,9 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev)
|
|
|
|
ep->irq_pci_addr = ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR;
|
|
|
|
+ rockchip_pcie_write(rockchip, PCIE_CLIENT_CONF_ENABLE,
|
|
+ PCIE_CLIENT_CONFIG);
|
|
+
|
|
return 0;
|
|
err_epc_mem_exit:
|
|
pci_epc_mem_exit(epc);
|
|
diff --git a/drivers/pci/controller/pcie-rockchip.c b/drivers/pci/controller/pcie-rockchip.c
|
|
index c53d1322a3d6c..b047437605cb2 100644
|
|
--- a/drivers/pci/controller/pcie-rockchip.c
|
|
+++ b/drivers/pci/controller/pcie-rockchip.c
|
|
@@ -14,6 +14,7 @@
|
|
#include <linux/clk.h>
|
|
#include <linux/delay.h>
|
|
#include <linux/gpio/consumer.h>
|
|
+#include <linux/iopoll.h>
|
|
#include <linux/of_pci.h>
|
|
#include <linux/phy/phy.h>
|
|
#include <linux/platform_device.h>
|
|
@@ -154,6 +155,12 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
|
|
}
|
|
EXPORT_SYMBOL_GPL(rockchip_pcie_parse_dt);
|
|
|
|
+#define rockchip_pcie_read_addr(addr) rockchip_pcie_read(rockchip, addr)
|
|
+/* 100 ms max wait time for PHY PLLs to lock */
|
|
+#define RK_PHY_PLL_LOCK_TIMEOUT_US 100000
|
|
+/* Sleep should be less than 20ms */
|
|
+#define RK_PHY_PLL_LOCK_SLEEP_US 1000
|
|
+
|
|
int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
|
|
{
|
|
struct device *dev = rockchip->dev;
|
|
@@ -255,6 +262,16 @@ int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
|
|
}
|
|
}
|
|
|
|
+ err = readx_poll_timeout(rockchip_pcie_read_addr,
|
|
+ PCIE_CLIENT_SIDE_BAND_STATUS,
|
|
+ regs, !(regs & PCIE_CLIENT_PHY_ST),
|
|
+ RK_PHY_PLL_LOCK_SLEEP_US,
|
|
+ RK_PHY_PLL_LOCK_TIMEOUT_US);
|
|
+ if (err) {
|
|
+ dev_err(dev, "PHY PLLs could not lock, %d\n", err);
|
|
+ goto err_power_off_phy;
|
|
+ }
|
|
+
|
|
/*
|
|
* Please don't reorder the deassert sequence of the following
|
|
* four reset pins.
|
|
diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
|
|
index 8e87a059ce73d..1c45b3c32151c 100644
|
|
--- a/drivers/pci/controller/pcie-rockchip.h
|
|
+++ b/drivers/pci/controller/pcie-rockchip.h
|
|
@@ -37,6 +37,13 @@
|
|
#define PCIE_CLIENT_MODE_EP HIWORD_UPDATE(0x0040, 0)
|
|
#define PCIE_CLIENT_GEN_SEL_1 HIWORD_UPDATE(0x0080, 0)
|
|
#define PCIE_CLIENT_GEN_SEL_2 HIWORD_UPDATE_BIT(0x0080)
|
|
+#define PCIE_CLIENT_LEGACY_INT_CTRL (PCIE_CLIENT_BASE + 0x0c)
|
|
+#define PCIE_CLIENT_INT_IN_ASSERT HIWORD_UPDATE_BIT(0x0002)
|
|
+#define PCIE_CLIENT_INT_IN_DEASSERT HIWORD_UPDATE(0x0002, 0)
|
|
+#define PCIE_CLIENT_INT_PEND_ST_PEND HIWORD_UPDATE_BIT(0x0001)
|
|
+#define PCIE_CLIENT_INT_PEND_ST_NORMAL HIWORD_UPDATE(0x0001, 0)
|
|
+#define PCIE_CLIENT_SIDE_BAND_STATUS (PCIE_CLIENT_BASE + 0x20)
|
|
+#define PCIE_CLIENT_PHY_ST BIT(12)
|
|
#define PCIE_CLIENT_DEBUG_OUT_0 (PCIE_CLIENT_BASE + 0x3c)
|
|
#define PCIE_CLIENT_DEBUG_LTSSM_MASK GENMASK(5, 0)
|
|
#define PCIE_CLIENT_DEBUG_LTSSM_L1 0x18
|
|
@@ -132,6 +139,8 @@
|
|
#define PCIE_RC_RP_ATS_BASE 0x400000
|
|
#define PCIE_RC_CONFIG_NORMAL_BASE 0x800000
|
|
#define PCIE_RC_CONFIG_BASE 0xa00000
|
|
+#define PCIE_EP_CONFIG_BASE 0xa00000
|
|
+#define PCIE_EP_CONFIG_DID_VID (PCIE_EP_CONFIG_BASE + 0x00)
|
|
#define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08)
|
|
#define PCIE_RC_CONFIG_SCC_SHIFT 16
|
|
#define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4)
|
|
@@ -223,6 +232,7 @@
|
|
#define ROCKCHIP_PCIE_EP_CMD_STATUS 0x4
|
|
#define ROCKCHIP_PCIE_EP_CMD_STATUS_IS BIT(19)
|
|
#define ROCKCHIP_PCIE_EP_MSI_CTRL_REG 0x90
|
|
+#define ROCKCHIP_PCIE_EP_MSI_FLAGS_OFFSET 16
|
|
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET 17
|
|
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK GENMASK(19, 17)
|
|
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET 20
|
|
@@ -230,7 +240,6 @@
|
|
#define ROCKCHIP_PCIE_EP_MSI_CTRL_ME BIT(16)
|
|
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP BIT(24)
|
|
#define ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR 0x1
|
|
-#define ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR 0x3
|
|
#define ROCKCHIP_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12))
|
|
#define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
|
|
(PCIE_RC_RP_ATS_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
|
|
diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
|
|
index 6503d15effbbd..45d0f63707158 100644
|
|
--- a/drivers/pci/hotplug/pciehp_ctrl.c
|
|
+++ b/drivers/pci/hotplug/pciehp_ctrl.c
|
|
@@ -258,6 +258,14 @@ void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
|
|
present = pciehp_card_present(ctrl);
|
|
link_active = pciehp_check_link_active(ctrl);
|
|
if (present <= 0 && link_active <= 0) {
|
|
+ if (ctrl->state == BLINKINGON_STATE) {
|
|
+ ctrl->state = OFF_STATE;
|
|
+ cancel_delayed_work(&ctrl->button_work);
|
|
+ pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
|
|
+ INDICATOR_NOOP);
|
|
+ ctrl_info(ctrl, "Slot(%s): Card not present\n",
|
|
+ slot_name(ctrl));
|
|
+ }
|
|
mutex_unlock(&ctrl->state_lock);
|
|
return;
|
|
}
|
|
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
|
|
index f8c730b6701b6..64c89b23e99f7 100644
|
|
--- a/drivers/pci/pci.c
|
|
+++ b/drivers/pci/pci.c
|
|
@@ -2617,13 +2617,13 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
|
|
{
|
|
/*
|
|
* Downstream device is not accessible after putting a root port
|
|
- * into D3cold and back into D0 on Elo i2.
|
|
+ * into D3cold and back into D0 on Elo Continental Z2 board
|
|
*/
|
|
- .ident = "Elo i2",
|
|
+ .ident = "Elo Continental Z2",
|
|
.matches = {
|
|
- DMI_MATCH(DMI_SYS_VENDOR, "Elo Touch Solutions"),
|
|
- DMI_MATCH(DMI_PRODUCT_NAME, "Elo i2"),
|
|
- DMI_MATCH(DMI_PRODUCT_VERSION, "RevB"),
|
|
+ DMI_MATCH(DMI_BOARD_VENDOR, "Elo Touch Solutions"),
|
|
+ DMI_MATCH(DMI_BOARD_NAME, "Geminilake"),
|
|
+ DMI_MATCH(DMI_BOARD_VERSION, "Continental Z2"),
|
|
},
|
|
},
|
|
#endif
|
|
diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
|
|
index 7624c71011c6e..d8d27b11b48c4 100644
|
|
--- a/drivers/pci/pcie/aspm.c
|
|
+++ b/drivers/pci/pcie/aspm.c
|
|
@@ -991,21 +991,24 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
|
|
|
|
down_read(&pci_bus_sem);
|
|
mutex_lock(&aspm_lock);
|
|
- /*
|
|
- * All PCIe functions are in one slot, remove one function will remove
|
|
- * the whole slot, so just wait until we are the last function left.
|
|
- */
|
|
- if (!list_empty(&parent->subordinate->devices))
|
|
- goto out;
|
|
|
|
link = parent->link_state;
|
|
root = link->root;
|
|
parent_link = link->parent;
|
|
|
|
- /* All functions are removed, so just disable ASPM for the link */
|
|
+ /*
|
|
+ * link->downstream is a pointer to the pci_dev of function 0. If
|
|
+ * we remove that function, the pci_dev is about to be deallocated,
|
|
+ * so we can't use link->downstream again. Free the link state to
|
|
+ * avoid this.
|
|
+ *
|
|
+ * If we're removing a non-0 function, it's possible we could
|
|
+ * retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends
|
|
+ * programming the same ASPM Control value for all functions of
|
|
+ * multi-function devices, so disable ASPM for all of them.
|
|
+ */
|
|
pcie_config_aspm_link(link, 0);
|
|
list_del(&link->sibling);
|
|
- /* Clock PM is for endpoint device */
|
|
free_link_state(link);
|
|
|
|
/* Recheck latencies and configure upstream links */
|
|
@@ -1013,7 +1016,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
|
|
pcie_update_aspm_capable(root);
|
|
pcie_config_aspm_path(parent_link);
|
|
}
|
|
-out:
|
|
+
|
|
mutex_unlock(&aspm_lock);
|
|
up_read(&pci_bus_sem);
|
|
}
|
|
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
|
|
index 449d4ed611a68..73260bd217278 100644
|
|
--- a/drivers/pci/quirks.c
|
|
+++ b/drivers/pci/quirks.c
|
|
@@ -4168,6 +4168,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9220,
|
|
/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c49 */
|
|
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9230,
|
|
quirk_dma_func1_alias);
|
|
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9235,
|
|
+ quirk_dma_func1_alias);
|
|
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0642,
|
|
quirk_dma_func1_alias);
|
|
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0645,
|
|
diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
|
|
index 8f06445a8e39c..2b48901f1b2af 100644
|
|
--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
|
|
+++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
|
|
@@ -1021,11 +1021,6 @@ static int chv_config_get(struct pinctrl_dev *pctldev, unsigned int pin,
|
|
|
|
break;
|
|
|
|
- case PIN_CONFIG_DRIVE_OPEN_DRAIN:
|
|
- if (!(ctrl1 & CHV_PADCTRL1_ODEN))
|
|
- return -EINVAL;
|
|
- break;
|
|
-
|
|
case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: {
|
|
u32 cfg;
|
|
|
|
@@ -1035,6 +1030,16 @@ static int chv_config_get(struct pinctrl_dev *pctldev, unsigned int pin,
|
|
return -EINVAL;
|
|
|
|
break;
|
|
+
|
|
+ case PIN_CONFIG_DRIVE_PUSH_PULL:
|
|
+ if (ctrl1 & CHV_PADCTRL1_ODEN)
|
|
+ return -EINVAL;
|
|
+ break;
|
|
+
|
|
+ case PIN_CONFIG_DRIVE_OPEN_DRAIN:
|
|
+ if (!(ctrl1 & CHV_PADCTRL1_ODEN))
|
|
+ return -EINVAL;
|
|
+ break;
|
|
}
|
|
|
|
default:
|
|
diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
|
|
index 887dc57704402..2415085eadeda 100644
|
|
--- a/drivers/pinctrl/pinctrl-amd.c
|
|
+++ b/drivers/pinctrl/pinctrl-amd.c
|
|
@@ -123,6 +123,14 @@ static int amd_gpio_set_debounce(struct gpio_chip *gc, unsigned offset,
|
|
struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
|
|
|
|
raw_spin_lock_irqsave(&gpio_dev->lock, flags);
|
|
+
|
|
+ /* Use special handling for Pin0 debounce */
|
|
+ if (offset == 0) {
|
|
+ pin_reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG);
|
|
+ if (pin_reg & INTERNAL_GPIO0_DEBOUNCE)
|
|
+ debounce = 0;
|
|
+ }
|
|
+
|
|
pin_reg = readl(gpio_dev->base + offset * 4);
|
|
|
|
if (debounce) {
|
|
@@ -178,18 +186,6 @@ static int amd_gpio_set_debounce(struct gpio_chip *gc, unsigned offset,
|
|
return ret;
|
|
}
|
|
|
|
-static int amd_gpio_set_config(struct gpio_chip *gc, unsigned offset,
|
|
- unsigned long config)
|
|
-{
|
|
- u32 debounce;
|
|
-
|
|
- if (pinconf_to_config_param(config) != PIN_CONFIG_INPUT_DEBOUNCE)
|
|
- return -ENOTSUPP;
|
|
-
|
|
- debounce = pinconf_to_config_argument(config);
|
|
- return amd_gpio_set_debounce(gc, offset, debounce);
|
|
-}
|
|
-
|
|
#ifdef CONFIG_DEBUG_FS
|
|
static void amd_gpio_dbg_show(struct seq_file *s, struct gpio_chip *gc)
|
|
{
|
|
@@ -212,6 +208,7 @@ static void amd_gpio_dbg_show(struct seq_file *s, struct gpio_chip *gc)
|
|
char *output_value;
|
|
char *output_enable;
|
|
|
|
+ seq_printf(s, "WAKE_INT_MASTER_REG: 0x%08x\n", readl(gpio_dev->base + WAKE_INT_MASTER_REG));
|
|
for (bank = 0; bank < gpio_dev->hwbank_num; bank++) {
|
|
seq_printf(s, "GPIO bank%d\t", bank);
|
|
|
|
@@ -673,7 +670,7 @@ static int amd_pinconf_get(struct pinctrl_dev *pctldev,
|
|
}
|
|
|
|
static int amd_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
|
|
- unsigned long *configs, unsigned num_configs)
|
|
+ unsigned long *configs, unsigned int num_configs)
|
|
{
|
|
int i;
|
|
u32 arg;
|
|
@@ -763,6 +760,20 @@ static int amd_pinconf_group_set(struct pinctrl_dev *pctldev,
|
|
return 0;
|
|
}
|
|
|
|
+static int amd_gpio_set_config(struct gpio_chip *gc, unsigned int pin,
|
|
+ unsigned long config)
|
|
+{
|
|
+ struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
|
|
+
|
|
+ if (pinconf_to_config_param(config) == PIN_CONFIG_INPUT_DEBOUNCE) {
|
|
+ u32 debounce = pinconf_to_config_argument(config);
|
|
+
|
|
+ return amd_gpio_set_debounce(gc, pin, debounce);
|
|
+ }
|
|
+
|
|
+ return amd_pinconf_set(gpio_dev->pctrl, pin, &config, 1);
|
|
+}
|
|
+
|
|
static const struct pinconf_ops amd_pinconf_ops = {
|
|
.pin_config_get = amd_pinconf_get,
|
|
.pin_config_set = amd_pinconf_set,
|
|
@@ -790,9 +801,9 @@ static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
|
|
|
|
raw_spin_lock_irqsave(&gpio_dev->lock, flags);
|
|
|
|
- pin_reg = readl(gpio_dev->base + i * 4);
|
|
+ pin_reg = readl(gpio_dev->base + pin * 4);
|
|
pin_reg &= ~mask;
|
|
- writel(pin_reg, gpio_dev->base + i * 4);
|
|
+ writel(pin_reg, gpio_dev->base + pin * 4);
|
|
|
|
raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
|
|
}
|
|
diff --git a/drivers/pinctrl/pinctrl-amd.h b/drivers/pinctrl/pinctrl-amd.h
|
|
index d4a192df5fabd..55ff463bb9960 100644
|
|
--- a/drivers/pinctrl/pinctrl-amd.h
|
|
+++ b/drivers/pinctrl/pinctrl-amd.h
|
|
@@ -17,6 +17,7 @@
|
|
#define AMD_GPIO_PINS_BANK3 32
|
|
|
|
#define WAKE_INT_MASTER_REG 0xfc
|
|
+#define INTERNAL_GPIO0_DEBOUNCE (1 << 15)
|
|
#define EOI_MASK (1 << 29)
|
|
|
|
#define WAKE_INT_STATUS_REG0 0x2f8
|
|
diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
|
|
index 064b7c3c942a9..9c225256e3f4e 100644
|
|
--- a/drivers/pinctrl/pinctrl-at91-pio4.c
|
|
+++ b/drivers/pinctrl/pinctrl-at91-pio4.c
|
|
@@ -1013,6 +1013,8 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
|
|
/* Pin naming convention: P(bank_name)(bank_pin_number). */
|
|
pin_desc[i].name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "P%c%d",
|
|
bank + 'A', line);
|
|
+ if (!pin_desc[i].name)
|
|
+ return -ENOMEM;
|
|
|
|
group->name = group_names[i] = pin_desc[i].name;
|
|
group->pin = pin_desc[i].number;
|
|
diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
|
|
index cb029126a68c6..67c4ec554ada8 100644
|
|
--- a/drivers/platform/x86/wmi.c
|
|
+++ b/drivers/platform/x86/wmi.c
|
|
@@ -39,7 +39,7 @@ MODULE_LICENSE("GPL");
|
|
static LIST_HEAD(wmi_block_list);
|
|
|
|
struct guid_block {
|
|
- char guid[16];
|
|
+ guid_t guid;
|
|
union {
|
|
char object_id[2];
|
|
struct {
|
|
@@ -110,17 +110,17 @@ static struct platform_driver acpi_wmi_driver = {
|
|
|
|
static bool find_guid(const char *guid_string, struct wmi_block **out)
|
|
{
|
|
- uuid_le guid_input;
|
|
+ guid_t guid_input;
|
|
struct wmi_block *wblock;
|
|
struct guid_block *block;
|
|
|
|
- if (uuid_le_to_bin(guid_string, &guid_input))
|
|
+ if (guid_parse(guid_string, &guid_input))
|
|
return false;
|
|
|
|
list_for_each_entry(wblock, &wmi_block_list, list) {
|
|
block = &wblock->gblock;
|
|
|
|
- if (memcmp(block->guid, &guid_input, 16) == 0) {
|
|
+ if (guid_equal(&block->guid, &guid_input)) {
|
|
if (out)
|
|
*out = wblock;
|
|
return true;
|
|
@@ -129,11 +129,20 @@ static bool find_guid(const char *guid_string, struct wmi_block **out)
|
|
return false;
|
|
}
|
|
|
|
+static bool guid_parse_and_compare(const char *string, const guid_t *guid)
|
|
+{
|
|
+ guid_t guid_input;
|
|
+
|
|
+ if (guid_parse(string, &guid_input))
|
|
+ return false;
|
|
+
|
|
+ return guid_equal(&guid_input, guid);
|
|
+}
|
|
+
|
|
static const void *find_guid_context(struct wmi_block *wblock,
|
|
struct wmi_driver *wdriver)
|
|
{
|
|
const struct wmi_device_id *id;
|
|
- uuid_le guid_input;
|
|
|
|
if (wblock == NULL || wdriver == NULL)
|
|
return NULL;
|
|
@@ -142,9 +151,7 @@ static const void *find_guid_context(struct wmi_block *wblock,
|
|
|
|
id = wdriver->id_table;
|
|
while (*id->guid_string) {
|
|
- if (uuid_le_to_bin(id->guid_string, &guid_input))
|
|
- continue;
|
|
- if (!memcmp(wblock->gblock.guid, &guid_input, 16))
|
|
+ if (guid_parse_and_compare(id->guid_string, &wblock->gblock.guid))
|
|
return id->context;
|
|
id++;
|
|
}
|
|
@@ -456,7 +463,7 @@ EXPORT_SYMBOL_GPL(wmi_set_block);
|
|
|
|
static void wmi_dump_wdg(const struct guid_block *g)
|
|
{
|
|
- pr_info("%pUL:\n", g->guid);
|
|
+ pr_info("%pUL:\n", &g->guid);
|
|
if (g->flags & ACPI_WMI_EVENT)
|
|
pr_info("\tnotify_id: 0x%02X\n", g->notify_id);
|
|
else
|
|
@@ -526,18 +533,18 @@ wmi_notify_handler handler, void *data)
|
|
{
|
|
struct wmi_block *block;
|
|
acpi_status status = AE_NOT_EXIST;
|
|
- uuid_le guid_input;
|
|
+ guid_t guid_input;
|
|
|
|
if (!guid || !handler)
|
|
return AE_BAD_PARAMETER;
|
|
|
|
- if (uuid_le_to_bin(guid, &guid_input))
|
|
+ if (guid_parse(guid, &guid_input))
|
|
return AE_BAD_PARAMETER;
|
|
|
|
list_for_each_entry(block, &wmi_block_list, list) {
|
|
acpi_status wmi_status;
|
|
|
|
- if (memcmp(block->gblock.guid, &guid_input, 16) == 0) {
|
|
+ if (guid_equal(&block->gblock.guid, &guid_input)) {
|
|
if (block->handler &&
|
|
block->handler != wmi_notify_debug)
|
|
return AE_ALREADY_ACQUIRED;
|
|
@@ -565,18 +572,18 @@ acpi_status wmi_remove_notify_handler(const char *guid)
|
|
{
|
|
struct wmi_block *block;
|
|
acpi_status status = AE_NOT_EXIST;
|
|
- uuid_le guid_input;
|
|
+ guid_t guid_input;
|
|
|
|
if (!guid)
|
|
return AE_BAD_PARAMETER;
|
|
|
|
- if (uuid_le_to_bin(guid, &guid_input))
|
|
+ if (guid_parse(guid, &guid_input))
|
|
return AE_BAD_PARAMETER;
|
|
|
|
list_for_each_entry(block, &wmi_block_list, list) {
|
|
acpi_status wmi_status;
|
|
|
|
- if (memcmp(block->gblock.guid, &guid_input, 16) == 0) {
|
|
+ if (guid_equal(&block->gblock.guid, &guid_input)) {
|
|
if (!block->handler ||
|
|
block->handler == wmi_notify_debug)
|
|
return AE_NULL_ENTRY;
|
|
@@ -612,7 +619,6 @@ acpi_status wmi_get_event_data(u32 event, struct acpi_buffer *out)
|
|
{
|
|
struct acpi_object_list input;
|
|
union acpi_object params[1];
|
|
- struct guid_block *gblock;
|
|
struct wmi_block *wblock;
|
|
|
|
input.count = 1;
|
|
@@ -621,7 +627,7 @@ acpi_status wmi_get_event_data(u32 event, struct acpi_buffer *out)
|
|
params[0].integer.value = event;
|
|
|
|
list_for_each_entry(wblock, &wmi_block_list, list) {
|
|
- gblock = &wblock->gblock;
|
|
+ struct guid_block *gblock = &wblock->gblock;
|
|
|
|
if ((gblock->flags & ACPI_WMI_EVENT) &&
|
|
(gblock->notify_id == event))
|
|
@@ -682,7 +688,7 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
|
|
{
|
|
struct wmi_block *wblock = dev_to_wblock(dev);
|
|
|
|
- return sprintf(buf, "wmi:%pUL\n", wblock->gblock.guid);
|
|
+ return sprintf(buf, "wmi:%pUL\n", &wblock->gblock.guid);
|
|
}
|
|
static DEVICE_ATTR_RO(modalias);
|
|
|
|
@@ -691,7 +697,7 @@ static ssize_t guid_show(struct device *dev, struct device_attribute *attr,
|
|
{
|
|
struct wmi_block *wblock = dev_to_wblock(dev);
|
|
|
|
- return sprintf(buf, "%pUL\n", wblock->gblock.guid);
|
|
+ return sprintf(buf, "%pUL\n", &wblock->gblock.guid);
|
|
}
|
|
static DEVICE_ATTR_RO(guid);
|
|
|
|
@@ -774,10 +780,10 @@ static int wmi_dev_uevent(struct device *dev, struct kobj_uevent_env *env)
|
|
{
|
|
struct wmi_block *wblock = dev_to_wblock(dev);
|
|
|
|
- if (add_uevent_var(env, "MODALIAS=wmi:%pUL", wblock->gblock.guid))
|
|
+ if (add_uevent_var(env, "MODALIAS=wmi:%pUL", &wblock->gblock.guid))
|
|
return -ENOMEM;
|
|
|
|
- if (add_uevent_var(env, "WMI_GUID=%pUL", wblock->gblock.guid))
|
|
+ if (add_uevent_var(env, "WMI_GUID=%pUL", &wblock->gblock.guid))
|
|
return -ENOMEM;
|
|
|
|
return 0;
|
|
@@ -801,11 +807,7 @@ static int wmi_dev_match(struct device *dev, struct device_driver *driver)
|
|
return 0;
|
|
|
|
while (*id->guid_string) {
|
|
- uuid_le driver_guid;
|
|
-
|
|
- if (WARN_ON(uuid_le_to_bin(id->guid_string, &driver_guid)))
|
|
- continue;
|
|
- if (!memcmp(&driver_guid, wblock->gblock.guid, 16))
|
|
+ if (guid_parse_and_compare(id->guid_string, &wblock->gblock.guid))
|
|
return 1;
|
|
|
|
id++;
|
|
@@ -1039,7 +1041,6 @@ static const struct device_type wmi_type_data = {
|
|
};
|
|
|
|
static int wmi_create_device(struct device *wmi_bus_dev,
|
|
- const struct guid_block *gblock,
|
|
struct wmi_block *wblock,
|
|
struct acpi_device *device)
|
|
{
|
|
@@ -1047,12 +1048,12 @@ static int wmi_create_device(struct device *wmi_bus_dev,
|
|
char method[5];
|
|
int result;
|
|
|
|
- if (gblock->flags & ACPI_WMI_EVENT) {
|
|
+ if (wblock->gblock.flags & ACPI_WMI_EVENT) {
|
|
wblock->dev.dev.type = &wmi_type_event;
|
|
goto out_init;
|
|
}
|
|
|
|
- if (gblock->flags & ACPI_WMI_METHOD) {
|
|
+ if (wblock->gblock.flags & ACPI_WMI_METHOD) {
|
|
wblock->dev.dev.type = &wmi_type_method;
|
|
mutex_init(&wblock->char_mutex);
|
|
goto out_init;
|
|
@@ -1102,7 +1103,7 @@ static int wmi_create_device(struct device *wmi_bus_dev,
|
|
wblock->dev.dev.bus = &wmi_bus_type;
|
|
wblock->dev.dev.parent = wmi_bus_dev;
|
|
|
|
- dev_set_name(&wblock->dev.dev, "%pUL", gblock->guid);
|
|
+ dev_set_name(&wblock->dev.dev, "%pUL", &wblock->gblock.guid);
|
|
|
|
device_initialize(&wblock->dev.dev);
|
|
|
|
@@ -1122,13 +1123,12 @@ static void wmi_free_devices(struct acpi_device *device)
|
|
}
|
|
}
|
|
|
|
-static bool guid_already_parsed(struct acpi_device *device,
|
|
- const u8 *guid)
|
|
+static bool guid_already_parsed(struct acpi_device *device, const guid_t *guid)
|
|
{
|
|
struct wmi_block *wblock;
|
|
|
|
list_for_each_entry(wblock, &wmi_block_list, list) {
|
|
- if (memcmp(wblock->gblock.guid, guid, 16) == 0) {
|
|
+ if (guid_equal(&wblock->gblock.guid, guid)) {
|
|
/*
|
|
* Because we historically didn't track the relationship
|
|
* between GUIDs and ACPI nodes, we don't know whether
|
|
@@ -1183,7 +1183,7 @@ static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device)
|
|
* case yet, so for now, we'll just ignore the duplicate
|
|
* for device creation.
|
|
*/
|
|
- if (guid_already_parsed(device, gblock[i].guid))
|
|
+ if (guid_already_parsed(device, &gblock[i].guid))
|
|
continue;
|
|
|
|
wblock = kzalloc(sizeof(struct wmi_block), GFP_KERNEL);
|
|
@@ -1195,7 +1195,7 @@ static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device)
|
|
wblock->acpi_device = device;
|
|
wblock->gblock = gblock[i];
|
|
|
|
- retval = wmi_create_device(wmi_bus_dev, &gblock[i], wblock, device);
|
|
+ retval = wmi_create_device(wmi_bus_dev, wblock, device);
|
|
if (retval) {
|
|
kfree(wblock);
|
|
continue;
|
|
@@ -1220,7 +1220,7 @@ static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device)
|
|
retval = device_add(&wblock->dev.dev);
|
|
if (retval) {
|
|
dev_err(wmi_bus_dev, "failed to register %pUL\n",
|
|
- wblock->gblock.guid);
|
|
+ &wblock->gblock.guid);
|
|
if (debug_event)
|
|
wmi_method_enable(wblock, 0);
|
|
list_del(&wblock->list);
|
|
@@ -1280,12 +1280,11 @@ acpi_wmi_ec_space_handler(u32 function, acpi_physical_address address,
|
|
static void acpi_wmi_notify_handler(acpi_handle handle, u32 event,
|
|
void *context)
|
|
{
|
|
- struct guid_block *block;
|
|
struct wmi_block *wblock;
|
|
bool found_it = false;
|
|
|
|
list_for_each_entry(wblock, &wmi_block_list, list) {
|
|
- block = &wblock->gblock;
|
|
+ struct guid_block *block = &wblock->gblock;
|
|
|
|
if (wblock->acpi_device->handle == handle &&
|
|
(block->flags & ACPI_WMI_EVENT) &&
|
|
@@ -1333,10 +1332,8 @@ static void acpi_wmi_notify_handler(acpi_handle handle, u32 event,
|
|
wblock->handler(event, wblock->handler_data);
|
|
}
|
|
|
|
- if (debug_event) {
|
|
- pr_info("DEBUG Event GUID: %pUL\n",
|
|
- wblock->gblock.guid);
|
|
- }
|
|
+ if (debug_event)
|
|
+ pr_info("DEBUG Event GUID: %pUL\n", &wblock->gblock.guid);
|
|
|
|
acpi_bus_generate_netlink_event(
|
|
wblock->acpi_device->pnp.device_class,
|
|
diff --git a/drivers/powercap/Kconfig b/drivers/powercap/Kconfig
|
|
index dc1c1381d7fa9..61fd5dfaf7a0f 100644
|
|
--- a/drivers/powercap/Kconfig
|
|
+++ b/drivers/powercap/Kconfig
|
|
@@ -18,10 +18,12 @@ if POWERCAP
|
|
# Client driver configurations go here.
|
|
config INTEL_RAPL_CORE
|
|
tristate
|
|
+ depends on PCI
|
|
+ select IOSF_MBI
|
|
|
|
config INTEL_RAPL
|
|
tristate "Intel RAPL Support via MSR Interface"
|
|
- depends on X86 && IOSF_MBI
|
|
+ depends on X86 && PCI
|
|
select INTEL_RAPL_CORE
|
|
---help---
|
|
This enables support for the Intel Running Average Power Limit (RAPL)
|
|
diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
|
|
index d5487965bdfe9..6091e462626a4 100644
|
|
--- a/drivers/powercap/intel_rapl_msr.c
|
|
+++ b/drivers/powercap/intel_rapl_msr.c
|
|
@@ -22,7 +22,6 @@
|
|
#include <linux/processor.h>
|
|
#include <linux/platform_device.h>
|
|
|
|
-#include <asm/iosf_mbi.h>
|
|
#include <asm/cpu_device_id.h>
|
|
#include <asm/intel-family.h>
|
|
|
|
diff --git a/drivers/pwm/pwm-imx-tpm.c b/drivers/pwm/pwm-imx-tpm.c
|
|
index 9145f61606497..85aad55b7a8f0 100644
|
|
--- a/drivers/pwm/pwm-imx-tpm.c
|
|
+++ b/drivers/pwm/pwm-imx-tpm.c
|
|
@@ -405,6 +405,13 @@ static int __maybe_unused pwm_imx_tpm_suspend(struct device *dev)
|
|
if (tpm->enable_count > 0)
|
|
return -EBUSY;
|
|
|
|
+ /*
|
|
+ * Force 'real_period' to be zero to force period update code
|
|
+ * can be executed after system resume back, since suspend causes
|
|
+ * the period related registers to become their reset values.
|
|
+ */
|
|
+ tpm->real_period = 0;
|
|
+
|
|
clk_disable_unprepare(tpm->clk);
|
|
|
|
return 0;
|
|
diff --git a/drivers/pwm/sysfs.c b/drivers/pwm/sysfs.c
|
|
index 2389b86698468..986f3a29a13d5 100644
|
|
--- a/drivers/pwm/sysfs.c
|
|
+++ b/drivers/pwm/sysfs.c
|
|
@@ -424,6 +424,13 @@ static int pwm_class_resume_npwm(struct device *parent, unsigned int npwm)
|
|
if (!export)
|
|
continue;
|
|
|
|
+ /* If pwmchip was not enabled before suspend, do nothing. */
|
|
+ if (!export->suspend.enabled) {
|
|
+ /* release lock taken in pwm_class_get_state */
|
|
+ mutex_unlock(&export->lock);
|
|
+ continue;
|
|
+ }
|
|
+
|
|
state.enabled = export->suspend.enabled;
|
|
ret = pwm_class_apply_state(export, pwm, &state);
|
|
if (ret < 0)
|
|
@@ -448,7 +455,17 @@ static int __maybe_unused pwm_class_suspend(struct device *parent)
|
|
if (!export)
|
|
continue;
|
|
|
|
+ /*
|
|
+ * If pwmchip was not enabled before suspend, save
|
|
+ * state for resume time and do nothing else.
|
|
+ */
|
|
export->suspend = state;
|
|
+ if (!state.enabled) {
|
|
+ /* release lock taken in pwm_class_get_state */
|
|
+ mutex_unlock(&export->lock);
|
|
+ continue;
|
|
+ }
|
|
+
|
|
state.enabled = false;
|
|
ret = pwm_class_apply_state(export, pwm, &state);
|
|
if (ret < 0) {
|
|
diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
|
|
index cc9aa95d69691..fe4b666edd037 100644
|
|
--- a/drivers/regulator/core.c
|
|
+++ b/drivers/regulator/core.c
|
|
@@ -1710,19 +1710,17 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
|
|
|
|
if (err != -EEXIST)
|
|
regulator->debugfs = debugfs_create_dir(supply_name, rdev->debugfs);
|
|
- if (!regulator->debugfs) {
|
|
+ if (IS_ERR(regulator->debugfs))
|
|
rdev_dbg(rdev, "Failed to create debugfs directory\n");
|
|
- } else {
|
|
- debugfs_create_u32("uA_load", 0444, regulator->debugfs,
|
|
-				    &regulator->uA_load);
|
|
- debugfs_create_u32("min_uV", 0444, regulator->debugfs,
|
|
-				    &regulator->voltage[PM_SUSPEND_ON].min_uV);
|
|
- debugfs_create_u32("max_uV", 0444, regulator->debugfs,
|
|
-				    &regulator->voltage[PM_SUSPEND_ON].max_uV);
|
|
- debugfs_create_file("constraint_flags", 0444,
|
|
- regulator->debugfs, regulator,
|
|
- &constraint_flags_fops);
|
|
- }
|
|
+
|
|
+ debugfs_create_u32("uA_load", 0444, regulator->debugfs,
|
|
+			   &regulator->uA_load);
|
|
+ debugfs_create_u32("min_uV", 0444, regulator->debugfs,
|
|
+			   &regulator->voltage[PM_SUSPEND_ON].min_uV);
|
|
+ debugfs_create_u32("max_uV", 0444, regulator->debugfs,
|
|
+			   &regulator->voltage[PM_SUSPEND_ON].max_uV);
|
|
+ debugfs_create_file("constraint_flags", 0444, regulator->debugfs,
|
|
+ regulator, &constraint_flags_fops);
|
|
|
|
/*
|
|
* Check now if the regulator is an always on regulator - if
|
|
@@ -4906,10 +4904,8 @@ static void rdev_init_debugfs(struct regulator_dev *rdev)
|
|
}
|
|
|
|
rdev->debugfs = debugfs_create_dir(rname, debugfs_root);
|
|
- if (IS_ERR(rdev->debugfs)) {
|
|
- rdev_warn(rdev, "Failed to create debugfs directory\n");
|
|
- return;
|
|
- }
|
|
+ if (IS_ERR(rdev->debugfs))
|
|
+ rdev_dbg(rdev, "Failed to create debugfs directory\n");
|
|
|
|
debugfs_create_u32("use_count", 0444, rdev->debugfs,
|
|
&rdev->use_count);
|
|
@@ -5797,7 +5793,7 @@ static int __init regulator_init(void)
|
|
|
|
debugfs_root = debugfs_create_dir("regulator", NULL);
|
|
if (IS_ERR(debugfs_root))
|
|
- pr_warn("regulator: Failed to create debugfs directory\n");
|
|
+ pr_debug("regulator: Failed to create debugfs directory\n");
|
|
|
|
#ifdef CONFIG_DEBUG_FS
|
|
debugfs_create_file("supply_map", 0444, debugfs_root, NULL,
|
|
diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c
|
|
index 27261b020f8dd..2031d042c5e44 100644
|
|
--- a/drivers/rtc/rtc-st-lpc.c
|
|
+++ b/drivers/rtc/rtc-st-lpc.c
|
|
@@ -231,7 +231,7 @@ static int st_rtc_probe(struct platform_device *pdev)
|
|
enable_irq_wake(rtc->irq);
|
|
disable_irq(rtc->irq);
|
|
|
|
- rtc->clk = clk_get(&pdev->dev, NULL);
|
|
+ rtc->clk = devm_clk_get(&pdev->dev, NULL);
|
|
if (IS_ERR(rtc->clk)) {
|
|
dev_err(&pdev->dev, "Unable to request clock\n");
|
|
return PTR_ERR(rtc->clk);
|
|
diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
|
|
index 2b1e0d5030201..75290aabd543b 100644
|
|
--- a/drivers/scsi/3w-xxxx.c
|
|
+++ b/drivers/scsi/3w-xxxx.c
|
|
@@ -2310,8 +2310,10 @@ static int tw_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
|
|
TW_DISABLE_INTERRUPTS(tw_dev);
|
|
|
|
/* Initialize the card */
|
|
- if (tw_reset_sequence(tw_dev))
|
|
+ if (tw_reset_sequence(tw_dev)) {
|
|
+ retval = -EINVAL;
|
|
goto out_release_mem_region;
|
|
+ }
|
|
|
|
/* Set host specific parameters */
|
|
host->max_id = TW_MAX_UNITS;
|
|
diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
|
|
index f864ef059d29e..858058f228191 100644
|
|
--- a/drivers/scsi/qedf/qedf_main.c
|
|
+++ b/drivers/scsi/qedf/qedf_main.c
|
|
@@ -2914,9 +2914,8 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
|
|
* addresses of our queues
|
|
*/
|
|
if (!qedf->p_cpuq) {
|
|
- status = -EINVAL;
|
|
QEDF_ERR(&qedf->dbg_ctx, "p_cpuq is NULL.\n");
|
|
- goto mem_alloc_failure;
|
|
+ return -EINVAL;
|
|
}
|
|
|
|
qedf->global_queues = kzalloc((sizeof(struct global_queue *)
|
|
diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
|
|
index 59f5dc9876cc5..be3525d17fc92 100644
|
|
--- a/drivers/scsi/qla2xxx/qla_attr.c
|
|
+++ b/drivers/scsi/qla2xxx/qla_attr.c
|
|
@@ -2574,6 +2574,7 @@ static void
|
|
qla2x00_terminate_rport_io(struct fc_rport *rport)
|
|
{
|
|
fc_port_t *fcport = *(fc_port_t **)rport->dd_data;
|
|
+ scsi_qla_host_t *vha;
|
|
|
|
if (!fcport)
|
|
return;
|
|
@@ -2583,9 +2584,12 @@ qla2x00_terminate_rport_io(struct fc_rport *rport)
|
|
|
|
if (test_bit(ABORT_ISP_ACTIVE, &fcport->vha->dpc_flags))
|
|
return;
|
|
+ vha = fcport->vha;
|
|
|
|
if (unlikely(pci_channel_offline(fcport->vha->hw->pdev))) {
|
|
qla2x00_abort_all_cmds(fcport->vha, DID_NO_CONNECT << 16);
|
|
+ qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24,
|
|
+ 0, WAIT_TARGET);
|
|
return;
|
|
}
|
|
/*
|
|
@@ -2600,6 +2604,15 @@ qla2x00_terminate_rport_io(struct fc_rport *rport)
|
|
else
|
|
qla2x00_port_logout(fcport->vha, fcport);
|
|
}
|
|
+
|
|
+ /* check for any straggling io left behind */
|
|
+ if (qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24, 0, WAIT_TARGET)) {
|
|
+ ql_log(ql_log_warn, vha, 0x300b,
|
|
+ "IO not return. Resetting. \n");
|
|
+ set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
|
|
+ qla2xxx_wake_dpc(vha);
|
|
+ qla2x00_wait_for_chip_reset(vha);
|
|
+ }
|
|
}
|
|
|
|
static int
|
|
diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
|
|
index ce55121910e89..e584d39dc9821 100644
|
|
--- a/drivers/scsi/qla2xxx/qla_bsg.c
|
|
+++ b/drivers/scsi/qla2xxx/qla_bsg.c
|
|
@@ -259,6 +259,10 @@ qla2x00_process_els(struct bsg_job *bsg_job)
|
|
|
|
if (bsg_request->msgcode == FC_BSG_RPT_ELS) {
|
|
rport = fc_bsg_to_rport(bsg_job);
|
|
+ if (!rport) {
|
|
+ rval = -ENOMEM;
|
|
+ goto done;
|
|
+ }
|
|
fcport = *(fc_port_t **) rport->dd_data;
|
|
host = rport_to_shost(rport);
|
|
vha = shost_priv(host);
|
|
@@ -2526,6 +2530,8 @@ qla24xx_bsg_request(struct bsg_job *bsg_job)
|
|
|
|
if (bsg_request->msgcode == FC_BSG_RPT_ELS) {
|
|
rport = fc_bsg_to_rport(bsg_job);
|
|
+ if (!rport)
|
|
+ return ret;
|
|
host = rport_to_shost(rport);
|
|
vha = shost_priv(host);
|
|
} else {
|
|
diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
|
|
index a8272d4290754..2ef6277244f57 100644
|
|
--- a/drivers/scsi/qla2xxx/qla_def.h
|
|
+++ b/drivers/scsi/qla2xxx/qla_def.h
|
|
@@ -593,7 +593,6 @@ typedef struct srb {
|
|
uint8_t pad[3];
|
|
struct kref cmd_kref; /* need to migrate ref_count over to this */
|
|
void *priv;
|
|
- wait_queue_head_t nvme_ls_waitq;
|
|
struct fc_port *fcport;
|
|
struct scsi_qla_host *vha;
|
|
unsigned int start_timer:1;
|
|
diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
|
|
index 477b0b8a5f4bc..c54b987d1f11d 100644
|
|
--- a/drivers/scsi/qla2xxx/qla_inline.h
|
|
+++ b/drivers/scsi/qla2xxx/qla_inline.h
|
|
@@ -110,11 +110,13 @@ qla2x00_set_fcport_disc_state(fc_port_t *fcport, int state)
|
|
{
|
|
int old_val;
|
|
uint8_t shiftbits, mask;
|
|
+ uint8_t port_dstate_str_sz;
|
|
|
|
/* This will have to change when the max no. of states > 16 */
|
|
shiftbits = 4;
|
|
mask = (1 << shiftbits) - 1;
|
|
|
|
+ port_dstate_str_sz = sizeof(port_dstate_str) / sizeof(char *);
|
|
fcport->disc_state = state;
|
|
while (1) {
|
|
old_val = atomic_read(&fcport->shadow_disc_state);
|
|
@@ -122,7 +124,8 @@ qla2x00_set_fcport_disc_state(fc_port_t *fcport, int state)
|
|
old_val, (old_val << shiftbits) | state)) {
|
|
ql_dbg(ql_dbg_disc, fcport->vha, 0x2134,
|
|
"FCPort %8phC disc_state transition: %s to %s - portid=%06x.\n",
|
|
- fcport->port_name, port_dstate_str[old_val & mask],
|
|
+ fcport->port_name, (old_val & mask) < port_dstate_str_sz ?
|
|
+ port_dstate_str[old_val & mask] : "Unknown",
|
|
port_dstate_str[state], fcport->d_id.b24);
|
|
return;
|
|
}
|
|
diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
|
|
index 103288b0377e0..716f46a67bcda 100644
|
|
--- a/drivers/scsi/qla2xxx/qla_iocb.c
|
|
+++ b/drivers/scsi/qla2xxx/qla_iocb.c
|
|
@@ -601,7 +601,8 @@ qla24xx_build_scsi_type_6_iocbs(srb_t *sp, struct cmd_type_6 *cmd_pkt,
|
|
put_unaligned_le32(COMMAND_TYPE_6, &cmd_pkt->entry_type);
|
|
|
|
/* No data transfer */
|
|
- if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) {
|
|
+ if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE ||
|
|
+ tot_dsds == 0) {
|
|
cmd_pkt->byte_count = cpu_to_le32(0);
|
|
return 0;
|
|
}
|
|
@@ -3665,7 +3666,7 @@ qla2x00_start_sp(srb_t *sp)
|
|
spin_lock_irqsave(qp->qp_lock_ptr, flags);
|
|
pkt = __qla2x00_alloc_iocbs(sp->qpair, sp);
|
|
if (!pkt) {
|
|
- rval = EAGAIN;
|
|
+ rval = -EAGAIN;
|
|
ql_log(ql_log_warn, vha, 0x700c,
|
|
"qla2x00_alloc_iocbs failed.\n");
|
|
goto done;
|
|
diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
|
|
index ab9dcbd2006c2..b67480456f45c 100644
|
|
--- a/drivers/scsi/qla2xxx/qla_nvme.c
|
|
+++ b/drivers/scsi/qla2xxx/qla_nvme.c
|
|
@@ -318,7 +318,6 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
|
|
if (rval != QLA_SUCCESS) {
|
|
ql_log(ql_log_warn, vha, 0x700e,
|
|
"qla2x00_start_sp failed = %d\n", rval);
|
|
- wake_up(&sp->nvme_ls_waitq);
|
|
sp->priv = NULL;
|
|
priv->sp = NULL;
|
|
qla2x00_rel_sp(sp);
|
|
@@ -563,7 +562,6 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
|
|
if (!sp)
|
|
return -EBUSY;
|
|
|
|
- init_waitqueue_head(&sp->nvme_ls_waitq);
|
|
kref_init(&sp->cmd_kref);
|
|
spin_lock_init(&priv->cmd_lock);
|
|
sp->priv = (void *)priv;
|
|
@@ -581,7 +579,6 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
|
|
if (rval != QLA_SUCCESS) {
|
|
ql_log(ql_log_warn, vha, 0x212d,
|
|
"qla2x00_start_nvme_mq failed = %d\n", rval);
|
|
- wake_up(&sp->nvme_ls_waitq);
|
|
sp->priv = NULL;
|
|
priv->sp = NULL;
|
|
qla2xxx_rel_qpair_sp(sp->qpair, sp);
|
|
diff --git a/drivers/soc/fsl/qe/Kconfig b/drivers/soc/fsl/qe/Kconfig
|
|
index cfa4b2939992c..3ed0838607647 100644
|
|
--- a/drivers/soc/fsl/qe/Kconfig
|
|
+++ b/drivers/soc/fsl/qe/Kconfig
|
|
@@ -38,6 +38,7 @@ config QE_TDM
|
|
|
|
config QE_USB
|
|
bool
|
|
+ depends on QUICC_ENGINE
|
|
default y if USB_FSL_QE
|
|
help
|
|
QE USB Controller support
|
|
diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
|
|
index d933a6eda5fdc..118d9161a7886 100644
|
|
--- a/drivers/spi/spi-bcm-qspi.c
|
|
+++ b/drivers/spi/spi-bcm-qspi.c
|
|
@@ -1250,13 +1250,9 @@ int bcm_qspi_probe(struct platform_device *pdev,
|
|
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
|
|
"mspi");
|
|
|
|
- if (res) {
|
|
- qspi->base[MSPI] = devm_ioremap_resource(dev, res);
|
|
- if (IS_ERR(qspi->base[MSPI]))
|
|
- return PTR_ERR(qspi->base[MSPI]);
|
|
- } else {
|
|
- return 0;
|
|
- }
|
|
+ qspi->base[MSPI] = devm_ioremap_resource(dev, res);
|
|
+ if (IS_ERR(qspi->base[MSPI]))
|
|
+ return PTR_ERR(qspi->base[MSPI]);
|
|
|
|
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "bspi");
|
|
if (res) {
|
|
diff --git a/drivers/spi/spi-bcm63xx.c b/drivers/spi/spi-bcm63xx.c
|
|
index fdd7eaa0b8ede..ff27596168732 100644
|
|
--- a/drivers/spi/spi-bcm63xx.c
|
|
+++ b/drivers/spi/spi-bcm63xx.c
|
|
@@ -125,7 +125,7 @@ enum bcm63xx_regs_spi {
|
|
SPI_MSG_DATA_SIZE,
|
|
};
|
|
|
|
-#define BCM63XX_SPI_MAX_PREPEND 15
|
|
+#define BCM63XX_SPI_MAX_PREPEND 7
|
|
|
|
#define BCM63XX_SPI_MAX_CS 8
|
|
#define BCM63XX_SPI_BUS_NUM 0
|
|
diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
|
|
index 01b53d816497c..ae1cbc3215366 100644
|
|
--- a/drivers/spi/spi-geni-qcom.c
|
|
+++ b/drivers/spi/spi-geni-qcom.c
|
|
@@ -32,7 +32,7 @@
|
|
#define CS_DEMUX_OUTPUT_SEL GENMASK(3, 0)
|
|
|
|
#define SE_SPI_TRANS_CFG 0x25c
|
|
-#define CS_TOGGLE BIT(0)
|
|
+#define CS_TOGGLE BIT(1)
|
|
|
|
#define SE_SPI_WORD_LEN 0x268
|
|
#define WORD_LEN_MSK GENMASK(9, 0)
|
|
diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
|
|
index e50af46f8c198..04bf2b297fdc0 100644
|
|
--- a/drivers/tty/serial/8250/8250.h
|
|
+++ b/drivers/tty/serial/8250/8250.h
|
|
@@ -87,7 +87,6 @@ struct serial8250_config {
|
|
#define UART_BUG_TXEN (1 << 1) /* UART has buggy TX IIR status */
|
|
#define UART_BUG_NOMSR (1 << 2) /* UART has buggy MSR status bits (Au1x00) */
|
|
#define UART_BUG_THRE (1 << 3) /* UART has buggy THRE reassertion */
|
|
-#define UART_BUG_PARITY (1 << 4) /* UART mishandles parity if FIFO enabled */
|
|
|
|
|
|
#ifdef CONFIG_SERIAL_8250_SHARE_IRQ
|
|
diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
|
|
index 928b35b87dcf3..a2db055278a17 100644
|
|
--- a/drivers/tty/serial/8250/8250_omap.c
|
|
+++ b/drivers/tty/serial/8250/8250_omap.c
|
|
@@ -1314,25 +1314,35 @@ static int omap8250_suspend(struct device *dev)
|
|
{
|
|
struct omap8250_priv *priv = dev_get_drvdata(dev);
|
|
struct uart_8250_port *up = serial8250_get_port(priv->line);
|
|
+ int err;
|
|
|
|
serial8250_suspend_port(priv->line);
|
|
|
|
- pm_runtime_get_sync(dev);
|
|
+ err = pm_runtime_resume_and_get(dev);
|
|
+ if (err)
|
|
+ return err;
|
|
if (!device_may_wakeup(dev))
|
|
priv->wer = 0;
|
|
serial_out(up, UART_OMAP_WER, priv->wer);
|
|
- pm_runtime_mark_last_busy(dev);
|
|
- pm_runtime_put_autosuspend(dev);
|
|
-
|
|
+ err = pm_runtime_force_suspend(dev);
|
|
flush_work(&priv->qos_work);
|
|
- return 0;
|
|
+
|
|
+ return err;
|
|
}
|
|
|
|
static int omap8250_resume(struct device *dev)
|
|
{
|
|
struct omap8250_priv *priv = dev_get_drvdata(dev);
|
|
+ int err;
|
|
|
|
+ err = pm_runtime_force_resume(dev);
|
|
+ if (err)
|
|
+ return err;
|
|
serial8250_resume_port(priv->line);
|
|
+ /* Paired with pm_runtime_resume_and_get() in omap8250_suspend() */
|
|
+ pm_runtime_mark_last_busy(dev);
|
|
+ pm_runtime_put_autosuspend(dev);
|
|
+
|
|
return 0;
|
|
}
|
|
#else
|
|
diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
|
|
index 4a3991ac2dd06..5c7a2145b9454 100644
|
|
--- a/drivers/tty/serial/8250/8250_pci.c
|
|
+++ b/drivers/tty/serial/8250/8250_pci.c
|
|
@@ -1068,14 +1068,6 @@ static int pci_oxsemi_tornado_init(struct pci_dev *dev)
|
|
return number_uarts;
|
|
}
|
|
|
|
-static int pci_asix_setup(struct serial_private *priv,
|
|
- const struct pciserial_board *board,
|
|
- struct uart_8250_port *port, int idx)
|
|
-{
|
|
- port->bugs |= UART_BUG_PARITY;
|
|
- return pci_default_setup(priv, board, port, idx);
|
|
-}
|
|
-
|
|
/* Quatech devices have their own extra interface features */
|
|
|
|
struct quatech_feature {
|
|
@@ -1872,7 +1864,6 @@ pci_moxa_setup(struct serial_private *priv,
|
|
#define PCI_DEVICE_ID_WCH_CH355_4S 0x7173
|
|
#define PCI_VENDOR_ID_AGESTAR 0x5372
|
|
#define PCI_DEVICE_ID_AGESTAR_9375 0x6872
|
|
-#define PCI_VENDOR_ID_ASIX 0x9710
|
|
#define PCI_DEVICE_ID_BROADCOM_TRUMANAGE 0x160a
|
|
#define PCI_DEVICE_ID_AMCC_ADDIDATA_APCI7800 0x818e
|
|
|
|
@@ -2671,16 +2662,6 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
|
|
.subdevice = PCI_ANY_ID,
|
|
.setup = pci_wch_ch38x_setup,
|
|
},
|
|
- /*
|
|
- * ASIX devices with FIFO bug
|
|
- */
|
|
- {
|
|
- .vendor = PCI_VENDOR_ID_ASIX,
|
|
- .device = PCI_ANY_ID,
|
|
- .subvendor = PCI_ANY_ID,
|
|
- .subdevice = PCI_ANY_ID,
|
|
- .setup = pci_asix_setup,
|
|
- },
|
|
/*
|
|
* Broadcom TruManage (NetXtreme)
|
|
*/
|
|
diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
|
|
index 96ae6c6031d4d..f49f3b017206c 100644
|
|
--- a/drivers/tty/serial/8250/8250_port.c
|
|
+++ b/drivers/tty/serial/8250/8250_port.c
|
|
@@ -2535,11 +2535,8 @@ static unsigned char serial8250_compute_lcr(struct uart_8250_port *up,
|
|
|
|
if (c_cflag & CSTOPB)
|
|
cval |= UART_LCR_STOP;
|
|
- if (c_cflag & PARENB) {
|
|
+ if (c_cflag & PARENB)
|
|
cval |= UART_LCR_PARITY;
|
|
- if (up->bugs & UART_BUG_PARITY)
|
|
- up->fifo_bug = true;
|
|
- }
|
|
if (!(c_cflag & PARODD))
|
|
cval |= UART_LCR_EPAR;
|
|
#ifdef CMSPAR
|
|
@@ -2646,8 +2643,7 @@ serial8250_do_set_termios(struct uart_port *port, struct ktermios *termios,
|
|
up->lcr = cval; /* Save computed LCR */
|
|
|
|
if (up->capabilities & UART_CAP_FIFO && port->fifosize > 1) {
|
|
- /* NOTE: If fifo_bug is not set, a user can set RX_trigger. */
|
|
- if ((baud < 2400 && !up->dma) || up->fifo_bug) {
|
|
+ if (baud < 2400 && !up->dma) {
|
|
up->fcr &= ~UART_FCR_TRIGGER_MASK;
|
|
up->fcr |= UART_FCR_TRIGGER_1;
|
|
}
|
|
@@ -2983,8 +2979,7 @@ static int do_set_rxtrig(struct tty_port *port, unsigned char bytes)
|
|
struct uart_8250_port *up = up_to_u8250p(uport);
|
|
int rxtrig;
|
|
|
|
- if (!(up->capabilities & UART_CAP_FIFO) || uport->fifosize <= 1 ||
|
|
- up->fifo_bug)
|
|
+ if (!(up->capabilities & UART_CAP_FIFO) || uport->fifosize <= 1)
|
|
return -EINVAL;
|
|
|
|
rxtrig = bytes_to_fcr_rxtrig(up, bytes);
|
|
diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
|
|
index 6c0628c58efd9..8d0838c904c8a 100644
|
|
--- a/drivers/tty/serial/atmel_serial.c
|
|
+++ b/drivers/tty/serial/atmel_serial.c
|
|
@@ -884,11 +884,11 @@ static void atmel_complete_tx_dma(void *arg)
|
|
|
|
port->icount.tx += atmel_port->tx_len;
|
|
|
|
- spin_lock_irq(&atmel_port->lock_tx);
|
|
+ spin_lock(&atmel_port->lock_tx);
|
|
async_tx_ack(atmel_port->desc_tx);
|
|
atmel_port->cookie_tx = -EINVAL;
|
|
atmel_port->desc_tx = NULL;
|
|
- spin_unlock_irq(&atmel_port->lock_tx);
|
|
+ spin_unlock(&atmel_port->lock_tx);
|
|
|
|
if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
|
|
uart_write_wakeup(port);
|
|
diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
|
|
index 1d0124126eb22..88c8357969220 100644
|
|
--- a/drivers/tty/serial/fsl_lpuart.c
|
|
+++ b/drivers/tty/serial/fsl_lpuart.c
|
|
@@ -2409,6 +2409,7 @@ static int __init lpuart32_imx_early_console_setup(struct earlycon_device *devic
|
|
OF_EARLYCON_DECLARE(lpuart, "fsl,vf610-lpuart", lpuart_early_console_setup);
|
|
OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1021a-lpuart", lpuart32_early_console_setup);
|
|
OF_EARLYCON_DECLARE(lpuart32, "fsl,imx7ulp-lpuart", lpuart32_imx_early_console_setup);
|
|
+OF_EARLYCON_DECLARE(lpuart32, "fsl,imx8ulp-lpuart", lpuart32_imx_early_console_setup);
|
|
OF_EARLYCON_DECLARE(lpuart32, "fsl,imx8qxp-lpuart", lpuart32_imx_early_console_setup);
|
|
EARLYCON_DECLARE(lpuart, lpuart_early_console_setup);
|
|
EARLYCON_DECLARE(lpuart32, lpuart32_early_console_setup);
|
|
diff --git a/drivers/tty/serial/samsung.c b/drivers/tty/serial/samsung.c
|
|
index 1df74bad10630..24f9bd9101662 100644
|
|
--- a/drivers/tty/serial/samsung.c
|
|
+++ b/drivers/tty/serial/samsung.c
|
|
@@ -1199,8 +1199,12 @@ static unsigned int s3c24xx_serial_getclk(struct s3c24xx_uart_port *ourport,
|
|
continue;
|
|
|
|
rate = clk_get_rate(clk);
|
|
- if (!rate)
|
|
+ if (!rate) {
|
|
+ dev_err(ourport->port.dev,
|
|
+ "Failed to get clock rate for %s.\n", clkname);
|
|
+ clk_put(clk);
|
|
continue;
|
|
+ }
|
|
|
|
if (ourport->info->has_divslot) {
|
|
unsigned long div = rate / req_baud;
|
|
@@ -1226,10 +1230,18 @@ static unsigned int s3c24xx_serial_getclk(struct s3c24xx_uart_port *ourport,
|
|
calc_deviation = -calc_deviation;
|
|
|
|
if (calc_deviation < deviation) {
|
|
+ /*
|
|
+ * If we find a better clk, release the previous one, if
|
|
+ * any.
|
|
+ */
|
|
+ if (!IS_ERR(*best_clk))
|
|
+ clk_put(*best_clk);
|
|
*best_clk = clk;
|
|
best_quot = quot;
|
|
*clk_num = cnt;
|
|
deviation = calc_deviation;
|
|
+ } else {
|
|
+ clk_put(clk);
|
|
}
|
|
}
|
|
|
|
diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
|
|
index 44922e6381da6..087ab22488552 100644
|
|
--- a/drivers/usb/core/devio.c
|
|
+++ b/drivers/usb/core/devio.c
|
|
@@ -734,6 +734,7 @@ static int driver_resume(struct usb_interface *intf)
|
|
return 0;
|
|
}
|
|
|
|
+#ifdef CONFIG_PM
|
|
/* The following routines apply to the entire device, not interfaces */
|
|
void usbfs_notify_suspend(struct usb_device *udev)
|
|
{
|
|
@@ -752,6 +753,7 @@ void usbfs_notify_resume(struct usb_device *udev)
|
|
}
|
|
mutex_unlock(&usbfs_mutex);
|
|
}
|
|
+#endif
|
|
|
|
struct usb_driver usbfs_driver = {
|
|
.name = "usbfs",
|
|
diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
|
|
index 2dcdeb52fc293..2d7cfa8825aa8 100644
|
|
--- a/drivers/usb/dwc3/dwc3-qcom.c
|
|
+++ b/drivers/usb/dwc3/dwc3-qcom.c
|
|
@@ -574,6 +574,7 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
|
|
struct device *dev = &pdev->dev;
|
|
struct dwc3_qcom *qcom;
|
|
struct resource *res, *parent_res = NULL;
|
|
+ struct resource local_res;
|
|
int ret, i;
|
|
bool ignore_pipe_clk;
|
|
|
|
@@ -624,9 +625,8 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
|
|
if (np) {
|
|
parent_res = res;
|
|
} else {
|
|
- parent_res = kmemdup(res, sizeof(struct resource), GFP_KERNEL);
|
|
- if (!parent_res)
|
|
- return -ENOMEM;
|
|
+ memcpy(&local_res, res, sizeof(struct resource));
|
|
+ parent_res = &local_res;
|
|
|
|
parent_res->start = res->start +
|
|
qcom->acpi_pdata->qscratch_base_offset;
|
|
@@ -704,10 +704,14 @@ reset_assert:
|
|
static int dwc3_qcom_remove(struct platform_device *pdev)
|
|
{
|
|
struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
|
|
+ struct device_node *np = pdev->dev.of_node;
|
|
struct device *dev = &pdev->dev;
|
|
int i;
|
|
|
|
- of_platform_depopulate(dev);
|
|
+ if (np)
|
|
+ of_platform_depopulate(&pdev->dev);
|
|
+ else
|
|
+ platform_device_put(pdev);
|
|
|
|
for (i = qcom->num_clocks - 1; i >= 0; i--) {
|
|
clk_disable_unprepare(qcom->clks[i]);
|
|
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
|
|
index 12d8a50d24ba9..bda9fd7157e47 100644
|
|
--- a/drivers/usb/dwc3/gadget.c
|
|
+++ b/drivers/usb/dwc3/gadget.c
|
|
@@ -2077,7 +2077,9 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
|
|
ret = pm_runtime_get_sync(dwc->dev);
|
|
if (!ret || ret < 0) {
|
|
pm_runtime_put(dwc->dev);
|
|
- return 0;
|
|
+ if (ret < 0)
|
|
+ pm_runtime_set_suspended(dwc->dev);
|
|
+ return ret;
|
|
}
|
|
|
|
if (dwc->pullups_connected == is_on) {
|
|
diff --git a/drivers/usb/phy/phy-tahvo.c b/drivers/usb/phy/phy-tahvo.c
|
|
index a3e043e3e4aae..d0672b6712985 100644
|
|
--- a/drivers/usb/phy/phy-tahvo.c
|
|
+++ b/drivers/usb/phy/phy-tahvo.c
|
|
@@ -395,7 +395,7 @@ static int tahvo_usb_probe(struct platform_device *pdev)
|
|
|
|
tu->irq = ret = platform_get_irq(pdev, 0);
|
|
if (ret < 0)
|
|
- return ret;
|
|
+ goto err_remove_phy;
|
|
ret = request_threaded_irq(tu->irq, NULL, tahvo_usb_vbus_interrupt,
|
|
IRQF_ONESHOT,
|
|
"tahvo-vbus", tu);
|
|
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
|
|
index 7d9f9c0cb2eab..939bcbb5404fb 100644
|
|
--- a/drivers/usb/serial/option.c
|
|
+++ b/drivers/usb/serial/option.c
|
|
@@ -1151,6 +1151,10 @@ static const struct usb_device_id option_ids[] = {
|
|
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x90fa),
|
|
.driver_info = RSVD(3) },
|
|
/* u-blox products */
|
|
+ { USB_DEVICE(UBLOX_VENDOR_ID, 0x1311) }, /* u-blox LARA-R6 01B */
|
|
+ { USB_DEVICE(UBLOX_VENDOR_ID, 0x1312), /* u-blox LARA-R6 01B (RMNET) */
|
|
+ .driver_info = RSVD(4) },
|
|
+ { USB_DEVICE_INTERFACE_CLASS(UBLOX_VENDOR_ID, 0x1313, 0xff) }, /* u-blox LARA-R6 01B (ECM) */
|
|
{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1341) }, /* u-blox LARA-L6 */
|
|
{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1342), /* u-blox LARA-L6 (RMNET) */
|
|
.driver_info = RSVD(4) },
|
|
diff --git a/drivers/video/fbdev/au1200fb.c b/drivers/video/fbdev/au1200fb.c
|
|
index 43a4dddaafd52..d0335d4d5ab54 100644
|
|
--- a/drivers/video/fbdev/au1200fb.c
|
|
+++ b/drivers/video/fbdev/au1200fb.c
|
|
@@ -1732,6 +1732,9 @@ static int au1200fb_drv_probe(struct platform_device *dev)
|
|
|
|
/* Now hook interrupt too */
|
|
irq = platform_get_irq(dev, 0);
|
|
+ if (irq < 0)
|
|
+ return irq;
|
|
+
|
|
ret = request_irq(irq, au1200fb_handle_irq,
|
|
IRQF_SHARED, "lcd", (void *)dev);
|
|
if (ret) {
|
|
diff --git a/drivers/video/fbdev/imsttfb.c b/drivers/video/fbdev/imsttfb.c
|
|
index 5f610d4929ddb..9670e5b5fe326 100644
|
|
--- a/drivers/video/fbdev/imsttfb.c
|
|
+++ b/drivers/video/fbdev/imsttfb.c
|
|
@@ -1346,7 +1346,7 @@ static struct fb_ops imsttfb_ops = {
|
|
.fb_ioctl = imsttfb_ioctl,
|
|
};
|
|
|
|
-static void init_imstt(struct fb_info *info)
|
|
+static int init_imstt(struct fb_info *info)
|
|
{
|
|
struct imstt_par *par = info->par;
|
|
__u32 i, tmp, *ip, *end;
|
|
@@ -1419,7 +1419,7 @@ static void init_imstt(struct fb_info *info)
|
|
|| !(compute_imstt_regvals(par, info->var.xres, info->var.yres))) {
|
|
printk("imsttfb: %ux%ux%u not supported\n", info->var.xres, info->var.yres, info->var.bits_per_pixel);
|
|
framebuffer_release(info);
|
|
- return;
|
|
+ return -ENODEV;
|
|
}
|
|
|
|
sprintf(info->fix.id, "IMS TT (%s)", par->ramdac == IBM ? "IBM" : "TVP");
|
|
@@ -1455,12 +1455,13 @@ static void init_imstt(struct fb_info *info)
|
|
|
|
if (register_framebuffer(info) < 0) {
|
|
framebuffer_release(info);
|
|
- return;
|
|
+ return -ENODEV;
|
|
}
|
|
|
|
tmp = (read_reg_le32(par->dc_regs, SSTATUS) & 0x0f00) >> 8;
|
|
fb_info(info, "%s frame buffer; %uMB vram; chip version %u\n",
|
|
info->fix.id, info->fix.smem_len >> 20, tmp);
|
|
+ return 0;
|
|
}
|
|
|
|
static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
|
|
@@ -1469,6 +1470,7 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
|
|
struct imstt_par *par;
|
|
struct fb_info *info;
|
|
struct device_node *dp;
|
|
+ int ret = -ENOMEM;
|
|
|
|
dp = pci_device_to_OF_node(pdev);
|
|
if(dp)
|
|
@@ -1504,23 +1506,37 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
|
|
default:
|
|
printk(KERN_INFO "imsttfb: Device 0x%x unknown, "
|
|
"contact maintainer.\n", pdev->device);
|
|
- release_mem_region(addr, size);
|
|
- framebuffer_release(info);
|
|
- return -ENODEV;
|
|
+ ret = -ENODEV;
|
|
+ goto error;
|
|
}
|
|
|
|
info->fix.smem_start = addr;
|
|
info->screen_base = (__u8 *)ioremap(addr, par->ramdac == IBM ?
|
|
0x400000 : 0x800000);
|
|
+ if (!info->screen_base)
|
|
+ goto error;
|
|
info->fix.mmio_start = addr + 0x800000;
|
|
par->dc_regs = ioremap(addr + 0x800000, 0x1000);
|
|
+ if (!par->dc_regs)
|
|
+ goto error;
|
|
par->cmap_regs_phys = addr + 0x840000;
|
|
par->cmap_regs = (__u8 *)ioremap(addr + 0x840000, 0x1000);
|
|
+ if (!par->cmap_regs)
|
|
+ goto error;
|
|
info->pseudo_palette = par->palette;
|
|
- init_imstt(info);
|
|
-
|
|
- pci_set_drvdata(pdev, info);
|
|
- return 0;
|
|
+ ret = init_imstt(info);
|
|
+ if (!ret)
|
|
+ pci_set_drvdata(pdev, info);
|
|
+ return ret;
|
|
+
|
|
+error:
|
|
+ if (par->dc_regs)
|
|
+ iounmap(par->dc_regs);
|
|
+ if (info->screen_base)
|
|
+ iounmap(info->screen_base);
|
|
+ release_mem_region(addr, size);
|
|
+ framebuffer_release(info);
|
|
+ return ret;
|
|
}
|
|
|
|
static void imsttfb_remove(struct pci_dev *pdev)
|
|
diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
|
|
index ffde3107104bc..dbc8808b093a5 100644
|
|
--- a/drivers/video/fbdev/imxfb.c
|
|
+++ b/drivers/video/fbdev/imxfb.c
|
|
@@ -601,10 +601,10 @@ static int imxfb_activate_var(struct fb_var_screeninfo *var, struct fb_info *inf
|
|
if (var->hsync_len < 1 || var->hsync_len > 64)
|
|
printk(KERN_ERR "%s: invalid hsync_len %d\n",
|
|
info->fix.id, var->hsync_len);
|
|
- if (var->left_margin > 255)
|
|
+ if (var->left_margin < 3 || var->left_margin > 255)
|
|
printk(KERN_ERR "%s: invalid left_margin %d\n",
|
|
info->fix.id, var->left_margin);
|
|
- if (var->right_margin > 255)
|
|
+ if (var->right_margin < 1 || var->right_margin > 255)
|
|
printk(KERN_ERR "%s: invalid right_margin %d\n",
|
|
info->fix.id, var->right_margin);
|
|
if (var->yres < 1 || var->yres > ymax_mask)
|
|
diff --git a/drivers/video/fbdev/omap/lcd_mipid.c b/drivers/video/fbdev/omap/lcd_mipid.c
|
|
index a75ae0c9b14c7..d1cd8785d011d 100644
|
|
--- a/drivers/video/fbdev/omap/lcd_mipid.c
|
|
+++ b/drivers/video/fbdev/omap/lcd_mipid.c
|
|
@@ -563,11 +563,15 @@ static int mipid_spi_probe(struct spi_device *spi)
|
|
|
|
r = mipid_detect(md);
|
|
if (r < 0)
|
|
- return r;
|
|
+ goto free_md;
|
|
|
|
omapfb_register_panel(&md->panel);
|
|
|
|
return 0;
|
|
+
|
|
+free_md:
|
|
+ kfree(md);
|
|
+ return r;
|
|
}
|
|
|
|
static int mipid_spi_remove(struct spi_device *spi)
|
|
diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c
|
|
index 2a7970a10533e..e08f40c9d54c9 100644
|
|
--- a/drivers/w1/w1.c
|
|
+++ b/drivers/w1/w1.c
|
|
@@ -1228,10 +1228,10 @@ err_out_exit_init:
|
|
|
|
static void __exit w1_fini(void)
|
|
{
|
|
- struct w1_master *dev;
|
|
+ struct w1_master *dev, *n;
|
|
|
|
/* Set netlink removal messages and some cleanup */
|
|
- list_for_each_entry(dev, &w1_masters, w1_master_entry)
|
|
+ list_for_each_entry_safe(dev, n, &w1_masters, w1_master_entry)
|
|
__w1_remove_master_device(dev);
|
|
|
|
w1_fini_netlink();
|
|
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
|
|
index 353e89efdebf8..db8f83ab55f63 100644
|
|
--- a/fs/btrfs/qgroup.c
|
|
+++ b/fs/btrfs/qgroup.c
|
|
@@ -1189,7 +1189,9 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
|
|
goto out;
|
|
}
|
|
|
|
+ spin_lock(&fs_info->trans_lock);
|
|
list_del("a_root->dirty_list);
|
|
+ spin_unlock(&fs_info->trans_lock);
|
|
|
|
btrfs_tree_lock(quota_root->node);
|
|
btrfs_clean_tree_block(quota_root->node);
|
|
@@ -4283,4 +4285,5 @@ void btrfs_qgroup_destroy_extent_records(struct btrfs_transaction *trans)
|
|
ulist_free(entry->old_roots);
|
|
kfree(entry);
|
|
}
|
|
+ *root = RB_ROOT;
|
|
}
|
|
diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
|
|
index 7def75d5b00cb..243e246cb5046 100644
|
|
--- a/fs/ceph/caps.c
|
|
+++ b/fs/ceph/caps.c
|
|
@@ -3340,6 +3340,15 @@ static void handle_cap_grant(struct inode *inode,
|
|
}
|
|
BUG_ON(cap->issued & ~cap->implemented);
|
|
|
|
+ /* don't let check_caps skip sending a response to MDS for revoke msgs */
|
|
+ if (le32_to_cpu(grant->op) == CEPH_CAP_OP_REVOKE) {
|
|
+ cap->mds_wanted = 0;
|
|
+ if (cap == ci->i_auth_cap)
|
|
+ check_caps = 1; /* check auth cap only */
|
|
+ else
|
|
+ check_caps = 2; /* check all caps */
|
|
+ }
|
|
+
|
|
if (extra_info->inline_version > 0 &&
|
|
extra_info->inline_version >= ci->i_inline_version) {
|
|
ci->i_inline_version = extra_info->inline_version;
|
|
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
|
|
index a10d2bcfe75a8..edce0b25cd90e 100644
|
|
--- a/fs/dlm/plock.c
|
|
+++ b/fs/dlm/plock.c
|
|
@@ -363,7 +363,9 @@ int dlm_posix_get(dlm_lockspace_t *lockspace, u64 number, struct file *file,
|
|
locks_init_lock(fl);
|
|
fl->fl_type = (op->info.ex) ? F_WRLCK : F_RDLCK;
|
|
fl->fl_flags = FL_POSIX;
|
|
- fl->fl_pid = -op->info.pid;
|
|
+ fl->fl_pid = op->info.pid;
|
|
+ if (op->info.nodeid != dlm_our_nodeid())
|
|
+ fl->fl_pid = -fl->fl_pid;
|
|
fl->fl_start = op->info.start;
|
|
fl->fl_end = op->info.end;
|
|
rv = 0;
|
|
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
|
|
index fdd18c2508115..dcc377094f90b 100644
|
|
--- a/fs/erofs/zdata.c
|
|
+++ b/fs/erofs/zdata.c
|
|
@@ -636,7 +636,7 @@ hitted:
|
|
tight &= (clt->mode >= COLLECT_PRIMARY_HOOKED &&
|
|
clt->mode != COLLECT_PRIMARY_FOLLOWED_NOINPLACE);
|
|
|
|
- cur = end - min_t(unsigned int, offset + end - map->m_la, end);
|
|
+ cur = end - min_t(erofs_off_t, offset + end - map->m_la, end);
|
|
if (!(map->m_flags & EROFS_MAP_MAPPED)) {
|
|
zero_user_segment(page, cur, end);
|
|
goto next_part;
|
|
diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
|
|
index b5ee58fdd82f3..6553f58fb2898 100644
|
|
--- a/fs/erofs/zmap.c
|
|
+++ b/fs/erofs/zmap.c
|
|
@@ -215,7 +215,7 @@ static int unpack_compacted_index(struct z_erofs_maprecorder *m,
|
|
int i;
|
|
u8 *in, type;
|
|
|
|
- if (1 << amortizedshift == 4)
|
|
+ if (1 << amortizedshift == 4 && lclusterbits <= 14)
|
|
vcnt = 2;
|
|
else if (1 << amortizedshift == 2 && lclusterbits == 12)
|
|
vcnt = 16;
|
|
@@ -273,7 +273,6 @@ static int compacted_load_cluster_from_disk(struct z_erofs_maprecorder *m,
|
|
{
|
|
struct inode *const inode = m->inode;
|
|
struct erofs_inode *const vi = EROFS_I(inode);
|
|
- const unsigned int lclusterbits = vi->z_logical_clusterbits;
|
|
const erofs_off_t ebase = ALIGN(iloc(EROFS_I_SB(inode), vi->nid) +
|
|
vi->inode_isize + vi->xattr_isize, 8) +
|
|
sizeof(struct z_erofs_map_header);
|
|
@@ -283,9 +282,6 @@ static int compacted_load_cluster_from_disk(struct z_erofs_maprecorder *m,
|
|
erofs_off_t pos;
|
|
int err;
|
|
|
|
- if (lclusterbits != 12)
|
|
- return -EOPNOTSUPP;
|
|
-
|
|
if (lcn >= totalidx)
|
|
return -EINVAL;
|
|
|
|
diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
|
|
index a131d2781342b..25532bac77771 100644
|
|
--- a/fs/ext4/indirect.c
|
|
+++ b/fs/ext4/indirect.c
|
|
@@ -636,6 +636,14 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
|
|
|
|
ext4_update_inode_fsync_trans(handle, inode, 1);
|
|
count = ar.len;
|
|
+
|
|
+ /*
|
|
+ * Update reserved blocks/metadata blocks after successful block
|
|
+ * allocation which had been deferred till now.
|
|
+ */
|
|
+ if (flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE)
|
|
+ ext4_da_update_reserve_space(inode, count, 1);
|
|
+
|
|
got_it:
|
|
map->m_flags |= EXT4_MAP_MAPPED;
|
|
map->m_pblk = le32_to_cpu(chain[depth-1].key);
|
|
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
|
|
index 51c67418ed020..8a0bca3b653bc 100644
|
|
--- a/fs/ext4/inode.c
|
|
+++ b/fs/ext4/inode.c
|
|
@@ -669,16 +669,6 @@ found:
|
|
*/
|
|
ext4_clear_inode_state(inode, EXT4_STATE_EXT_MIGRATE);
|
|
}
|
|
-
|
|
- /*
|
|
- * Update reserved blocks/metadata blocks after successful
|
|
- * block allocation which had been deferred till now. We don't
|
|
- * support fallocate for non extent files. So we can update
|
|
- * reserve space here.
|
|
- */
|
|
- if ((retval > 0) &&
|
|
- (flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE))
|
|
- ext4_da_update_reserve_space(inode, retval, 1);
|
|
}
|
|
|
|
if (retval > 0) {
|
|
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
|
|
index 92c37fbbabc15..be5c2e53b636e 100644
|
|
--- a/fs/ext4/mballoc.c
|
|
+++ b/fs/ext4/mballoc.c
|
|
@@ -4950,8 +4950,8 @@ do_more:
|
|
* them with group lock_held
|
|
*/
|
|
if (test_opt(sb, DISCARD)) {
|
|
- err = ext4_issue_discard(sb, block_group, bit, count,
|
|
- NULL);
|
|
+ err = ext4_issue_discard(sb, block_group, bit,
|
|
+ count_clusters, NULL);
|
|
if (err && err != -EOPNOTSUPP)
|
|
ext4_msg(sb, KERN_WARNING, "discard request in"
|
|
" group:%d block:%d count:%lu failed"
|
|
diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
|
|
index d3804975e82bb..3da931a5c9556 100644
|
|
--- a/fs/ext4/namei.c
|
|
+++ b/fs/ext4/namei.c
|
|
@@ -3795,19 +3795,10 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|
return retval;
|
|
}
|
|
|
|
- /*
|
|
- * We need to protect against old.inode directory getting converted
|
|
- * from inline directory format into a normal one.
|
|
- */
|
|
- if (S_ISDIR(old.inode->i_mode))
|
|
- inode_lock_nested(old.inode, I_MUTEX_NONDIR2);
|
|
-
|
|
old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de,
|
|
&old.inlined);
|
|
- if (IS_ERR(old.bh)) {
|
|
- retval = PTR_ERR(old.bh);
|
|
- goto unlock_moved_dir;
|
|
- }
|
|
+ if (IS_ERR(old.bh))
|
|
+ return PTR_ERR(old.bh);
|
|
|
|
/*
|
|
* Check for inode number is _not_ due to possible IO errors.
|
|
@@ -3968,10 +3959,6 @@ release_bh:
|
|
brelse(old.bh);
|
|
brelse(new.bh);
|
|
|
|
-unlock_moved_dir:
|
|
- if (S_ISDIR(old.inode->i_mode))
|
|
- inode_unlock(old.inode);
|
|
-
|
|
return retval;
|
|
}
|
|
|
|
diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
|
|
index 79d13811c5be2..cd371ac25f840 100644
|
|
--- a/fs/ext4/xattr.c
|
|
+++ b/fs/ext4/xattr.c
|
|
@@ -1742,6 +1742,20 @@ static int ext4_xattr_set_entry(struct ext4_xattr_info *i,
|
|
memmove(here, (void *)here + size,
|
|
(void *)last - (void *)here + sizeof(__u32));
|
|
memset(last, 0, size);
|
|
+
|
|
+ /*
|
|
+ * Update i_inline_off - moved ibody region might contain
|
|
+ * system.data attribute. Handling a failure here won't
|
|
+ * cause other complications for setting an xattr.
|
|
+ */
|
|
+ if (!is_block && ext4_has_inline_data(inode)) {
|
|
+ ret = ext4_find_inline_data_nolock(inode);
|
|
+ if (ret) {
|
|
+ ext4_warning_inode(inode,
|
|
+ "unable to update i_inline_off");
|
|
+ goto out;
|
|
+ }
|
|
+ }
|
|
} else if (s->not_found) {
|
|
/* Insert new name. */
|
|
size_t size = EXT4_XATTR_LEN(name_len);
|
|
diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
|
|
index 9cb2a87247b21..ed95c27e93026 100644
|
|
--- a/fs/f2fs/namei.c
|
|
+++ b/fs/f2fs/namei.c
|
|
@@ -892,20 +892,12 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|
goto out;
|
|
}
|
|
|
|
- /*
|
|
- * Copied from ext4_rename: we need to protect against old.inode
|
|
- * directory getting converted from inline directory format into
|
|
- * a normal one.
|
|
- */
|
|
- if (S_ISDIR(old_inode->i_mode))
|
|
- inode_lock_nested(old_inode, I_MUTEX_NONDIR2);
|
|
-
|
|
err = -ENOENT;
|
|
old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_page);
|
|
if (!old_entry) {
|
|
if (IS_ERR(old_page))
|
|
err = PTR_ERR(old_page);
|
|
- goto out_unlock_old;
|
|
+ goto out;
|
|
}
|
|
|
|
if (S_ISDIR(old_inode->i_mode)) {
|
|
@@ -1033,9 +1025,6 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|
|
|
f2fs_unlock_op(sbi);
|
|
|
|
- if (S_ISDIR(old_inode->i_mode))
|
|
- inode_unlock(old_inode);
|
|
-
|
|
if (IS_DIRSYNC(old_dir) || IS_DIRSYNC(new_dir))
|
|
f2fs_sync_fs(sbi->sb, 1);
|
|
|
|
@@ -1051,9 +1040,6 @@ out_dir:
|
|
f2fs_put_page(old_dir_page, 0);
|
|
out_old:
|
|
f2fs_put_page(old_page, 0);
|
|
-out_unlock_old:
|
|
- if (S_ISDIR(old_inode->i_mode))
|
|
- inode_unlock(old_inode);
|
|
out:
|
|
if (whiteout)
|
|
iput(whiteout);
|
|
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
|
|
index b080d5c58f6cb..8256a2dedae8c 100644
|
|
--- a/fs/f2fs/node.c
|
|
+++ b/fs/f2fs/node.c
|
|
@@ -889,8 +889,10 @@ static int truncate_dnode(struct dnode_of_data *dn)
|
|
dn->ofs_in_node = 0;
|
|
f2fs_truncate_data_blocks(dn);
|
|
err = truncate_node(dn);
|
|
- if (err)
|
|
+ if (err) {
|
|
+ f2fs_put_page(page, 1);
|
|
return err;
|
|
+ }
|
|
|
|
return 1;
|
|
}
|
|
diff --git a/fs/fs_context.c b/fs/fs_context.c
|
|
index e492a83fa100e..412712eb59ee1 100644
|
|
--- a/fs/fs_context.c
|
|
+++ b/fs/fs_context.c
|
|
@@ -598,7 +598,8 @@ static int legacy_parse_param(struct fs_context *fc, struct fs_parameter *param)
|
|
return -ENOMEM;
|
|
}
|
|
|
|
- ctx->legacy_data[size++] = ',';
|
|
+ if (size)
|
|
+ ctx->legacy_data[size++] = ',';
|
|
len = strlen(param->key);
|
|
memcpy(ctx->legacy_data + size, param->key, len);
|
|
size += len;
|
|
diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
|
|
index 34487bf1d7914..b2f37809fa9bd 100644
|
|
--- a/fs/fuse/dir.c
|
|
+++ b/fs/fuse/dir.c
|
|
@@ -246,7 +246,7 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags)
|
|
spin_unlock(&fi->lock);
|
|
}
|
|
kfree(forget);
|
|
- if (ret == -ENOMEM)
|
|
+ if (ret == -ENOMEM || ret == -EINTR)
|
|
goto out;
|
|
if (ret || fuse_invalid_attr(&outarg.attr) ||
|
|
(outarg.attr.mode ^ inode->i_mode) & S_IFMT)
|
|
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
|
|
index 9c593fd50c6a5..baf0a70460c03 100644
|
|
--- a/fs/gfs2/super.c
|
|
+++ b/fs/gfs2/super.c
|
|
@@ -1258,6 +1258,14 @@ static void gfs2_evict_inode(struct inode *inode)
|
|
if (inode->i_nlink || sb_rdonly(sb))
|
|
goto out;
|
|
|
|
+ /*
|
|
+ * In case of an incomplete mount, gfs2_evict_inode() may be called for
|
|
+ * system files without having an active journal to write to. In that
|
|
+ * case, skip the filesystem evict.
|
|
+ */
|
|
+ if (!sdp->sd_jdesc)
|
|
+ goto out;
|
|
+
|
|
if (test_bit(GIF_ALLOC_FAILED, &ip->i_flags)) {
|
|
BUG_ON(!gfs2_glock_is_locked_by_me(ip->i_gl));
|
|
gfs2_holder_mark_uninitialized(&gh);
|
|
diff --git a/fs/inode.c b/fs/inode.c
|
|
index 140a62e5382cb..f7c8c0fe11d44 100644
|
|
--- a/fs/inode.c
|
|
+++ b/fs/inode.c
|
|
@@ -1012,6 +1012,48 @@ void discard_new_inode(struct inode *inode)
|
|
}
|
|
EXPORT_SYMBOL(discard_new_inode);
|
|
|
|
+/**
|
|
+ * lock_two_inodes - lock two inodes (may be regular files but also dirs)
|
|
+ *
|
|
+ * Lock any non-NULL argument. The caller must make sure that if he is passing
|
|
+ * in two directories, one is not ancestor of the other. Zero, one or two
|
|
+ * objects may be locked by this function.
|
|
+ *
|
|
+ * @inode1: first inode to lock
|
|
+ * @inode2: second inode to lock
|
|
+ * @subclass1: inode lock subclass for the first lock obtained
|
|
+ * @subclass2: inode lock subclass for the second lock obtained
|
|
+ */
|
|
+void lock_two_inodes(struct inode *inode1, struct inode *inode2,
|
|
+ unsigned subclass1, unsigned subclass2)
|
|
+{
|
|
+ if (!inode1 || !inode2) {
|
|
+ /*
|
|
+ * Make sure @subclass1 will be used for the acquired lock.
|
|
+ * This is not strictly necessary (no current caller cares) but
|
|
+ * let's keep things consistent.
|
|
+ */
|
|
+ if (!inode1)
|
|
+ swap(inode1, inode2);
|
|
+ goto lock;
|
|
+ }
|
|
+
|
|
+ /*
|
|
+ * If one object is directory and the other is not, we must make sure
|
|
+ * to lock directory first as the other object may be its child.
|
|
+ */
|
|
+ if (S_ISDIR(inode2->i_mode) == S_ISDIR(inode1->i_mode)) {
|
|
+ if (inode1 > inode2)
|
|
+ swap(inode1, inode2);
|
|
+ } else if (!S_ISDIR(inode1->i_mode))
|
|
+ swap(inode1, inode2);
|
|
+lock:
|
|
+ if (inode1)
|
|
+ inode_lock_nested(inode1, subclass1);
|
|
+ if (inode2 && inode2 != inode1)
|
|
+ inode_lock_nested(inode2, subclass2);
|
|
+}
|
|
+
|
|
/**
|
|
* lock_two_nondirectories - take two i_mutexes on non-directory objects
|
|
*
|
|
diff --git a/fs/internal.h b/fs/internal.h
|
|
index 61aed95f83d1e..377f984e92267 100644
|
|
--- a/fs/internal.h
|
|
+++ b/fs/internal.h
|
|
@@ -138,6 +138,8 @@ extern int vfs_open(const struct path *, struct file *);
|
|
extern long prune_icache_sb(struct super_block *sb, struct shrink_control *sc);
|
|
extern void inode_add_lru(struct inode *inode);
|
|
extern int dentry_needs_remove_privs(struct dentry *dentry);
|
|
+void lock_two_inodes(struct inode *inode1, struct inode *inode2,
|
|
+ unsigned subclass1, unsigned subclass2);
|
|
|
|
/*
|
|
* fs-writeback.c
|
|
diff --git a/fs/jffs2/build.c b/fs/jffs2/build.c
|
|
index 837cd55fd4c5e..6ae9d6fefb861 100644
|
|
--- a/fs/jffs2/build.c
|
|
+++ b/fs/jffs2/build.c
|
|
@@ -211,7 +211,10 @@ static int jffs2_build_filesystem(struct jffs2_sb_info *c)
|
|
ic->scan_dents = NULL;
|
|
cond_resched();
|
|
}
|
|
- jffs2_build_xattr_subsystem(c);
|
|
+ ret = jffs2_build_xattr_subsystem(c);
|
|
+ if (ret)
|
|
+ goto exit;
|
|
+
|
|
c->flags &= ~JFFS2_SB_FLAG_BUILDING;
|
|
|
|
dbg_fsbuild("FS build complete\n");
|
|
diff --git a/fs/jffs2/xattr.c b/fs/jffs2/xattr.c
|
|
index da3e18503c658..acb4492f5970c 100644
|
|
--- a/fs/jffs2/xattr.c
|
|
+++ b/fs/jffs2/xattr.c
|
|
@@ -772,10 +772,10 @@ void jffs2_clear_xattr_subsystem(struct jffs2_sb_info *c)
|
|
}
|
|
|
|
#define XREF_TMPHASH_SIZE (128)
|
|
-void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
|
|
+int jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
|
|
{
|
|
struct jffs2_xattr_ref *ref, *_ref;
|
|
- struct jffs2_xattr_ref *xref_tmphash[XREF_TMPHASH_SIZE];
|
|
+ struct jffs2_xattr_ref **xref_tmphash;
|
|
struct jffs2_xattr_datum *xd, *_xd;
|
|
struct jffs2_inode_cache *ic;
|
|
struct jffs2_raw_node_ref *raw;
|
|
@@ -784,9 +784,12 @@ void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
|
|
|
|
BUG_ON(!(c->flags & JFFS2_SB_FLAG_BUILDING));
|
|
|
|
+ xref_tmphash = kcalloc(XREF_TMPHASH_SIZE,
|
|
+ sizeof(struct jffs2_xattr_ref *), GFP_KERNEL);
|
|
+ if (!xref_tmphash)
|
|
+ return -ENOMEM;
|
|
+
|
|
/* Phase.1 : Merge same xref */
|
|
- for (i=0; i < XREF_TMPHASH_SIZE; i++)
|
|
- xref_tmphash[i] = NULL;
|
|
for (ref=c->xref_temp; ref; ref=_ref) {
|
|
struct jffs2_xattr_ref *tmp;
|
|
|
|
@@ -884,6 +887,8 @@ void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
|
|
"%u of xref (%u dead, %u orphan) found.\n",
|
|
xdatum_count, xdatum_unchecked_count, xdatum_orphan_count,
|
|
xref_count, xref_dead_count, xref_orphan_count);
|
|
+ kfree(xref_tmphash);
|
|
+ return 0;
|
|
}
|
|
|
|
struct jffs2_xattr_datum *jffs2_setup_xattr_datum(struct jffs2_sb_info *c,
|
|
diff --git a/fs/jffs2/xattr.h b/fs/jffs2/xattr.h
|
|
index 720007b2fd65d..1b5030a3349db 100644
|
|
--- a/fs/jffs2/xattr.h
|
|
+++ b/fs/jffs2/xattr.h
|
|
@@ -71,7 +71,7 @@ static inline int is_xattr_ref_dead(struct jffs2_xattr_ref *ref)
|
|
#ifdef CONFIG_JFFS2_FS_XATTR
|
|
|
|
extern void jffs2_init_xattr_subsystem(struct jffs2_sb_info *c);
|
|
-extern void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c);
|
|
+extern int jffs2_build_xattr_subsystem(struct jffs2_sb_info *c);
|
|
extern void jffs2_clear_xattr_subsystem(struct jffs2_sb_info *c);
|
|
|
|
extern struct jffs2_xattr_datum *jffs2_setup_xattr_datum(struct jffs2_sb_info *c,
|
|
@@ -103,7 +103,7 @@ extern ssize_t jffs2_listxattr(struct dentry *, char *, size_t);
|
|
#else
|
|
|
|
#define jffs2_init_xattr_subsystem(c)
|
|
-#define jffs2_build_xattr_subsystem(c)
|
|
+#define jffs2_build_xattr_subsystem(c) (0)
|
|
#define jffs2_clear_xattr_subsystem(c)
|
|
|
|
#define jffs2_xattr_do_crccheck_inode(c, ic)
|
|
diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
|
|
index cc1fed285b2d6..dac67ee1879be 100644
|
|
--- a/fs/jfs/jfs_dmap.c
|
|
+++ b/fs/jfs/jfs_dmap.c
|
|
@@ -178,7 +178,13 @@ int dbMount(struct inode *ipbmap)
|
|
dbmp_le = (struct dbmap_disk *) mp->data;
|
|
bmp->db_mapsize = le64_to_cpu(dbmp_le->dn_mapsize);
|
|
bmp->db_nfree = le64_to_cpu(dbmp_le->dn_nfree);
|
|
+
|
|
bmp->db_l2nbperpage = le32_to_cpu(dbmp_le->dn_l2nbperpage);
|
|
+ if (bmp->db_l2nbperpage > L2PSIZE - L2MINBLOCKSIZE) {
|
|
+ err = -EINVAL;
|
|
+ goto err_release_metapage;
|
|
+ }
|
|
+
|
|
bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
|
|
if (!bmp->db_numag) {
|
|
err = -EINVAL;
|
|
diff --git a/fs/jfs/jfs_filsys.h b/fs/jfs/jfs_filsys.h
|
|
index b5d702df7111a..33ef13a0b1108 100644
|
|
--- a/fs/jfs/jfs_filsys.h
|
|
+++ b/fs/jfs/jfs_filsys.h
|
|
@@ -122,7 +122,9 @@
|
|
#define NUM_INODE_PER_IAG INOSPERIAG
|
|
|
|
#define MINBLOCKSIZE 512
|
|
+#define L2MINBLOCKSIZE 9
|
|
#define MAXBLOCKSIZE 4096
|
|
+#define L2MAXBLOCKSIZE 12
|
|
#define MAXFILESIZE ((s64)1 << 52)
|
|
|
|
#define JFS_LINK_MAX 0xffffffff
|
|
diff --git a/fs/namei.c b/fs/namei.c
|
|
index d220df830c499..14e600711f504 100644
|
|
--- a/fs/namei.c
|
|
+++ b/fs/namei.c
|
|
@@ -2870,8 +2870,8 @@ struct dentry *lock_rename(struct dentry *p1, struct dentry *p2)
|
|
return p;
|
|
}
|
|
|
|
- inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
|
|
- inode_lock_nested(p2->d_inode, I_MUTEX_PARENT2);
|
|
+ lock_two_inodes(p1->d_inode, p2->d_inode,
|
|
+ I_MUTEX_PARENT, I_MUTEX_PARENT2);
|
|
return NULL;
|
|
}
|
|
EXPORT_SYMBOL(lock_rename);
|
|
@@ -4367,7 +4367,7 @@ SYSCALL_DEFINE2(link, const char __user *, oldname, const char __user *, newname
|
|
* sb->s_vfs_rename_mutex. We might be more accurate, but that's another
|
|
* story.
|
|
* c) we have to lock _four_ objects - parents and victim (if it exists),
|
|
- * and source (if it is not a directory).
|
|
+ * and source.
|
|
* And that - after we got ->i_mutex on parents (until then we don't know
|
|
* whether the target exists). Solution: try to be smart with locking
|
|
* order for inodes. We rely on the fact that tree topology may change
|
|
@@ -4444,10 +4444,16 @@ int vfs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|
|
|
take_dentry_name_snapshot(&old_name, old_dentry);
|
|
dget(new_dentry);
|
|
- if (!is_dir || (flags & RENAME_EXCHANGE))
|
|
- lock_two_nondirectories(source, target);
|
|
- else if (target)
|
|
- inode_lock(target);
|
|
+ /*
|
|
+ * Lock all moved children. Moved directories may need to change parent
|
|
+ * pointer so they need the lock to prevent against concurrent
|
|
+ * directory changes moving parent pointer. For regular files we've
|
|
+ * historically always done this. The lockdep locking subclasses are
|
|
+ * somewhat arbitrary but RENAME_EXCHANGE in particular can swap
|
|
+ * regular files and directories so it's difficult to tell which
|
|
+ * subclasses to use.
|
|
+ */
|
|
+ lock_two_inodes(source, target, I_MUTEX_NORMAL, I_MUTEX_NONDIR2);
|
|
|
|
error = -EBUSY;
|
|
if (is_local_mountpoint(old_dentry) || is_local_mountpoint(new_dentry))
|
|
@@ -4491,9 +4497,8 @@ int vfs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|
d_exchange(old_dentry, new_dentry);
|
|
}
|
|
out:
|
|
- if (!is_dir || (flags & RENAME_EXCHANGE))
|
|
- unlock_two_nondirectories(source, target);
|
|
- else if (target)
|
|
+ inode_unlock(source);
|
|
+ if (target)
|
|
inode_unlock(target);
|
|
dput(new_dentry);
|
|
if (!error) {
|
|
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
|
|
index c54dd49c993c5..231da9fadf098 100644
|
|
--- a/fs/nfs/nfs4proc.c
|
|
+++ b/fs/nfs/nfs4proc.c
|
|
@@ -915,6 +915,7 @@ out:
|
|
out_noaction:
|
|
return ret;
|
|
session_recover:
|
|
+ set_bit(NFS4_SLOT_TBL_DRAINING, &session->fc_slot_table.slot_tbl_state);
|
|
nfs4_schedule_session_recovery(session, status);
|
|
dprintk("%s ERROR: %d Reset session\n", __func__, status);
|
|
nfs41_sequence_free_slot(res);
|
|
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
|
|
index a4f2c0cc6a49e..ff95a08574721 100644
|
|
--- a/fs/nfsd/nfs4xdr.c
|
|
+++ b/fs/nfsd/nfs4xdr.c
|
|
@@ -3409,7 +3409,7 @@ nfsd4_encode_open(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_op
|
|
p = xdr_reserve_space(xdr, 32);
|
|
if (!p)
|
|
return nfserr_resource;
|
|
- *p++ = cpu_to_be32(0);
|
|
+ *p++ = cpu_to_be32(open->op_recall);
|
|
|
|
/*
|
|
* TODO: space_limit's in delegations
|
|
diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
|
|
index 8508ab5750174..ec4eadf459ae8 100644
|
|
--- a/fs/notify/fanotify/fanotify_user.c
|
|
+++ b/fs/notify/fanotify/fanotify_user.c
|
|
@@ -928,8 +928,11 @@ static int fanotify_test_fid(struct path *path, __kernel_fsid_t *fsid)
|
|
return 0;
|
|
}
|
|
|
|
-static int fanotify_events_supported(struct path *path, __u64 mask)
|
|
+static int fanotify_events_supported(struct path *path, __u64 mask,
|
|
+ unsigned int flags)
|
|
{
|
|
+ unsigned int mark_type = flags & FANOTIFY_MARK_TYPE_BITS;
|
|
+
|
|
/*
|
|
* Some filesystems such as 'proc' acquire unusual locks when opening
|
|
* files. For them fanotify permission events have high chances of
|
|
@@ -941,6 +944,21 @@ static int fanotify_events_supported(struct path *path, __u64 mask)
|
|
if (mask & FANOTIFY_PERM_EVENTS &&
|
|
path->mnt->mnt_sb->s_type->fs_flags & FS_DISALLOW_NOTIFY_PERM)
|
|
return -EINVAL;
|
|
+
|
|
+ /*
|
|
+ * mount and sb marks are not allowed on kernel internal pseudo fs,
|
|
+ * like pipe_mnt, because that would subscribe to events on all the
|
|
+ * anonynous pipes in the system.
|
|
+ *
|
|
+ * SB_NOUSER covers all of the internal pseudo fs whose objects are not
|
|
+ * exposed to user's mount namespace, but there are other SB_KERNMOUNT
|
|
+ * fs, like nsfs, debugfs, for which the value of allowing sb and mount
|
|
+ * mark is questionable. For now we leave them alone.
|
|
+ */
|
|
+ if (mark_type != FAN_MARK_INODE &&
|
|
+ path->mnt->mnt_sb->s_flags & SB_NOUSER)
|
|
+ return -EINVAL;
|
|
+
|
|
return 0;
|
|
}
|
|
|
|
@@ -1050,7 +1068,7 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
|
|
goto fput_and_out;
|
|
|
|
if (flags & FAN_MARK_ADD) {
|
|
- ret = fanotify_events_supported(&path, mask);
|
|
+ ret = fanotify_events_supported(&path, mask, flags);
|
|
if (ret)
|
|
goto path_put_and_out;
|
|
}
|
|
diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
|
|
index 286340f312dcb..73aed51447b9a 100644
|
|
--- a/fs/pstore/ram_core.c
|
|
+++ b/fs/pstore/ram_core.c
|
|
@@ -579,6 +579,8 @@ struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
|
|
raw_spin_lock_init(&prz->buffer_lock);
|
|
prz->flags = flags;
|
|
prz->label = kstrdup(label, GFP_KERNEL);
|
|
+ if (!prz->label)
|
|
+ goto err;
|
|
|
|
ret = persistent_ram_buffer_map(start, size, prz, memtype);
|
|
if (ret)
|
|
diff --git a/include/drm/drm_panel.h b/include/drm/drm_panel.h
|
|
index 624bd15ecfab6..ce8da64022b43 100644
|
|
--- a/include/drm/drm_panel.h
|
|
+++ b/include/drm/drm_panel.h
|
|
@@ -139,6 +139,15 @@ struct drm_panel {
|
|
*/
|
|
const struct drm_panel_funcs *funcs;
|
|
|
|
+ /**
|
|
+ * @connector_type:
|
|
+ *
|
|
+ * Type of the panel as a DRM_MODE_CONNECTOR_* value. This is used to
|
|
+ * initialise the drm_connector corresponding to the panel with the
|
|
+ * correct connector type.
|
|
+ */
|
|
+ int connector_type;
|
|
+
|
|
/**
|
|
* @list:
|
|
*
|
|
@@ -147,7 +156,9 @@ struct drm_panel {
|
|
struct list_head list;
|
|
};
|
|
|
|
-void drm_panel_init(struct drm_panel *panel);
|
|
+void drm_panel_init(struct drm_panel *panel, struct device *dev,
|
|
+ const struct drm_panel_funcs *funcs,
|
|
+ int connector_type);
|
|
|
|
int drm_panel_add(struct drm_panel *panel);
|
|
void drm_panel_remove(struct drm_panel *panel);
|
|
diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
|
|
index 0f1e95240c0c0..66b89189a1e2e 100644
|
|
--- a/include/linux/etherdevice.h
|
|
+++ b/include/linux/etherdevice.h
|
|
@@ -288,6 +288,18 @@ static inline void ether_addr_copy(u8 *dst, const u8 *src)
|
|
#endif
|
|
}
|
|
|
|
+/**
|
|
+ * eth_hw_addr_set - Assign Ethernet address to a net_device
|
|
+ * @dev: pointer to net_device structure
|
|
+ * @addr: address to assign
|
|
+ *
|
|
+ * Assign given address to the net_device, addr_assign_type is not changed.
|
|
+ */
|
|
+static inline void eth_hw_addr_set(struct net_device *dev, const u8 *addr)
|
|
+{
|
|
+ ether_addr_copy(dev->dev_addr, addr);
|
|
+}
|
|
+
|
|
/**
|
|
* eth_hw_addr_inherit - Copy dev_addr from another net_device
|
|
* @dst: pointer to net_device to copy dev_addr to
|
|
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
|
|
index 8dea4b53d664d..bf623f0e04d64 100644
|
|
--- a/include/linux/netdevice.h
|
|
+++ b/include/linux/netdevice.h
|
|
@@ -4189,6 +4189,24 @@ void __hw_addr_unsync_dev(struct netdev_hw_addr_list *list,
|
|
void __hw_addr_init(struct netdev_hw_addr_list *list);
|
|
|
|
/* Functions used for device addresses handling */
|
|
+static inline void
|
|
+__dev_addr_set(struct net_device *dev, const u8 *addr, size_t len)
|
|
+{
|
|
+ memcpy(dev->dev_addr, addr, len);
|
|
+}
|
|
+
|
|
+static inline void dev_addr_set(struct net_device *dev, const u8 *addr)
|
|
+{
|
|
+ __dev_addr_set(dev, addr, dev->addr_len);
|
|
+}
|
|
+
|
|
+static inline void
|
|
+dev_addr_mod(struct net_device *dev, unsigned int offset,
|
|
+ const u8 *addr, size_t len)
|
|
+{
|
|
+ memcpy(&dev->dev_addr[offset], addr, len);
|
|
+}
|
|
+
|
|
int dev_addr_add(struct net_device *dev, const unsigned char *addr,
|
|
unsigned char addr_type);
|
|
int dev_addr_del(struct net_device *dev, const unsigned char *addr,
|
|
diff --git a/include/linux/netfilter/nfnetlink.h b/include/linux/netfilter/nfnetlink.h
|
|
index 41de4156540a9..0518ca72b7616 100644
|
|
--- a/include/linux/netfilter/nfnetlink.h
|
|
+++ b/include/linux/netfilter/nfnetlink.h
|
|
@@ -56,6 +56,33 @@ static inline u16 nfnl_msg_type(u8 subsys, u8 msg_type)
|
|
return subsys << 8 | msg_type;
|
|
}
|
|
|
|
+static inline void nfnl_fill_hdr(struct nlmsghdr *nlh, u8 family, u8 version,
|
|
+ __be16 res_id)
|
|
+{
|
|
+ struct nfgenmsg *nfmsg;
|
|
+
|
|
+ nfmsg = nlmsg_data(nlh);
|
|
+ nfmsg->nfgen_family = family;
|
|
+ nfmsg->version = version;
|
|
+ nfmsg->res_id = res_id;
|
|
+}
|
|
+
|
|
+static inline struct nlmsghdr *nfnl_msg_put(struct sk_buff *skb, u32 portid,
|
|
+ u32 seq, int type, int flags,
|
|
+ u8 family, u8 version,
|
|
+ __be16 res_id)
|
|
+{
|
|
+ struct nlmsghdr *nlh;
|
|
+
|
|
+ nlh = nlmsg_put(skb, portid, seq, type, sizeof(struct nfgenmsg), flags);
|
|
+ if (!nlh)
|
|
+ return NULL;
|
|
+
|
|
+ nfnl_fill_hdr(nlh, family, version, res_id);
|
|
+
|
|
+ return nlh;
|
|
+}
|
|
+
|
|
void nfnl_lock(__u8 subsys_id);
|
|
void nfnl_unlock(__u8 subsys_id);
|
|
#ifdef CONFIG_PROVE_LOCKING
|
|
diff --git a/include/linux/nmi.h b/include/linux/nmi.h
|
|
index e972d1ae1ee63..6cb593d9ed08a 100644
|
|
--- a/include/linux/nmi.h
|
|
+++ b/include/linux/nmi.h
|
|
@@ -197,7 +197,7 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh);
|
|
#endif
|
|
|
|
#if defined(CONFIG_HARDLOCKUP_CHECK_TIMESTAMP) && \
|
|
- defined(CONFIG_HARDLOCKUP_DETECTOR)
|
|
+ defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
|
|
void watchdog_update_hrtimer_threshold(u64 period);
|
|
#else
|
|
static inline void watchdog_update_hrtimer_threshold(u64 period) { }
|
|
diff --git a/include/linux/pci.h b/include/linux/pci.h
|
|
index fc343d123127b..1cd5caa567cf5 100644
|
|
--- a/include/linux/pci.h
|
|
+++ b/include/linux/pci.h
|
|
@@ -1687,6 +1687,7 @@ static inline struct pci_dev *pci_get_class(unsigned int class,
|
|
#define pci_dev_put(dev) do { } while (0)
|
|
|
|
static inline void pci_set_master(struct pci_dev *dev) { }
|
|
+static inline void pci_clear_master(struct pci_dev *dev) { }
|
|
static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; }
|
|
static inline void pci_disable_device(struct pci_dev *dev) { }
|
|
static inline int pci_assign_resource(struct pci_dev *dev, int i)
|
|
diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
|
|
index b3f88470cbb58..2f355c3c0d15f 100644
|
|
--- a/include/linux/sched/signal.h
|
|
+++ b/include/linux/sched/signal.h
|
|
@@ -123,7 +123,7 @@ struct signal_struct {
|
|
#ifdef CONFIG_POSIX_TIMERS
|
|
|
|
/* POSIX.1b Interval Timers */
|
|
- int posix_timer_id;
|
|
+ unsigned int next_posix_timer_id;
|
|
struct list_head posix_timers;
|
|
|
|
/* ITIMER_REAL timer for the process */
|
|
diff --git a/include/linux/serial_8250.h b/include/linux/serial_8250.h
|
|
index bb2bc99388cae..432fdabd00e4b 100644
|
|
--- a/include/linux/serial_8250.h
|
|
+++ b/include/linux/serial_8250.h
|
|
@@ -95,7 +95,6 @@ struct uart_8250_port {
|
|
struct list_head list; /* ports on this IRQ */
|
|
u32 capabilities; /* port capabilities */
|
|
unsigned short bugs; /* port bugs */
|
|
- bool fifo_bug; /* min RX trigger if enabled */
|
|
unsigned int tx_loadsz; /* transmit fifo load size */
|
|
unsigned char acr;
|
|
unsigned char fcr;
|
|
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
|
|
index 89751c89f11f4..68dacc1994376 100644
|
|
--- a/include/linux/tcp.h
|
|
+++ b/include/linux/tcp.h
|
|
@@ -458,7 +458,7 @@ static inline void fastopen_queue_tune(struct sock *sk, int backlog)
|
|
struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue;
|
|
int somaxconn = READ_ONCE(sock_net(sk)->core.sysctl_somaxconn);
|
|
|
|
- queue->fastopenq.max_qlen = min_t(unsigned int, backlog, somaxconn);
|
|
+ WRITE_ONCE(queue->fastopenq.max_qlen, min_t(unsigned int, backlog, somaxconn));
|
|
}
|
|
|
|
static inline void tcp_move_syn(struct tcp_sock *tp,
|
|
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
|
|
index 4261d1c6e87b1..887f0b94d6e99 100644
|
|
--- a/include/linux/workqueue.h
|
|
+++ b/include/linux/workqueue.h
|
|
@@ -73,7 +73,6 @@ enum {
|
|
WORK_OFFQ_FLAG_BASE = WORK_STRUCT_COLOR_SHIFT,
|
|
|
|
__WORK_OFFQ_CANCELING = WORK_OFFQ_FLAG_BASE,
|
|
- WORK_OFFQ_CANCELING = (1 << __WORK_OFFQ_CANCELING),
|
|
|
|
/*
|
|
* When a work item is off queue, its high bits point to the last
|
|
@@ -84,12 +83,6 @@ enum {
|
|
WORK_OFFQ_POOL_SHIFT = WORK_OFFQ_FLAG_BASE + WORK_OFFQ_FLAG_BITS,
|
|
WORK_OFFQ_LEFT = BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT,
|
|
WORK_OFFQ_POOL_BITS = WORK_OFFQ_LEFT <= 31 ? WORK_OFFQ_LEFT : 31,
|
|
- WORK_OFFQ_POOL_NONE = (1LU << WORK_OFFQ_POOL_BITS) - 1,
|
|
-
|
|
- /* convenience constants */
|
|
- WORK_STRUCT_FLAG_MASK = (1UL << WORK_STRUCT_FLAG_BITS) - 1,
|
|
- WORK_STRUCT_WQ_DATA_MASK = ~WORK_STRUCT_FLAG_MASK,
|
|
- WORK_STRUCT_NO_POOL = (unsigned long)WORK_OFFQ_POOL_NONE << WORK_OFFQ_POOL_SHIFT,
|
|
|
|
/* bit mask for work_busy() return values */
|
|
WORK_BUSY_PENDING = 1 << 0,
|
|
@@ -99,6 +92,14 @@ enum {
|
|
WORKER_DESC_LEN = 24,
|
|
};
|
|
|
|
+/* Convenience constants - of type 'unsigned long', not 'enum'! */
|
|
+#define WORK_OFFQ_CANCELING (1ul << __WORK_OFFQ_CANCELING)
|
|
+#define WORK_OFFQ_POOL_NONE ((1ul << WORK_OFFQ_POOL_BITS) - 1)
|
|
+#define WORK_STRUCT_NO_POOL (WORK_OFFQ_POOL_NONE << WORK_OFFQ_POOL_SHIFT)
|
|
+
|
|
+#define WORK_STRUCT_FLAG_MASK ((1ul << WORK_STRUCT_FLAG_BITS) - 1)
|
|
+#define WORK_STRUCT_WQ_DATA_MASK (~WORK_STRUCT_FLAG_MASK)
|
|
+
|
|
struct work_struct {
|
|
atomic_long_t data;
|
|
struct list_head entry;
|
|
diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
|
|
index a8cc2750990f9..7ab13f515749d 100644
|
|
--- a/include/net/netfilter/nf_tables.h
|
|
+++ b/include/net/netfilter/nf_tables.h
|
|
@@ -756,6 +756,7 @@ struct nft_expr_type {
|
|
|
|
enum nft_trans_phase {
|
|
NFT_TRANS_PREPARE,
|
|
+ NFT_TRANS_PREPARE_ERROR,
|
|
NFT_TRANS_ABORT,
|
|
NFT_TRANS_COMMIT,
|
|
NFT_TRANS_RELEASE
|
|
@@ -1363,6 +1364,7 @@ static inline void nft_set_elem_clear_busy(struct nft_set_ext *ext)
|
|
* struct nft_trans - nf_tables object update in transaction
|
|
*
|
|
* @list: used internally
|
|
+ * @binding_list: list of objects with possible bindings
|
|
* @msg_type: message type
|
|
* @put_net: ctx->net needs to be put
|
|
* @ctx: transaction context
|
|
@@ -1370,6 +1372,7 @@ static inline void nft_set_elem_clear_busy(struct nft_set_ext *ext)
|
|
*/
|
|
struct nft_trans {
|
|
struct list_head list;
|
|
+ struct list_head binding_list;
|
|
int msg_type;
|
|
bool put_net;
|
|
struct nft_ctx ctx;
|
|
@@ -1472,4 +1475,15 @@ void nf_tables_trans_destroy_flush_work(void);
|
|
int nf_msecs_to_jiffies64(const struct nlattr *nla, u64 *result);
|
|
__be64 nf_jiffies64_to_msecs(u64 input);
|
|
|
|
+struct nftables_pernet {
|
|
+ struct list_head tables;
|
|
+ struct list_head commit_list;
|
|
+ struct list_head binding_list;
|
|
+ struct list_head module_list;
|
|
+ struct list_head notify_list;
|
|
+ struct mutex commit_mutex;
|
|
+ unsigned int base_seq;
|
|
+ u8 validate_state;
|
|
+};
|
|
+
|
|
#endif /* _NET_NF_TABLES_H */
|
|
diff --git a/include/net/netns/nftables.h b/include/net/netns/nftables.h
|
|
index a1a8d45adb42a..8c77832d02404 100644
|
|
--- a/include/net/netns/nftables.h
|
|
+++ b/include/net/netns/nftables.h
|
|
@@ -5,13 +5,7 @@
|
|
#include <linux/list.h>
|
|
|
|
struct netns_nftables {
|
|
- struct list_head tables;
|
|
- struct list_head commit_list;
|
|
- struct list_head module_list;
|
|
- struct mutex commit_mutex;
|
|
- unsigned int base_seq;
|
|
u8 gencursor;
|
|
- u8 validate_state;
|
|
};
|
|
|
|
#endif
|
|
diff --git a/include/net/nfc/nfc.h b/include/net/nfc/nfc.h
index 5d277d68fd8d9..c55e72474eb2b 100644
--- a/include/net/nfc/nfc.h
+++ b/include/net/nfc/nfc.h
@@ -266,7 +266,7 @@ struct sk_buff *nfc_alloc_send_skb(struct nfc_dev *dev, struct sock *sk,
struct sk_buff *nfc_alloc_recv_skb(unsigned int size, gfp_t gfp);

int nfc_set_remote_general_bytes(struct nfc_dev *dev,
- u8 *gt, u8 gt_len);
+ const u8 *gt, u8 gt_len);
u8 *nfc_get_local_general_bytes(struct nfc_dev *dev, size_t *gb_len);

int nfc_fw_download_done(struct nfc_dev *dev, const char *firmware_name,
@@ -280,7 +280,7 @@ int nfc_dep_link_is_up(struct nfc_dev *dev, u32 target_idx,
u8 comm_mode, u8 rf_mode);

int nfc_tm_activated(struct nfc_dev *dev, u32 protocol, u8 comm_mode,
- u8 *gb, size_t gb_len);
+ const u8 *gb, size_t gb_len);
int nfc_tm_deactivated(struct nfc_dev *dev);
int nfc_tm_data_received(struct nfc_dev *dev, struct sk_buff *skb);

diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index 2d932834ed5bf..fd99650a2e229 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -131,7 +131,7 @@ extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
*/
static inline unsigned int psched_mtu(const struct net_device *dev)
{
- return dev->mtu + dev->hard_header_len;
+ return READ_ONCE(dev->mtu) + dev->hard_header_len;
}

static inline struct net *qdisc_net(struct Qdisc *q)
diff --git a/include/net/sock.h b/include/net/sock.h
index 87e57f81ee82b..ee8630d6abc16 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1863,6 +1863,7 @@ static inline void sock_graft(struct sock *sk, struct socket *parent)
}

kuid_t sock_i_uid(struct sock *sk);
+unsigned long __sock_i_ino(struct sock *sk);
unsigned long sock_i_ino(struct sock *sk);

static inline kuid_t sock_net_uid(const struct net *net, const struct sock *sk)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 077feeca6c99e..4e909148fce39 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -125,6 +125,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo);
* to combine FIN-WAIT-2 timeout with
* TIME-WAIT timer.
*/
+#define TCP_FIN_TIMEOUT_MAX (120 * HZ) /* max TCP_LINGER2 value (two minutes) */

#define TCP_DELACK_MAX ((unsigned)(HZ/5)) /* maximal time to delay before sending an ACK */
#if HZ >= 100
@@ -1952,7 +1953,11 @@ void __tcp_v4_send_check(struct sk_buff *skb, __be32 saddr, __be32 daddr);
static inline u32 tcp_notsent_lowat(const struct tcp_sock *tp)
{
struct net *net = sock_net((struct sock *)tp);
- return tp->notsent_lowat ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat);
+ u32 val;
+
+ val = READ_ONCE(tp->notsent_lowat);
+
+ return val ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat);
}

/* @wake is one when sk_stream_write_space() calls us.
diff --git a/include/trace/events/timer.h b/include/trace/events/timer.h
index 295517f109d71..1b5371f0317af 100644
--- a/include/trace/events/timer.h
+++ b/include/trace/events/timer.h
@@ -156,7 +156,11 @@ DEFINE_EVENT(timer_class, timer_cancel,
{ HRTIMER_MODE_ABS_SOFT, "ABS|SOFT" }, \
{ HRTIMER_MODE_REL_SOFT, "REL|SOFT" }, \
{ HRTIMER_MODE_ABS_PINNED_SOFT, "ABS|PINNED|SOFT" }, \
- { HRTIMER_MODE_REL_PINNED_SOFT, "REL|PINNED|SOFT" })
+ { HRTIMER_MODE_REL_PINNED_SOFT, "REL|PINNED|SOFT" }, \
+ { HRTIMER_MODE_ABS_HARD, "ABS|HARD" }, \
+ { HRTIMER_MODE_REL_HARD, "REL|HARD" }, \
+ { HRTIMER_MODE_ABS_PINNED_HARD, "ABS|PINNED|HARD" }, \
+ { HRTIMER_MODE_REL_PINNED_HARD, "REL|PINNED|HARD" })

/**
* hrtimer_init - called when the hrtimer is initialized
diff --git a/include/uapi/linux/affs_hardblocks.h b/include/uapi/linux/affs_hardblocks.h
index 5e2fb8481252a..a5aff2eb5f708 100644
--- a/include/uapi/linux/affs_hardblocks.h
+++ b/include/uapi/linux/affs_hardblocks.h
@@ -7,42 +7,42 @@
/* Just the needed definitions for the RDB of an Amiga HD. */

struct RigidDiskBlock {
- __u32 rdb_ID;
+ __be32 rdb_ID;
__be32 rdb_SummedLongs;
- __s32 rdb_ChkSum;
- __u32 rdb_HostID;
+ __be32 rdb_ChkSum;
+ __be32 rdb_HostID;
__be32 rdb_BlockBytes;
- __u32 rdb_Flags;
- __u32 rdb_BadBlockList;
+ __be32 rdb_Flags;
+ __be32 rdb_BadBlockList;
__be32 rdb_PartitionList;
- __u32 rdb_FileSysHeaderList;
- __u32 rdb_DriveInit;
- __u32 rdb_Reserved1[6];
- __u32 rdb_Cylinders;
- __u32 rdb_Sectors;
- __u32 rdb_Heads;
- __u32 rdb_Interleave;
- __u32 rdb_Park;
- __u32 rdb_Reserved2[3];
- __u32 rdb_WritePreComp;
- __u32 rdb_ReducedWrite;
- __u32 rdb_StepRate;
- __u32 rdb_Reserved3[5];
- __u32 rdb_RDBBlocksLo;
- __u32 rdb_RDBBlocksHi;
- __u32 rdb_LoCylinder;
- __u32 rdb_HiCylinder;
- __u32 rdb_CylBlocks;
- __u32 rdb_AutoParkSeconds;
- __u32 rdb_HighRDSKBlock;
- __u32 rdb_Reserved4;
+ __be32 rdb_FileSysHeaderList;
+ __be32 rdb_DriveInit;
+ __be32 rdb_Reserved1[6];
+ __be32 rdb_Cylinders;
+ __be32 rdb_Sectors;
+ __be32 rdb_Heads;
+ __be32 rdb_Interleave;
+ __be32 rdb_Park;
+ __be32 rdb_Reserved2[3];
+ __be32 rdb_WritePreComp;
+ __be32 rdb_ReducedWrite;
+ __be32 rdb_StepRate;
+ __be32 rdb_Reserved3[5];
+ __be32 rdb_RDBBlocksLo;
+ __be32 rdb_RDBBlocksHi;
+ __be32 rdb_LoCylinder;
+ __be32 rdb_HiCylinder;
+ __be32 rdb_CylBlocks;
+ __be32 rdb_AutoParkSeconds;
+ __be32 rdb_HighRDSKBlock;
+ __be32 rdb_Reserved4;
char rdb_DiskVendor[8];
char rdb_DiskProduct[16];
char rdb_DiskRevision[4];
char rdb_ControllerVendor[8];
char rdb_ControllerProduct[16];
char rdb_ControllerRevision[4];
- __u32 rdb_Reserved5[10];
+ __be32 rdb_Reserved5[10];
};

#define IDNAME_RIGIDDISK 0x5244534B /* "RDSK" */
@@ -50,16 +50,16 @@ struct RigidDiskBlock {
struct PartitionBlock {
__be32 pb_ID;
__be32 pb_SummedLongs;
- __s32 pb_ChkSum;
- __u32 pb_HostID;
+ __be32 pb_ChkSum;
+ __be32 pb_HostID;
__be32 pb_Next;
- __u32 pb_Flags;
- __u32 pb_Reserved1[2];
- __u32 pb_DevFlags;
+ __be32 pb_Flags;
+ __be32 pb_Reserved1[2];
+ __be32 pb_DevFlags;
__u8 pb_DriveName[32];
- __u32 pb_Reserved2[15];
+ __be32 pb_Reserved2[15];
__be32 pb_Environment[17];
- __u32 pb_EReserved[15];
+ __be32 pb_EReserved[15];
};

#define IDNAME_PARTITION 0x50415254 /* "PART" */
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 9c89429f31130..895c5ba8b6ac2 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -1588,7 +1588,7 @@ struct v4l2_input {
__u8 name[32]; /* Label */
__u32 type; /* Type of input */
__u32 audioset; /* Associated audios (bitfield) */
- __u32 tuner; /* enum v4l2_tuner_type */
+ __u32 tuner; /* Tuner index */
v4l2_std_id std;
__u32 status;
__u32 capabilities;
diff --git a/kernel/bpf/bpf_lru_list.c b/kernel/bpf/bpf_lru_list.c
index d99e89f113c43..3dabdd137d102 100644
--- a/kernel/bpf/bpf_lru_list.c
+++ b/kernel/bpf/bpf_lru_list.c
@@ -41,7 +41,12 @@ static struct list_head *local_pending_list(struct bpf_lru_locallist *loc_l)
/* bpf_lru_node helpers */
static bool bpf_lru_node_is_ref(const struct bpf_lru_node *node)
{
- return node->ref;
+ return READ_ONCE(node->ref);
+}
+
+static void bpf_lru_node_clear_ref(struct bpf_lru_node *node)
+{
+ WRITE_ONCE(node->ref, 0);
}

static void bpf_lru_list_count_inc(struct bpf_lru_list *l,
@@ -89,7 +94,7 @@ static void __bpf_lru_node_move_in(struct bpf_lru_list *l,

bpf_lru_list_count_inc(l, tgt_type);
node->type = tgt_type;
- node->ref = 0;
+ bpf_lru_node_clear_ref(node);
list_move(&node->list, &l->lists[tgt_type]);
}

@@ -110,7 +115,7 @@ static void __bpf_lru_node_move(struct bpf_lru_list *l,
bpf_lru_list_count_inc(l, tgt_type);
node->type = tgt_type;
}
- node->ref = 0;
+ bpf_lru_node_clear_ref(node);

/* If the moving node is the next_inactive_rotation candidate,
* move the next_inactive_rotation pointer also.
@@ -353,7 +358,7 @@ static void __local_list_add_pending(struct bpf_lru *lru,
*(u32 *)((void *)node + lru->hash_offset) = hash;
node->cpu = cpu;
node->type = BPF_LRU_LOCAL_LIST_T_PENDING;
- node->ref = 0;
+ bpf_lru_node_clear_ref(node);
list_add(&node->list, local_pending_list(loc_l));
}

@@ -419,7 +424,7 @@ static struct bpf_lru_node *bpf_percpu_lru_pop_free(struct bpf_lru *lru,
if (!list_empty(free_list)) {
node = list_first_entry(free_list, struct bpf_lru_node, list);
*(u32 *)((void *)node + lru->hash_offset) = hash;
- node->ref = 0;
+ bpf_lru_node_clear_ref(node);
__bpf_lru_node_move(l, node, BPF_LRU_LIST_T_INACTIVE);
}

@@ -522,7 +527,7 @@ static void bpf_common_lru_push_free(struct bpf_lru *lru,
}

node->type = BPF_LRU_LOCAL_LIST_T_FREE;
- node->ref = 0;
+ bpf_lru_node_clear_ref(node);
list_move(&node->list, local_free_list(loc_l));

raw_spin_unlock_irqrestore(&loc_l->lock, flags);
@@ -568,7 +573,7 @@ static void bpf_common_lru_populate(struct bpf_lru *lru, void *buf,

node = (struct bpf_lru_node *)(buf + node_offset);
node->type = BPF_LRU_LIST_T_FREE;
- node->ref = 0;
+ bpf_lru_node_clear_ref(node);
list_add(&node->list, &l->lists[BPF_LRU_LIST_T_FREE]);
buf += elem_size;
}
@@ -594,7 +599,7 @@ again:
node = (struct bpf_lru_node *)(buf + node_offset);
node->cpu = cpu;
node->type = BPF_LRU_LIST_T_FREE;
- node->ref = 0;
+ bpf_lru_node_clear_ref(node);
list_add(&node->list, &l->lists[BPF_LRU_LIST_T_FREE]);
i++;
buf += elem_size;
diff --git a/kernel/bpf/bpf_lru_list.h b/kernel/bpf/bpf_lru_list.h
index f02504640e185..41f8fea530c8d 100644
--- a/kernel/bpf/bpf_lru_list.h
+++ b/kernel/bpf/bpf_lru_list.h
@@ -63,11 +63,8 @@ struct bpf_lru {

static inline void bpf_lru_node_set_ref(struct bpf_lru_node *node)
{
- /* ref is an approximation on access frequency. It does not
- * have to be very accurate. Hence, no protection is used.
- */
- if (!node->ref)
- node->ref = 1;
+ if (!READ_ONCE(node->ref))
+ WRITE_ONCE(node->ref, 1);
}

int bpf_lru_init(struct bpf_lru *lru, bool percpu, u32 hash_offset,
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d65b0fc8fb48b..3694d90c3722f 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -1019,6 +1019,7 @@ int crash_shrink_memory(unsigned long new_size)
start = crashk_res.start;
end = crashk_res.end;
old_size = (end == 0) ? 0 : end - start + 1;
+ new_size = roundup(new_size, KEXEC_CRASH_MEM_ALIGN);
if (new_size >= old_size) {
ret = (new_size == old_size) ? 0 : -EINVAL;
goto unlock;
@@ -1030,9 +1031,7 @@ int crash_shrink_memory(unsigned long new_size)
goto unlock;
}

- start = roundup(start, KEXEC_CRASH_MEM_ALIGN);
- end = roundup(start + new_size, KEXEC_CRASH_MEM_ALIGN);
-
+ end = start + new_size;
crash_free_reserved_phys_range(end, crashk_res.end);

if ((start == end) && (crashk_res.parent != NULL))
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9fcba0d2ab19b..2680216234ff2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8938,7 +8938,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
.sd = sd,
.dst_cpu = this_cpu,
.dst_rq = this_rq,
- .dst_grpmask = sched_group_span(sd->groups),
+ .dst_grpmask = group_balance_mask(sd->groups),
.idle = idle,
.loop_break = sched_nr_migrate_break,
.cpus = cpus,
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
|
|
index efe3873021a37..f3b8313475acd 100644
|
|
--- a/kernel/time/posix-timers.c
|
|
+++ b/kernel/time/posix-timers.c
|
|
@@ -138,25 +138,30 @@ static struct k_itimer *posix_timer_by_id(timer_t id)
|
|
static int posix_timer_add(struct k_itimer *timer)
|
|
{
|
|
struct signal_struct *sig = current->signal;
|
|
- int first_free_id = sig->posix_timer_id;
|
|
struct hlist_head *head;
|
|
- int ret = -ENOENT;
|
|
+ unsigned int cnt, id;
|
|
|
|
- do {
|
|
+ /*
|
|
+ * FIXME: Replace this by a per signal struct xarray once there is
|
|
+ * a plan to handle the resulting CRIU regression gracefully.
|
|
+ */
|
|
+ for (cnt = 0; cnt <= INT_MAX; cnt++) {
|
|
spin_lock(&hash_lock);
|
|
- head = &posix_timers_hashtable[hash(sig, sig->posix_timer_id)];
|
|
- if (!__posix_timers_find(head, sig, sig->posix_timer_id)) {
|
|
+ id = sig->next_posix_timer_id;
|
|
+
|
|
+ /* Write the next ID back. Clamp it to the positive space */
|
|
+ sig->next_posix_timer_id = (id + 1) & INT_MAX;
|
|
+
|
|
+ head = &posix_timers_hashtable[hash(sig, id)];
|
|
+ if (!__posix_timers_find(head, sig, id)) {
|
|
hlist_add_head_rcu(&timer->t_hash, head);
|
|
- ret = sig->posix_timer_id;
|
|
+ spin_unlock(&hash_lock);
|
|
+ return id;
|
|
}
|
|
- if (++sig->posix_timer_id < 0)
|
|
- sig->posix_timer_id = 0;
|
|
- if ((sig->posix_timer_id == first_free_id) && (ret == -ENOENT))
|
|
- /* Loop over all possible ids completed */
|
|
- ret = -EAGAIN;
|
|
spin_unlock(&hash_lock);
|
|
- } while (ret == -ENOENT);
|
|
- return ret;
|
|
+ }
|
|
+ /* POSIX return code when no timer ID could be allocated */
|
|
+ return -EAGAIN;
|
|
}
|
|
|
|
static inline void unlock_timer(struct k_itimer *timr, unsigned long flags)
|
|
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
|
|
index c24039c4d75ab..afd7f3a51485e 100644
|
|
--- a/kernel/trace/ring_buffer.c
|
|
+++ b/kernel/trace/ring_buffer.c
|
|
@@ -4487,28 +4487,34 @@ unsigned long ring_buffer_size(struct ring_buffer *buffer, int cpu)
|
|
}
|
|
EXPORT_SYMBOL_GPL(ring_buffer_size);
|
|
|
|
+static void rb_clear_buffer_page(struct buffer_page *page)
|
|
+{
|
|
+ local_set(&page->write, 0);
|
|
+ local_set(&page->entries, 0);
|
|
+ rb_init_page(page->page);
|
|
+ page->read = 0;
|
|
+}
|
|
+
|
|
static void
|
|
rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
|
|
{
|
|
+ struct buffer_page *page;
|
|
+
|
|
rb_head_page_deactivate(cpu_buffer);
|
|
|
|
cpu_buffer->head_page
|
|
= list_entry(cpu_buffer->pages, struct buffer_page, list);
|
|
- local_set(&cpu_buffer->head_page->write, 0);
|
|
- local_set(&cpu_buffer->head_page->entries, 0);
|
|
- local_set(&cpu_buffer->head_page->page->commit, 0);
|
|
-
|
|
- cpu_buffer->head_page->read = 0;
|
|
+ rb_clear_buffer_page(cpu_buffer->head_page);
|
|
+ list_for_each_entry(page, cpu_buffer->pages, list) {
|
|
+ rb_clear_buffer_page(page);
|
|
+ }
|
|
|
|
cpu_buffer->tail_page = cpu_buffer->head_page;
|
|
cpu_buffer->commit_page = cpu_buffer->head_page;
|
|
|
|
INIT_LIST_HEAD(&cpu_buffer->reader_page->list);
|
|
INIT_LIST_HEAD(&cpu_buffer->new_pages);
|
|
- local_set(&cpu_buffer->reader_page->write, 0);
|
|
- local_set(&cpu_buffer->reader_page->entries, 0);
|
|
- local_set(&cpu_buffer->reader_page->page->commit, 0);
|
|
- cpu_buffer->reader_page->read = 0;
|
|
+ rb_clear_buffer_page(cpu_buffer->reader_page);
|
|
|
|
local_set(&cpu_buffer->entries_bytes, 0);
|
|
local_set(&cpu_buffer->overrun, 0);
|
|
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
|
|
index 219cd2c819369..7f7c700a61560 100644
|
|
--- a/kernel/trace/trace.c
|
|
+++ b/kernel/trace/trace.c
|
|
@@ -7263,7 +7263,7 @@ static const struct file_operations tracing_err_log_fops = {
|
|
.open = tracing_err_log_open,
|
|
.write = tracing_err_log_write,
|
|
.read = seq_read,
|
|
- .llseek = seq_lseek,
|
|
+ .llseek = tracing_lseek,
|
|
.release = tracing_err_log_release,
|
|
};
|
|
|
|
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
|
|
index 3cb937c17ce04..1ede6d41ab8da 100644
|
|
--- a/kernel/trace/trace_events_hist.c
|
|
+++ b/kernel/trace/trace_events_hist.c
|
|
@@ -6423,13 +6423,16 @@ static int event_hist_trigger_func(struct event_command *cmd_ops,
|
|
if (get_named_trigger_data(trigger_data))
|
|
goto enable;
|
|
|
|
- if (has_hist_vars(hist_data))
|
|
- save_hist_vars(hist_data);
|
|
-
|
|
ret = create_actions(hist_data);
|
|
if (ret)
|
|
goto out_unreg;
|
|
|
|
+ if (has_hist_vars(hist_data) || hist_data->n_var_refs) {
|
|
+ ret = save_hist_vars(hist_data);
|
|
+ if (ret)
|
|
+ goto out_unreg;
|
|
+ }
|
|
+
|
|
ret = tracing_map_init(hist_data->map);
|
|
if (ret)
|
|
goto out_unreg;
|
|
diff --git a/kernel/trace/trace_probe_tmpl.h b/kernel/trace/trace_probe_tmpl.h
|
|
index e5282828f4a60..29348874ebde7 100644
|
|
--- a/kernel/trace/trace_probe_tmpl.h
|
|
+++ b/kernel/trace/trace_probe_tmpl.h
|
|
@@ -143,6 +143,8 @@ stage3:
|
|
array:
|
|
/* the last stage: Loop on array */
|
|
if (code->op == FETCH_OP_LP_ARRAY) {
|
|
+ if (ret < 0)
|
|
+ ret = 0;
|
|
total += ret;
|
|
if (++i < code->param) {
|
|
code = s3;
|
|
diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
|
|
index 247bf0b1582ca..1e8a49dc956e2 100644
|
|
--- a/kernel/watchdog_hld.c
|
|
+++ b/kernel/watchdog_hld.c
|
|
@@ -114,14 +114,14 @@ static void watchdog_overflow_callback(struct perf_event *event,
|
|
/* Ensure the watchdog never gets throttled */
|
|
event->hw.interrupts = 0;
|
|
|
|
+ if (!watchdog_check_timestamp())
|
|
+ return;
|
|
+
|
|
if (__this_cpu_read(watchdog_nmi_touch) == true) {
|
|
__this_cpu_write(watchdog_nmi_touch, false);
|
|
return;
|
|
}
|
|
|
|
- if (!watchdog_check_timestamp())
|
|
- return;
|
|
-
|
|
/* check for a hardlockup
|
|
* This is done by making sure our timer interrupt
|
|
* is incrementing. The timer interrupt should have
|
|
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
|
|
index dd96391b44de0..856188b0681af 100644
|
|
--- a/kernel/workqueue.c
|
|
+++ b/kernel/workqueue.c
|
|
@@ -684,12 +684,17 @@ static void clear_work_data(struct work_struct *work)
|
|
set_work_data(work, WORK_STRUCT_NO_POOL, 0);
|
|
}
|
|
|
|
+static inline struct pool_workqueue *work_struct_pwq(unsigned long data)
|
|
+{
|
|
+ return (struct pool_workqueue *)(data & WORK_STRUCT_WQ_DATA_MASK);
|
|
+}
|
|
+
|
|
static struct pool_workqueue *get_work_pwq(struct work_struct *work)
|
|
{
|
|
unsigned long data = atomic_long_read(&work->data);
|
|
|
|
if (data & WORK_STRUCT_PWQ)
|
|
- return (void *)(data & WORK_STRUCT_WQ_DATA_MASK);
|
|
+ return work_struct_pwq(data);
|
|
else
|
|
return NULL;
|
|
}
|
|
@@ -717,8 +722,7 @@ static struct worker_pool *get_work_pool(struct work_struct *work)
|
|
assert_rcu_or_pool_mutex();
|
|
|
|
if (data & WORK_STRUCT_PWQ)
|
|
- return ((struct pool_workqueue *)
|
|
- (data & WORK_STRUCT_WQ_DATA_MASK))->pool;
|
|
+ return work_struct_pwq(data)->pool;
|
|
|
|
pool_id = data >> WORK_OFFQ_POOL_SHIFT;
|
|
if (pool_id == WORK_OFFQ_POOL_NONE)
|
|
@@ -739,8 +743,7 @@ static int get_work_pool_id(struct work_struct *work)
|
|
unsigned long data = atomic_long_read(&work->data);
|
|
|
|
if (data & WORK_STRUCT_PWQ)
|
|
- return ((struct pool_workqueue *)
|
|
- (data & WORK_STRUCT_WQ_DATA_MASK))->pool->id;
|
|
+ return work_struct_pwq(data)->pool->id;
|
|
|
|
return data >> WORK_OFFQ_POOL_SHIFT;
|
|
}
|
|
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
|
|
index 26fa04335537b..b0e4301d74954 100644
|
|
--- a/lib/debugobjects.c
|
|
+++ b/lib/debugobjects.c
|
|
@@ -474,6 +474,15 @@ static void debug_print_object(struct debug_obj *obj, char *msg)
|
|
struct debug_obj_descr *descr = obj->descr;
|
|
static int limit;
|
|
|
|
+ /*
|
|
+ * Don't report if lookup_object_or_alloc() by the current thread
|
|
+ * failed because lookup_object_or_alloc()/debug_objects_oom() by a
|
|
+ * concurrent thread turned off debug_objects_enabled and cleared
|
|
+ * the hash buckets.
|
|
+ */
|
|
+ if (!debug_objects_enabled)
|
|
+ return;
|
|
+
|
|
if (limit < 5 && descr != descr_test) {
|
|
void *hint = descr->debug_hint ?
|
|
descr->debug_hint(obj->object) : NULL;
|
|
diff --git a/lib/ts_bm.c b/lib/ts_bm.c
|
|
index b352903c50e38..0a22ae48af61f 100644
|
|
--- a/lib/ts_bm.c
|
|
+++ b/lib/ts_bm.c
|
|
@@ -60,10 +60,12 @@ static unsigned int bm_find(struct ts_config *conf, struct ts_state *state)
|
|
struct ts_bm *bm = ts_config_priv(conf);
|
|
unsigned int i, text_len, consumed = state->offset;
|
|
const u8 *text;
|
|
- int shift = bm->patlen - 1, bs;
|
|
+ int bs;
|
|
const u8 icase = conf->flags & TS_IGNORECASE;
|
|
|
|
for (;;) {
|
|
+ int shift = bm->patlen - 1;
|
|
+
|
|
text_len = conf->get_next_block(consumed, &text, conf, state);
|
|
|
|
if (unlikely(text_len == 0))
|
|
diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
|
|
index e2a999890d05e..6b650dfc084dc 100644
|
|
--- a/net/bridge/br_if.c
|
|
+++ b/net/bridge/br_if.c
|
|
@@ -157,8 +157,9 @@ void br_manage_promisc(struct net_bridge *br)
|
|
* This lets us disable promiscuous mode and write
|
|
* this config to hw.
|
|
*/
|
|
- if (br->auto_cnt == 0 ||
|
|
- (br->auto_cnt == 1 && br_auto_port(p)))
|
|
+ if ((p->dev->priv_flags & IFF_UNICAST_FLT) &&
|
|
+ (br->auto_cnt == 0 ||
|
|
+ (br->auto_cnt == 1 && br_auto_port(p))))
|
|
br_port_clear_promisc(p);
|
|
else
|
|
br_port_set_promisc(p);
|
|
diff --git a/net/can/bcm.c b/net/can/bcm.c
|
|
index 23c7d5f896bd2..5cb4b6129263c 100644
|
|
--- a/net/can/bcm.c
|
|
+++ b/net/can/bcm.c
|
|
@@ -1523,6 +1523,12 @@ static int bcm_release(struct socket *sock)
|
|
|
|
lock_sock(sk);
|
|
|
|
+#if IS_ENABLED(CONFIG_PROC_FS)
|
|
+ /* remove procfs entry */
|
|
+ if (net->can.bcmproc_dir && bo->bcm_proc_read)
|
|
+ remove_proc_entry(bo->procname, net->can.bcmproc_dir);
|
|
+#endif /* CONFIG_PROC_FS */
|
|
+
|
|
list_for_each_entry_safe(op, next, &bo->tx_ops, list)
|
|
bcm_remove_op(op);
|
|
|
|
@@ -1558,12 +1564,6 @@ static int bcm_release(struct socket *sock)
|
|
list_for_each_entry_safe(op, next, &bo->rx_ops, list)
|
|
bcm_remove_op(op);
|
|
|
|
-#if IS_ENABLED(CONFIG_PROC_FS)
|
|
- /* remove procfs entry */
|
|
- if (net->can.bcmproc_dir && bo->bcm_proc_read)
|
|
- remove_proc_entry(bo->procname, net->can.bcmproc_dir);
|
|
-#endif /* CONFIG_PROC_FS */
|
|
-
|
|
/* remove device reference */
|
|
if (bo->bound) {
|
|
bo->bound = 0;
|
|
diff --git a/net/core/devlink.c b/net/core/devlink.c
|
|
index 2dd354d869cd7..b4dabe5d89f72 100644
|
|
--- a/net/core/devlink.c
|
|
+++ b/net/core/devlink.c
|
|
@@ -6299,7 +6299,10 @@ EXPORT_SYMBOL_GPL(devlink_free);
|
|
|
|
static void devlink_port_type_warn(struct work_struct *work)
|
|
{
|
|
- WARN(true, "Type was not set for devlink port.");
|
|
+ struct devlink_port *port = container_of(to_delayed_work(work),
|
|
+ struct devlink_port,
|
|
+ type_warn_dw);
|
|
+ dev_warn(port->devlink->dev, "Type was not set for devlink port.");
|
|
}
|
|
|
|
static bool devlink_port_type_should_warn(struct devlink_port *devlink_port)
|
|
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
|
|
index da1ef00fc9cc2..1db92a44548f0 100644
|
|
--- a/net/core/rtnetlink.c
|
|
+++ b/net/core/rtnetlink.c
|
|
@@ -922,24 +922,27 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev,
|
|
nla_total_size(sizeof(struct ifla_vf_rate)) +
|
|
nla_total_size(sizeof(struct ifla_vf_link_state)) +
|
|
nla_total_size(sizeof(struct ifla_vf_rss_query_en)) +
|
|
- nla_total_size(0) + /* nest IFLA_VF_STATS */
|
|
- /* IFLA_VF_STATS_RX_PACKETS */
|
|
- nla_total_size_64bit(sizeof(__u64)) +
|
|
- /* IFLA_VF_STATS_TX_PACKETS */
|
|
- nla_total_size_64bit(sizeof(__u64)) +
|
|
- /* IFLA_VF_STATS_RX_BYTES */
|
|
- nla_total_size_64bit(sizeof(__u64)) +
|
|
- /* IFLA_VF_STATS_TX_BYTES */
|
|
- nla_total_size_64bit(sizeof(__u64)) +
|
|
- /* IFLA_VF_STATS_BROADCAST */
|
|
- nla_total_size_64bit(sizeof(__u64)) +
|
|
- /* IFLA_VF_STATS_MULTICAST */
|
|
- nla_total_size_64bit(sizeof(__u64)) +
|
|
- /* IFLA_VF_STATS_RX_DROPPED */
|
|
- nla_total_size_64bit(sizeof(__u64)) +
|
|
- /* IFLA_VF_STATS_TX_DROPPED */
|
|
- nla_total_size_64bit(sizeof(__u64)) +
|
|
nla_total_size(sizeof(struct ifla_vf_trust)));
|
|
+ if (~ext_filter_mask & RTEXT_FILTER_SKIP_STATS) {
|
|
+ size += num_vfs *
|
|
+ (nla_total_size(0) + /* nest IFLA_VF_STATS */
|
|
+ /* IFLA_VF_STATS_RX_PACKETS */
|
|
+ nla_total_size_64bit(sizeof(__u64)) +
|
|
+ /* IFLA_VF_STATS_TX_PACKETS */
|
|
+ nla_total_size_64bit(sizeof(__u64)) +
|
|
+ /* IFLA_VF_STATS_RX_BYTES */
|
|
+ nla_total_size_64bit(sizeof(__u64)) +
|
|
+ /* IFLA_VF_STATS_TX_BYTES */
|
|
+ nla_total_size_64bit(sizeof(__u64)) +
|
|
+ /* IFLA_VF_STATS_BROADCAST */
|
|
+ nla_total_size_64bit(sizeof(__u64)) +
|
|
+ /* IFLA_VF_STATS_MULTICAST */
|
|
+ nla_total_size_64bit(sizeof(__u64)) +
|
|
+ /* IFLA_VF_STATS_RX_DROPPED */
|
|
+ nla_total_size_64bit(sizeof(__u64)) +
|
|
+ /* IFLA_VF_STATS_TX_DROPPED */
|
|
+ nla_total_size_64bit(sizeof(__u64)));
|
|
+ }
|
|
return size;
|
|
} else
|
|
return 0;
|
|
@@ -1189,7 +1192,8 @@ static noinline_for_stack int rtnl_fill_stats(struct sk_buff *skb,
|
|
static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
|
|
struct net_device *dev,
|
|
int vfs_num,
|
|
- struct nlattr *vfinfo)
|
|
+ struct nlattr *vfinfo,
|
|
+ u32 ext_filter_mask)
|
|
{
|
|
struct ifla_vf_rss_query_en vf_rss_query_en;
|
|
struct nlattr *vf, *vfstats, *vfvlanlist;
|
|
@@ -1279,33 +1283,35 @@ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
|
|
goto nla_put_vf_failure;
|
|
}
|
|
nla_nest_end(skb, vfvlanlist);
|
|
- memset(&vf_stats, 0, sizeof(vf_stats));
|
|
- if (dev->netdev_ops->ndo_get_vf_stats)
|
|
- dev->netdev_ops->ndo_get_vf_stats(dev, vfs_num,
|
|
- &vf_stats);
|
|
- vfstats = nla_nest_start_noflag(skb, IFLA_VF_STATS);
|
|
- if (!vfstats)
|
|
- goto nla_put_vf_failure;
|
|
- if (nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_PACKETS,
|
|
- vf_stats.rx_packets, IFLA_VF_STATS_PAD) ||
|
|
- nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_PACKETS,
|
|
- vf_stats.tx_packets, IFLA_VF_STATS_PAD) ||
|
|
- nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_BYTES,
|
|
- vf_stats.rx_bytes, IFLA_VF_STATS_PAD) ||
|
|
- nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_BYTES,
|
|
- vf_stats.tx_bytes, IFLA_VF_STATS_PAD) ||
|
|
- nla_put_u64_64bit(skb, IFLA_VF_STATS_BROADCAST,
|
|
- vf_stats.broadcast, IFLA_VF_STATS_PAD) ||
|
|
- nla_put_u64_64bit(skb, IFLA_VF_STATS_MULTICAST,
|
|
- vf_stats.multicast, IFLA_VF_STATS_PAD) ||
|
|
- nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_DROPPED,
|
|
- vf_stats.rx_dropped, IFLA_VF_STATS_PAD) ||
|
|
- nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_DROPPED,
|
|
- vf_stats.tx_dropped, IFLA_VF_STATS_PAD)) {
|
|
- nla_nest_cancel(skb, vfstats);
|
|
- goto nla_put_vf_failure;
|
|
+ if (~ext_filter_mask & RTEXT_FILTER_SKIP_STATS) {
|
|
+ memset(&vf_stats, 0, sizeof(vf_stats));
|
|
+ if (dev->netdev_ops->ndo_get_vf_stats)
|
|
+ dev->netdev_ops->ndo_get_vf_stats(dev, vfs_num,
|
|
+ &vf_stats);
|
|
+ vfstats = nla_nest_start_noflag(skb, IFLA_VF_STATS);
|
|
+ if (!vfstats)
|
|
+ goto nla_put_vf_failure;
|
|
+ if (nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_PACKETS,
|
|
+ vf_stats.rx_packets, IFLA_VF_STATS_PAD) ||
|
|
+ nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_PACKETS,
|
|
+ vf_stats.tx_packets, IFLA_VF_STATS_PAD) ||
|
|
+ nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_BYTES,
|
|
+ vf_stats.rx_bytes, IFLA_VF_STATS_PAD) ||
|
|
+ nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_BYTES,
|
|
+ vf_stats.tx_bytes, IFLA_VF_STATS_PAD) ||
|
|
+ nla_put_u64_64bit(skb, IFLA_VF_STATS_BROADCAST,
|
|
+ vf_stats.broadcast, IFLA_VF_STATS_PAD) ||
|
|
+ nla_put_u64_64bit(skb, IFLA_VF_STATS_MULTICAST,
|
|
+ vf_stats.multicast, IFLA_VF_STATS_PAD) ||
|
|
+ nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_DROPPED,
|
|
+ vf_stats.rx_dropped, IFLA_VF_STATS_PAD) ||
|
|
+ nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_DROPPED,
|
|
+ vf_stats.tx_dropped, IFLA_VF_STATS_PAD)) {
|
|
+ nla_nest_cancel(skb, vfstats);
|
|
+ goto nla_put_vf_failure;
|
|
+ }
|
|
+ nla_nest_end(skb, vfstats);
|
|
}
|
|
- nla_nest_end(skb, vfstats);
|
|
nla_nest_end(skb, vf);
|
|
return 0;
|
|
|
|
@@ -1338,7 +1344,7 @@ static noinline_for_stack int rtnl_fill_vf(struct sk_buff *skb,
|
|
return -EMSGSIZE;
|
|
|
|
for (i = 0; i < num_vfs; i++) {
|
|
- if (rtnl_fill_vfinfo(skb, dev, i, vfinfo))
|
|
+ if (rtnl_fill_vfinfo(skb, dev, i, vfinfo, ext_filter_mask))
|
|
return -EMSGSIZE;
|
|
}
|
|
|
|
@@ -3580,7 +3586,7 @@ static int nlmsg_populate_fdb_fill(struct sk_buff *skb,
|
|
ndm->ndm_ifindex = dev->ifindex;
|
|
ndm->ndm_state = ndm_state;
|
|
|
|
- if (nla_put(skb, NDA_LLADDR, ETH_ALEN, addr))
|
|
+ if (nla_put(skb, NDA_LLADDR, dev->addr_len, addr))
|
|
goto nla_put_failure;
|
|
if (vid)
|
|
if (nla_put(skb, NDA_VLAN, sizeof(u16), &vid))
|
|
@@ -3594,10 +3600,10 @@ nla_put_failure:
|
|
return -EMSGSIZE;
|
|
}
|
|
|
|
-static inline size_t rtnl_fdb_nlmsg_size(void)
|
|
+static inline size_t rtnl_fdb_nlmsg_size(const struct net_device *dev)
|
|
{
|
|
return NLMSG_ALIGN(sizeof(struct ndmsg)) +
|
|
- nla_total_size(ETH_ALEN) + /* NDA_LLADDR */
|
|
+ nla_total_size(dev->addr_len) + /* NDA_LLADDR */
|
|
nla_total_size(sizeof(u16)) + /* NDA_VLAN */
|
|
0;
|
|
}
|
|
@@ -3609,7 +3615,7 @@ static void rtnl_fdb_notify(struct net_device *dev, u8 *addr, u16 vid, int type,
|
|
struct sk_buff *skb;
|
|
int err = -ENOBUFS;
|
|
|
|
- skb = nlmsg_new(rtnl_fdb_nlmsg_size(), GFP_ATOMIC);
|
|
+ skb = nlmsg_new(rtnl_fdb_nlmsg_size(dev), GFP_ATOMIC);
|
|
if (!skb)
|
|
goto errout;
|
|
|
|
diff --git a/net/core/sock.c b/net/core/sock.c
|
|
index 5e1dccbd61a60..d55eea5538bce 100644
|
|
--- a/net/core/sock.c
|
|
+++ b/net/core/sock.c
|
|
@@ -2085,13 +2085,24 @@ kuid_t sock_i_uid(struct sock *sk)
|
|
}
|
|
EXPORT_SYMBOL(sock_i_uid);
|
|
|
|
-unsigned long sock_i_ino(struct sock *sk)
|
|
+unsigned long __sock_i_ino(struct sock *sk)
|
|
{
|
|
unsigned long ino;
|
|
|
|
- read_lock_bh(&sk->sk_callback_lock);
|
|
+ read_lock(&sk->sk_callback_lock);
|
|
ino = sk->sk_socket ? SOCK_INODE(sk->sk_socket)->i_ino : 0;
|
|
- read_unlock_bh(&sk->sk_callback_lock);
|
|
+ read_unlock(&sk->sk_callback_lock);
|
|
+ return ino;
|
|
+}
|
|
+EXPORT_SYMBOL(__sock_i_ino);
|
|
+
|
|
+unsigned long sock_i_ino(struct sock *sk)
|
|
+{
|
|
+ unsigned long ino;
|
|
+
|
|
+ local_bh_disable();
|
|
+ ino = __sock_i_ino(sk);
|
|
+ local_bh_enable();
|
|
return ino;
|
|
}
|
|
EXPORT_SYMBOL(sock_i_ino);
|
|
diff --git a/net/dsa/tag_sja1105.c b/net/dsa/tag_sja1105.c
|
|
index 12f3ce52e62eb..836a75030a520 100644
|
|
--- a/net/dsa/tag_sja1105.c
|
|
+++ b/net/dsa/tag_sja1105.c
|
|
@@ -48,8 +48,8 @@ static void sja1105_meta_unpack(const struct sk_buff *skb,
|
|
* a unified unpacking command for both device series.
|
|
*/
|
|
packing(buf, &meta->tstamp, 31, 0, 4, UNPACK, 0);
|
|
- packing(buf + 4, &meta->dmac_byte_4, 7, 0, 1, UNPACK, 0);
|
|
- packing(buf + 5, &meta->dmac_byte_3, 7, 0, 1, UNPACK, 0);
|
|
+ packing(buf + 4, &meta->dmac_byte_3, 7, 0, 1, UNPACK, 0);
|
|
+ packing(buf + 5, &meta->dmac_byte_4, 7, 0, 1, UNPACK, 0);
|
|
packing(buf + 6, &meta->source_port, 7, 0, 1, UNPACK, 0);
|
|
packing(buf + 7, &meta->switch_id, 7, 0, 1, UNPACK, 0);
|
|
}
|
|
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
|
|
index 9d14b3289f003..e4f2790fd6410 100644
|
|
--- a/net/ipv4/inet_hashtables.c
|
|
+++ b/net/ipv4/inet_hashtables.c
|
|
@@ -536,20 +536,8 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
|
|
spin_lock(lock);
|
|
if (osk) {
|
|
WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
|
|
- ret = sk_hashed(osk);
|
|
- if (ret) {
|
|
- /* Before deleting the node, we insert a new one to make
|
|
- * sure that the look-up-sk process would not miss either
|
|
- * of them and that at least one node would exist in ehash
|
|
- * table all the time. Otherwise there's a tiny chance
|
|
- * that lookup process could find nothing in ehash table.
|
|
- */
|
|
- __sk_nulls_add_node_tail_rcu(sk, list);
|
|
- sk_nulls_del_node_init_rcu(osk);
|
|
- }
|
|
- goto unlock;
|
|
- }
|
|
- if (found_dup_sk) {
|
|
+ ret = sk_nulls_del_node_init_rcu(osk);
|
|
+ } else if (found_dup_sk) {
|
|
*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
|
|
if (*found_dup_sk)
|
|
ret = false;
|
|
@@ -558,7 +546,6 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
|
|
if (ret)
|
|
__sk_nulls_add_node_rcu(sk, list);
|
|
|
|
-unlock:
|
|
spin_unlock(lock);
|
|
|
|
return ret;
|
|
diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
|
|
index a00102d7c7fd4..c411c87ae865f 100644
|
|
--- a/net/ipv4/inet_timewait_sock.c
|
|
+++ b/net/ipv4/inet_timewait_sock.c
|
|
@@ -81,10 +81,10 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
|
|
}
|
|
EXPORT_SYMBOL_GPL(inet_twsk_put);
|
|
|
|
-static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
|
|
- struct hlist_nulls_head *list)
|
|
+static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
|
|
+ struct hlist_nulls_head *list)
|
|
{
|
|
- hlist_nulls_add_tail_rcu(&tw->tw_node, list);
|
|
+ hlist_nulls_add_head_rcu(&tw->tw_node, list);
|
|
}
|
|
|
|
static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
|
|
@@ -120,7 +120,7 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
|
|
|
|
spin_lock(lock);
|
|
|
|
- inet_twsk_add_node_tail_rcu(tw, &ehead->chain);
|
|
+ inet_twsk_add_node_rcu(tw, &ehead->chain);
|
|
|
|
/* Step 3: Remove SK from hash chain */
|
|
if (__sk_nulls_del_node_init_rcu(sk))
|
|
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
|
|
index fdf2ddc4864df..647cb664c2ad0 100644
|
|
--- a/net/ipv4/tcp.c
|
|
+++ b/net/ipv4/tcp.c
|
|
@@ -3066,18 +3066,18 @@ static int do_tcp_setsockopt(struct sock *sk, int level,
|
|
|
|
case TCP_LINGER2:
|
|
if (val < 0)
|
|
- tp->linger2 = -1;
|
|
- else if (val > net->ipv4.sysctl_tcp_fin_timeout / HZ)
|
|
- tp->linger2 = 0;
|
|
+ WRITE_ONCE(tp->linger2, -1);
|
|
+ else if (val > TCP_FIN_TIMEOUT_MAX / HZ)
|
|
+ WRITE_ONCE(tp->linger2, TCP_FIN_TIMEOUT_MAX);
|
|
else
|
|
- tp->linger2 = val * HZ;
|
|
+ WRITE_ONCE(tp->linger2, val * HZ);
|
|
break;
|
|
|
|
case TCP_DEFER_ACCEPT:
|
|
/* Translate value in seconds to number of retransmits */
|
|
- icsk->icsk_accept_queue.rskq_defer_accept =
|
|
- secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ,
|
|
- TCP_RTO_MAX / HZ);
|
|
+ WRITE_ONCE(icsk->icsk_accept_queue.rskq_defer_accept,
|
|
+ secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ,
|
|
+ TCP_RTO_MAX / HZ));
|
|
break;
|
|
|
|
case TCP_WINDOW_CLAMP:
|
|
@@ -3165,7 +3165,7 @@ static int do_tcp_setsockopt(struct sock *sk, int level,
|
|
err = tcp_repair_set_window(tp, optval, optlen);
|
|
break;
|
|
case TCP_NOTSENT_LOWAT:
|
|
- tp->notsent_lowat = val;
|
|
+ WRITE_ONCE(tp->notsent_lowat, val);
|
|
sk->sk_write_space(sk);
|
|
break;
|
|
case TCP_INQ:
|
|
@@ -3177,7 +3177,7 @@ static int do_tcp_setsockopt(struct sock *sk, int level,
|
|
case TCP_TX_DELAY:
|
|
if (val)
|
|
tcp_enable_tx_delay();
|
|
- tp->tcp_tx_delay = val;
|
|
+ WRITE_ONCE(tp->tcp_tx_delay, val);
|
|
break;
|
|
default:
|
|
err = -ENOPROTOOPT;
|
|
@@ -3476,13 +3476,14 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
|
|
val = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries;
|
|
break;
|
|
case TCP_LINGER2:
|
|
- val = tp->linger2;
|
|
+ val = READ_ONCE(tp->linger2);
|
|
if (val >= 0)
|
|
val = (val ? : READ_ONCE(net->ipv4.sysctl_tcp_fin_timeout)) / HZ;
|
|
break;
|
|
case TCP_DEFER_ACCEPT:
|
|
- val = retrans_to_secs(icsk->icsk_accept_queue.rskq_defer_accept,
|
|
- TCP_TIMEOUT_INIT / HZ, TCP_RTO_MAX / HZ);
|
|
+ val = READ_ONCE(icsk->icsk_accept_queue.rskq_defer_accept);
|
|
+ val = retrans_to_secs(val, TCP_TIMEOUT_INIT / HZ,
|
|
+ TCP_RTO_MAX / HZ);
|
|
break;
|
|
case TCP_WINDOW_CLAMP:
|
|
val = tp->window_clamp;
|
|
@@ -3622,7 +3623,7 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
|
|
break;
|
|
|
|
case TCP_FASTOPEN:
|
|
- val = icsk->icsk_accept_queue.fastopenq.max_qlen;
|
|
+ val = READ_ONCE(icsk->icsk_accept_queue.fastopenq.max_qlen);
|
|
break;
|
|
|
|
case TCP_FASTOPEN_CONNECT:
|
|
@@ -3634,14 +3635,14 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
|
|
break;
|
|
|
|
case TCP_TX_DELAY:
|
|
- val = tp->tcp_tx_delay;
|
|
+ val = READ_ONCE(tp->tcp_tx_delay);
|
|
break;
|
|
|
|
case TCP_TIMESTAMP:
|
|
val = tcp_time_stamp_raw() + tp->tsoffset;
|
|
break;
|
|
case TCP_NOTSENT_LOWAT:
|
|
- val = tp->notsent_lowat;
|
|
+ val = READ_ONCE(tp->notsent_lowat);
|
|
break;
|
|
case TCP_INQ:
|
|
val = tp->recvmsg_inq;
|
|
diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
|
|
index 21705b2ddaffa..35088cd30840d 100644
|
|
--- a/net/ipv4/tcp_fastopen.c
|
|
+++ b/net/ipv4/tcp_fastopen.c
|
|
@@ -312,6 +312,7 @@ static struct sock *tcp_fastopen_create_child(struct sock *sk,
|
|
static bool tcp_fastopen_queue_check(struct sock *sk)
|
|
{
|
|
struct fastopen_queue *fastopenq;
|
|
+ int max_qlen;
|
|
|
|
/* Make sure the listener has enabled fastopen, and we don't
|
|
* exceed the max # of pending TFO requests allowed before trying
|
|
@@ -324,10 +325,11 @@ static bool tcp_fastopen_queue_check(struct sock *sk)
|
|
* temporarily vs a server not supporting Fast Open at all.
|
|
*/
|
|
fastopenq = &inet_csk(sk)->icsk_accept_queue.fastopenq;
|
|
- if (fastopenq->max_qlen == 0)
|
|
+ max_qlen = READ_ONCE(fastopenq->max_qlen);
|
|
+ if (max_qlen == 0)
|
|
return false;
|
|
|
|
- if (fastopenq->qlen >= fastopenq->max_qlen) {
|
|
+ if (fastopenq->qlen >= max_qlen) {
|
|
struct request_sock *req1;
|
|
spin_lock(&fastopenq->lock);
|
|
req1 = fastopenq->rskq_rst_head;
|
|
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
|
|
index 44398317f033a..8308c3c3a6e46 100644
|
|
--- a/net/ipv4/tcp_input.c
|
|
+++ b/net/ipv4/tcp_input.c
|
|
@@ -3445,8 +3445,11 @@ static int tcp_ack_update_window(struct sock *sk, const struct sk_buff *skb, u32
|
|
static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
|
|
u32 *last_oow_ack_time)
|
|
{
|
|
- if (*last_oow_ack_time) {
|
|
- s32 elapsed = (s32)(tcp_jiffies32 - *last_oow_ack_time);
|
|
+ /* Paired with the WRITE_ONCE() in this function. */
|
|
+ u32 val = READ_ONCE(*last_oow_ack_time);
|
|
+
|
|
+ if (val) {
|
|
+ s32 elapsed = (s32)(tcp_jiffies32 - val);
|
|
|
|
if (0 <= elapsed &&
|
|
elapsed < READ_ONCE(net->ipv4.sysctl_tcp_invalid_ratelimit)) {
|
|
@@ -3455,7 +3458,10 @@ static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
|
|
}
|
|
}
|
|
|
|
- *last_oow_ack_time = tcp_jiffies32;
|
|
+ /* Paired with the prior READ_ONCE() and with itself,
|
|
+ * as we might be lockless.
|
|
+ */
|
|
+ WRITE_ONCE(*last_oow_ack_time, tcp_jiffies32);
|
|
|
|
return false; /* not rate-limited: go ahead, send dupack now! */
|
|
}
|
|
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
|
|
index a0123760fb2c7..46e3c939958bb 100644
|
|
--- a/net/ipv6/addrconf.c
|
|
+++ b/net/ipv6/addrconf.c
|
|
@@ -313,9 +313,8 @@ static void addrconf_del_dad_work(struct inet6_ifaddr *ifp)
|
|
static void addrconf_mod_rs_timer(struct inet6_dev *idev,
|
|
unsigned long when)
|
|
{
|
|
- if (!timer_pending(&idev->rs_timer))
|
|
+ if (!mod_timer(&idev->rs_timer, jiffies + when))
|
|
in6_dev_hold(idev);
|
|
- mod_timer(&idev->rs_timer, jiffies + when);
|
|
}
|
|
|
|
static void addrconf_mod_dad_work(struct inet6_ifaddr *ifp,
|
|
diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
|
|
index 3db10cae7b178..169467b5c98a6 100644
|
|
--- a/net/ipv6/icmp.c
|
|
+++ b/net/ipv6/icmp.c
|
|
@@ -410,7 +410,10 @@ static struct net_device *icmp6_dev(const struct sk_buff *skb)
|
|
if (unlikely(dev->ifindex == LOOPBACK_IFINDEX || netif_is_l3_master(skb->dev))) {
|
|
const struct rt6_info *rt6 = skb_rt6_info(skb);
|
|
|
|
- if (rt6)
|
|
+ /* The destination could be an external IP in Ext Hdr (SRv6, RPL, etc.),
|
|
+ * and ip6_null_entry could be set to skb if no route is found.
|
|
+ */
|
|
+ if (rt6 && rt6->rt6i_idev)
|
|
dev = rt6->rt6i_idev->dev;
|
|
}
|
|
|
|
diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
|
|
index 0977137b00dc4..2d34bd98fccea 100644
|
|
--- a/net/ipv6/ip6_gre.c
|
|
+++ b/net/ipv6/ip6_gre.c
|
|
@@ -941,7 +941,8 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
|
|
goto tx_err;
|
|
|
|
if (skb->len > dev->mtu + dev->hard_header_len) {
|
|
- pskb_trim(skb, dev->mtu + dev->hard_header_len);
|
|
+ if (pskb_trim(skb, dev->mtu + dev->hard_header_len))
|
|
+ goto tx_err;
|
|
truncate = true;
|
|
}
|
|
|
|
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
|
|
index 797d45ceb2c74..93eb622219756 100644
|
|
--- a/net/ipv6/udp.c
|
|
+++ b/net/ipv6/udp.c
|
|
@@ -87,7 +87,7 @@ static u32 udp6_ehashfn(const struct net *net,
|
|
fhash = __ipv6_addr_jhash(faddr, udp_ipv6_hash_secret);
|
|
|
|
return __inet6_ehashfn(lhash, lport, fhash, fport,
|
|
- udp_ipv6_hash_secret + net_hash_mix(net));
|
|
+ udp6_ehash_secret + net_hash_mix(net));
|
|
}
|
|
|
|
int udp_v6_get_port(struct sock *sk, unsigned short snum)
|
|
diff --git a/net/llc/llc_input.c b/net/llc/llc_input.c
|
|
index 82cb93f66b9bd..f9e801cc50f5e 100644
|
|
--- a/net/llc/llc_input.c
|
|
+++ b/net/llc/llc_input.c
|
|
@@ -162,9 +162,6 @@ int llc_rcv(struct sk_buff *skb, struct net_device *dev,
|
|
void (*sta_handler)(struct sk_buff *skb);
|
|
void (*sap_handler)(struct llc_sap *sap, struct sk_buff *skb);
|
|
|
|
- if (!net_eq(dev_net(dev), &init_net))
|
|
- goto drop;
|
|
-
|
|
/*
|
|
* When the interface is in promisc. mode, drop all the crap that it
|
|
* receives, do not try to analyse it.
|
|
diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
|
|
index e3c14f8890a89..1cf143f5df2e9 100644
|
|
--- a/net/netfilter/ipset/ip_set_core.c
|
|
+++ b/net/netfilter/ipset/ip_set_core.c
|
|
@@ -811,20 +811,9 @@ static struct nlmsghdr *
|
|
start_msg(struct sk_buff *skb, u32 portid, u32 seq, unsigned int flags,
|
|
enum ipset_cmd cmd)
|
|
{
|
|
- struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
-
|
|
- nlh = nlmsg_put(skb, portid, seq, nfnl_msg_type(NFNL_SUBSYS_IPSET, cmd),
|
|
- sizeof(*nfmsg), flags);
|
|
- if (!nlh)
|
|
- return NULL;
|
|
-
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = NFPROTO_IPV4;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
- return nlh;
|
|
+ return nfnl_msg_put(skb, portid, seq,
|
|
+ nfnl_msg_type(NFNL_SUBSYS_IPSET, cmd), flags,
|
|
+ NFPROTO_IPV4, NFNETLINK_V0, 0);
|
|
}
|
|
|
|
/* Create a set */
|
|
diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
|
|
index 118f415928aef..32cc91f5ba99f 100644
|
|
--- a/net/netfilter/nf_conntrack_helper.c
|
|
+++ b/net/netfilter/nf_conntrack_helper.c
|
|
@@ -404,6 +404,9 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me)
|
|
BUG_ON(me->expect_class_max >= NF_CT_MAX_EXPECT_CLASSES);
|
|
BUG_ON(strlen(me->name) > NF_CT_HELPER_NAME_LEN - 1);
|
|
|
|
+ if (!nf_ct_helper_hash)
|
|
+ return -ENOENT;
|
|
+
|
|
if (me->expect_policy->max_expected > NF_CT_EXPECT_MAX_CNT)
|
|
return -EINVAL;
|
|
|
|
@@ -587,4 +590,5 @@ void nf_conntrack_helper_fini(void)
|
|
{
|
|
nf_ct_extend_unregister(&helper_extend);
|
|
kvfree(nf_ct_helper_hash);
|
|
+ nf_ct_helper_hash = NULL;
|
|
}
|
|
diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
|
|
index 4328d10ad1bc3..45d02185f4b92 100644
|
|
--- a/net/netfilter/nf_conntrack_netlink.c
|
|
+++ b/net/netfilter/nf_conntrack_netlink.c
|
|
@@ -515,20 +515,15 @@ ctnetlink_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
|
|
{
|
|
const struct nf_conntrack_zone *zone;
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
struct nlattr *nest_parms;
|
|
unsigned int flags = portid ? NLM_F_MULTI : 0, event;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, IPCTNL_MSG_CT_NEW);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, nf_ct_l3num(ct),
|
|
+ NFNETLINK_V0, 0);
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = nf_ct_l3num(ct);
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
zone = nf_ct_zone(ct);
|
|
|
|
nest_parms = nla_nest_start(skb, CTA_TUPLE_ORIG);
|
|
@@ -685,7 +680,6 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
|
|
const struct nf_conntrack_zone *zone;
|
|
struct net *net;
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
struct nlattr *nest_parms;
|
|
struct nf_conn *ct = item->ct;
|
|
struct sk_buff *skb;
|
|
@@ -715,15 +709,11 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
|
|
goto errout;
|
|
|
|
type = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, type);
|
|
- nlh = nlmsg_put(skb, item->portid, 0, type, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, item->portid, 0, type, flags, nf_ct_l3num(ct),
|
|
+ NFNETLINK_V0, 0);
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = nf_ct_l3num(ct);
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
zone = nf_ct_zone(ct);
|
|
|
|
nest_parms = nla_nest_start(skb, CTA_TUPLE_ORIG);
|
|
@@ -2200,20 +2190,15 @@ ctnetlink_ct_stat_cpu_fill_info(struct sk_buff *skb, u32 portid, u32 seq,
|
|
__u16 cpu, const struct ip_conntrack_stat *st)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
unsigned int flags = portid ? NLM_F_MULTI : 0, event;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK,
|
|
IPCTNL_MSG_CT_GET_STATS_CPU);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
|
|
+ NFNETLINK_V0, htons(cpu));
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = AF_UNSPEC;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(cpu);
|
|
-
|
|
if (nla_put_be32(skb, CTA_STATS_FOUND, htonl(st->found)) ||
|
|
nla_put_be32(skb, CTA_STATS_INVALID, htonl(st->invalid)) ||
|
|
nla_put_be32(skb, CTA_STATS_IGNORE, htonl(st->ignore)) ||
|
|
@@ -2284,20 +2269,15 @@ ctnetlink_stat_ct_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
|
|
struct net *net)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
unsigned int flags = portid ? NLM_F_MULTI : 0, event;
|
|
unsigned int nr_conntracks = atomic_read(&net->ct.count);
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, IPCTNL_MSG_CT_GET_STATS);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
|
|
+ NFNETLINK_V0, 0);
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = AF_UNSPEC;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
if (nla_put_be32(skb, CTA_STATS_GLOBAL_ENTRIES, htonl(nr_conntracks)))
|
|
goto nla_put_failure;
|
|
|
|
@@ -2803,19 +2783,14 @@ ctnetlink_exp_fill_info(struct sk_buff *skb, u32 portid, u32 seq,
|
|
int event, const struct nf_conntrack_expect *exp)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
unsigned int flags = portid ? NLM_F_MULTI : 0;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_EXP, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags,
|
|
+ exp->tuple.src.l3num, NFNETLINK_V0, 0);
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = exp->tuple.src.l3num;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
if (ctnetlink_exp_dump_expect(skb, exp) < 0)
|
|
goto nla_put_failure;
|
|
|
|
@@ -2835,7 +2810,6 @@ ctnetlink_expect_event(unsigned int events, struct nf_exp_event *item)
|
|
struct nf_conntrack_expect *exp = item->exp;
|
|
struct net *net = nf_ct_exp_net(exp);
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
struct sk_buff *skb;
|
|
unsigned int type, group;
|
|
int flags = 0;
|
|
@@ -2858,15 +2832,11 @@ ctnetlink_expect_event(unsigned int events, struct nf_exp_event *item)
|
|
goto errout;
|
|
|
|
type = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_EXP, type);
|
|
- nlh = nlmsg_put(skb, item->portid, 0, type, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, item->portid, 0, type, flags,
|
|
+ exp->tuple.src.l3num, NFNETLINK_V0, 0);
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = exp->tuple.src.l3num;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
if (ctnetlink_exp_dump_expect(skb, exp) < 0)
|
|
goto nla_put_failure;
|
|
|
|
@@ -3436,20 +3406,15 @@ ctnetlink_exp_stat_fill_info(struct sk_buff *skb, u32 portid, u32 seq, int cpu,
|
|
const struct ip_conntrack_stat *st)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
unsigned int flags = portid ? NLM_F_MULTI : 0, event;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK,
|
|
IPCTNL_MSG_EXP_GET_STATS_CPU);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
|
|
+ NFNETLINK_V0, htons(cpu));
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = AF_UNSPEC;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(cpu);
|
|
-
|
|
if (nla_put_be32(skb, CTA_STATS_EXP_NEW, htonl(st->expect_new)) ||
|
|
nla_put_be32(skb, CTA_STATS_EXP_CREATE, htonl(st->expect_create)) ||
|
|
nla_put_be32(skb, CTA_STATS_EXP_DELETE, htonl(st->expect_delete)))
|
|
diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c
|
|
index b3f4a334f9d78..67b8dedef2935 100644
|
|
--- a/net/netfilter/nf_conntrack_proto_dccp.c
|
|
+++ b/net/netfilter/nf_conntrack_proto_dccp.c
|
|
@@ -430,9 +430,19 @@ static bool dccp_error(const struct dccp_hdr *dh,
|
|
struct sk_buff *skb, unsigned int dataoff,
|
|
const struct nf_hook_state *state)
|
|
{
|
|
+ static const unsigned long require_seq48 = 1 << DCCP_PKT_REQUEST |
|
|
+ 1 << DCCP_PKT_RESPONSE |
|
|
+ 1 << DCCP_PKT_CLOSEREQ |
|
|
+ 1 << DCCP_PKT_CLOSE |
|
|
+ 1 << DCCP_PKT_RESET |
|
|
+ 1 << DCCP_PKT_SYNC |
|
|
+ 1 << DCCP_PKT_SYNCACK;
|
|
unsigned int dccp_len = skb->len - dataoff;
|
|
unsigned int cscov;
|
|
const char *msg;
|
|
+ u8 type;
|
|
+
|
|
+ BUILD_BUG_ON(DCCP_PKT_INVALID >= BITS_PER_LONG);
|
|
|
|
if (dh->dccph_doff * 4 < sizeof(struct dccp_hdr) ||
|
|
dh->dccph_doff * 4 > dccp_len) {
|
|
@@ -457,10 +467,17 @@ static bool dccp_error(const struct dccp_hdr *dh,
|
|
goto out_invalid;
|
|
}
|
|
|
|
- if (dh->dccph_type >= DCCP_PKT_INVALID) {
|
|
+ type = dh->dccph_type;
|
|
+ if (type >= DCCP_PKT_INVALID) {
|
|
msg = "nf_ct_dccp: reserved packet type ";
|
|
goto out_invalid;
|
|
}
|
|
+
|
|
+ if (test_bit(type, &require_seq48) && !dh->dccph_x) {
|
|
+ msg = "nf_ct_dccp: type lacks 48bit sequence numbers";
|
|
+ goto out_invalid;
|
|
+ }
|
|
+
|
|
return false;
|
|
out_invalid:
|
|
nf_l4proto_log_invalid(skb, state->net, state->pf,
|
|
@@ -468,24 +485,53 @@ out_invalid:
|
|
return true;
|
|
}
|
|
|
|
+struct nf_conntrack_dccp_buf {
|
|
+ struct dccp_hdr dh; /* generic header part */
|
|
+ struct dccp_hdr_ext ext; /* optional depending dh->dccph_x */
|
|
+ union { /* depends on header type */
|
|
+ struct dccp_hdr_ack_bits ack;
|
|
+ struct dccp_hdr_request req;
|
|
+ struct dccp_hdr_response response;
|
|
+ struct dccp_hdr_reset rst;
|
|
+ } u;
|
|
+};
|
|
+
|
|
+static struct dccp_hdr *
|
|
+dccp_header_pointer(const struct sk_buff *skb, int offset, const struct dccp_hdr *dh,
|
|
+ struct nf_conntrack_dccp_buf *buf)
|
|
+{
|
|
+ unsigned int hdrlen = __dccp_hdr_len(dh);
|
|
+
|
|
+ if (hdrlen > sizeof(*buf))
|
|
+ return NULL;
|
|
+
|
|
+ return skb_header_pointer(skb, offset, hdrlen, buf);
|
|
+}
|
|
+
|
|
int nf_conntrack_dccp_packet(struct nf_conn *ct, struct sk_buff *skb,
|
|
unsigned int dataoff,
|
|
enum ip_conntrack_info ctinfo,
|
|
const struct nf_hook_state *state)
|
|
{
|
|
enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo);
|
|
- struct dccp_hdr _dh, *dh;
|
|
+ struct nf_conntrack_dccp_buf _dh;
|
|
u_int8_t type, old_state, new_state;
|
|
enum ct_dccp_roles role;
|
|
unsigned int *timeouts;
|
|
+ struct dccp_hdr *dh;
|
|
|
|
- dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh);
|
|
+ dh = skb_header_pointer(skb, dataoff, sizeof(*dh), &_dh.dh);
|
|
if (!dh)
|
|
return NF_DROP;
|
|
|
|
if (dccp_error(dh, skb, dataoff, state))
|
|
return -NF_ACCEPT;
|
|
|
|
+ /* pull again, including possible 48 bit sequences and subtype header */
|
|
+ dh = dccp_header_pointer(skb, dataoff, dh, &_dh);
|
|
+ if (!dh)
|
|
+ return NF_DROP;
|
|
+
|
|
type = dh->dccph_type;
|
|
if (!nf_ct_is_confirmed(ct) && !dccp_new(ct, skb, dh))
|
|
return -NF_ACCEPT;
|
|
diff --git a/net/netfilter/nf_conntrack_sip.c b/net/netfilter/nf_conntrack_sip.c
|
|
index 78fd9122b70c7..751df19fe0f8a 100644
|
|
--- a/net/netfilter/nf_conntrack_sip.c
|
|
+++ b/net/netfilter/nf_conntrack_sip.c
|
|
@@ -611,7 +611,7 @@ int ct_sip_parse_numerical_param(const struct nf_conn *ct, const char *dptr,
|
|
start += strlen(name);
|
|
*val = simple_strtoul(start, &end, 0);
|
|
if (start == end)
|
|
- return 0;
|
|
+ return -1;
|
|
if (matchoff && matchlen) {
|
|
*matchoff = start - dptr;
|
|
*matchlen = end - start;
|
|
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
|
|
index 914fbd9ecef96..7d22bc8aa2787 100644
|
|
--- a/net/netfilter/nf_tables_api.c
|
|
+++ b/net/netfilter/nf_tables_api.c
|
|
@@ -20,10 +20,13 @@
|
|
#include <net/netfilter/nf_tables.h>
|
|
#include <net/netfilter/nf_tables_offload.h>
|
|
#include <net/net_namespace.h>
|
|
+#include <net/netns/generic.h>
|
|
#include <net/sock.h>
|
|
|
|
#define NFT_MODULE_AUTOLOAD_LIMIT (MODULE_NAME_LEN - sizeof("nft-expr-255-"))
|
|
|
|
+unsigned int nf_tables_net_id __read_mostly;
|
|
+
|
|
static LIST_HEAD(nf_tables_expressions);
|
|
static LIST_HEAD(nf_tables_objects);
|
|
static LIST_HEAD(nf_tables_flowtables);
|
|
@@ -67,7 +70,9 @@ static const struct rhashtable_params nft_objname_ht_params = {
|
|
|
|
static void nft_validate_state_update(struct net *net, u8 new_validate_state)
|
|
{
|
|
- switch (net->nft.validate_state) {
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
+
|
|
+ switch (nft_net->validate_state) {
|
|
case NFT_VALIDATE_SKIP:
|
|
WARN_ON_ONCE(new_validate_state == NFT_VALIDATE_DO);
|
|
break;
|
|
@@ -78,7 +83,7 @@ static void nft_validate_state_update(struct net *net, u8 new_validate_state)
|
|
return;
|
|
}
|
|
|
|
- net->nft.validate_state = new_validate_state;
|
|
+ nft_net->validate_state = new_validate_state;
|
|
}
|
|
static void nf_tables_trans_destroy_work(struct work_struct *w);
|
|
static DECLARE_WORK(trans_destroy_work, nf_tables_trans_destroy_work);
|
|
@@ -114,6 +119,7 @@ static struct nft_trans *nft_trans_alloc_gfp(const struct nft_ctx *ctx,
|
|
return NULL;
|
|
|
|
INIT_LIST_HEAD(&trans->list);
|
|
+ INIT_LIST_HEAD(&trans->binding_list);
|
|
trans->msg_type = msg_type;
|
|
trans->ctx = *ctx;
|
|
|
|
@@ -126,34 +132,67 @@ static struct nft_trans *nft_trans_alloc(const struct nft_ctx *ctx,
|
|
return nft_trans_alloc_gfp(ctx, msg_type, size, GFP_KERNEL);
|
|
}
|
|
|
|
-static void nft_trans_destroy(struct nft_trans *trans)
|
|
+static void nft_trans_list_del(struct nft_trans *trans)
|
|
{
|
|
list_del(&trans->list);
|
|
+ list_del(&trans->binding_list);
|
|
+}
|
|
+
|
|
+static void nft_trans_destroy(struct nft_trans *trans)
|
|
+{
|
|
+ nft_trans_list_del(trans);
|
|
kfree(trans);
|
|
}
|
|
|
|
-static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
|
|
+static void __nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set,
|
|
+ bool bind)
|
|
{
|
|
+ struct nftables_pernet *nft_net;
|
|
struct net *net = ctx->net;
|
|
struct nft_trans *trans;
|
|
|
|
if (!nft_set_is_anonymous(set))
|
|
return;
|
|
|
|
- list_for_each_entry_reverse(trans, &net->nft.commit_list, list) {
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ list_for_each_entry_reverse(trans, &nft_net->commit_list, list) {
|
|
switch (trans->msg_type) {
|
|
case NFT_MSG_NEWSET:
|
|
if (nft_trans_set(trans) == set)
|
|
- nft_trans_set_bound(trans) = true;
|
|
+ nft_trans_set_bound(trans) = bind;
|
|
break;
|
|
case NFT_MSG_NEWSETELEM:
|
|
if (nft_trans_elem_set(trans) == set)
|
|
- nft_trans_elem_set_bound(trans) = true;
|
|
+ nft_trans_elem_set_bound(trans) = bind;
|
|
break;
|
|
}
|
|
}
|
|
}
|
|
|
|
+static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
|
|
+{
|
|
+ return __nft_set_trans_bind(ctx, set, true);
|
|
+}
|
|
+
|
|
+static void nft_set_trans_unbind(const struct nft_ctx *ctx, struct nft_set *set)
|
|
+{
|
|
+ return __nft_set_trans_bind(ctx, set, false);
|
|
+}
|
|
+
|
|
+static void nft_trans_commit_list_add_tail(struct net *net, struct nft_trans *trans)
|
|
+{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
+
|
|
+ switch (trans->msg_type) {
|
|
+ case NFT_MSG_NEWSET:
|
|
+ if (nft_set_is_anonymous(nft_trans_set(trans)))
|
|
+ list_add_tail(&trans->binding_list, &nft_net->binding_list);
|
|
+ break;
|
|
+ }
|
|
+
|
|
+ list_add_tail(&trans->list, &nft_net->commit_list);
|
|
+}
|
|
+
|
|
static int nf_tables_register_hook(struct net *net,
|
|
const struct nft_table *table,
|
|
struct nft_chain *chain)
|
|
@@ -204,7 +243,7 @@ static int nft_trans_table_add(struct nft_ctx *ctx, int msg_type)
|
|
if (msg_type == NFT_MSG_NEWTABLE)
|
|
nft_activate_next(ctx->net, ctx->table);
|
|
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
return 0;
|
|
}
|
|
|
|
@@ -231,7 +270,7 @@ static struct nft_trans *nft_trans_chain_add(struct nft_ctx *ctx, int msg_type)
|
|
if (msg_type == NFT_MSG_NEWCHAIN)
|
|
nft_activate_next(ctx->net, ctx->chain);
|
|
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
return trans;
|
|
}
|
|
|
|
@@ -304,7 +343,7 @@ static struct nft_trans *nft_trans_rule_add(struct nft_ctx *ctx, int msg_type,
|
|
ntohl(nla_get_be32(ctx->nla[NFTA_RULE_ID]));
|
|
}
|
|
nft_trans_rule(trans) = rule;
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
|
|
return trans;
|
|
}
|
|
@@ -359,7 +398,7 @@ static int nft_trans_set_add(const struct nft_ctx *ctx, int msg_type,
|
|
nft_activate_next(ctx->net, set);
|
|
}
|
|
nft_trans_set(trans) = set;
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
|
|
return 0;
|
|
}
|
|
@@ -391,7 +430,7 @@ static int nft_trans_obj_add(struct nft_ctx *ctx, int msg_type,
|
|
nft_activate_next(ctx->net, obj);
|
|
|
|
nft_trans_obj(trans) = obj;
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
|
|
return 0;
|
|
}
|
|
@@ -424,7 +463,7 @@ static int nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type,
|
|
nft_activate_next(ctx->net, flowtable);
|
|
|
|
nft_trans_flowtable(trans) = flowtable;
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
|
|
return 0;
|
|
}
|
|
@@ -452,13 +491,15 @@ static struct nft_table *nft_table_lookup(const struct net *net,
|
|
const struct nlattr *nla,
|
|
u8 family, u8 genmask)
|
|
{
|
|
+ struct nftables_pernet *nft_net;
|
|
struct nft_table *table;
|
|
|
|
if (nla == NULL)
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
- list_for_each_entry_rcu(table, &net->nft.tables, list,
|
|
- lockdep_is_held(&net->nft.commit_mutex)) {
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ list_for_each_entry_rcu(table, &nft_net->tables, list,
|
|
+ lockdep_is_held(&nft_net->commit_mutex)) {
|
|
if (!nla_strcmp(nla, table->name) &&
|
|
table->family == family &&
|
|
nft_active_genmask(table, genmask))
|
|
@@ -472,9 +513,11 @@ static struct nft_table *nft_table_lookup_byhandle(const struct net *net,
|
|
const struct nlattr *nla,
|
|
u8 genmask)
|
|
{
|
|
+ struct nftables_pernet *nft_net;
|
|
struct nft_table *table;
|
|
|
|
- list_for_each_entry(table, &net->nft.tables, list) {
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ list_for_each_entry(table, &nft_net->tables, list) {
|
|
if (be64_to_cpu(nla_get_be64(nla)) == table->handle &&
|
|
nft_active_genmask(table, genmask))
|
|
return table;
|
|
@@ -526,6 +569,7 @@ struct nft_module_request {
|
|
static int nft_request_module(struct net *net, const char *fmt, ...)
|
|
{
|
|
char module_name[MODULE_NAME_LEN];
|
|
+ struct nftables_pernet *nft_net;
|
|
struct nft_module_request *req;
|
|
va_list args;
|
|
int ret;
|
|
@@ -536,7 +580,8 @@ static int nft_request_module(struct net *net, const char *fmt, ...)
|
|
if (ret >= MODULE_NAME_LEN)
|
|
return 0;
|
|
|
|
- list_for_each_entry(req, &net->nft.module_list, list) {
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ list_for_each_entry(req, &nft_net->module_list, list) {
|
|
if (!strcmp(req->module, module_name)) {
|
|
if (req->done)
|
|
return 0;
|
|
@@ -552,7 +597,7 @@ static int nft_request_module(struct net *net, const char *fmt, ...)
|
|
|
|
req->done = false;
|
|
strlcpy(req->module, module_name, MODULE_NAME_LEN);
|
|
- list_add_tail(&req->list, &net->nft.module_list);
|
|
+ list_add_tail(&req->list, &nft_net->module_list);
|
|
|
|
return -EAGAIN;
|
|
}
|
|
@@ -588,6 +633,13 @@ nf_tables_chain_type_lookup(struct net *net, const struct nlattr *nla,
|
|
return ERR_PTR(-ENOENT);
|
|
}
|
|
|
|
+static __be16 nft_base_seq(const struct net *net)
|
|
+{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
+
|
|
+ return htons(nft_net->base_seq & 0xffff);
|
|
+}
|
|
+
|
|
static const struct nla_policy nft_table_policy[NFTA_TABLE_MAX + 1] = {
|
|
[NFTA_TABLE_NAME] = { .type = NLA_STRING,
|
|
.len = NFT_TABLE_MAXNAMELEN - 1 },
|
|
@@ -600,18 +652,13 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
|
|
int family, const struct nft_table *table)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
|
|
+ NFNETLINK_V0, nft_base_seq(net));
|
|
+ if (!nlh)
|
|
goto nla_put_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = family;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(net->nft.base_seq & 0xffff);
|
|
-
|
|
if (nla_put_string(skb, NFTA_TABLE_NAME, table->name) ||
|
|
nla_put_be32(skb, NFTA_TABLE_FLAGS, htonl(table->flags)) ||
|
|
nla_put_be32(skb, NFTA_TABLE_USE, htonl(table->use)) ||
|
|
@@ -658,15 +705,17 @@ static int nf_tables_dump_tables(struct sk_buff *skb,
|
|
struct netlink_callback *cb)
|
|
{
|
|
const struct nfgenmsg *nfmsg = nlmsg_data(cb->nlh);
|
|
+ struct nftables_pernet *nft_net;
|
|
const struct nft_table *table;
|
|
unsigned int idx = 0, s_idx = cb->args[0];
|
|
struct net *net = sock_net(skb->sk);
|
|
int family = nfmsg->nfgen_family;
|
|
|
|
rcu_read_lock();
|
|
- cb->seq = net->nft.base_seq;
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ cb->seq = nft_net->base_seq;
|
|
|
|
- list_for_each_entry_rcu(table, &net->nft.tables, list) {
|
|
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
|
|
if (family != NFPROTO_UNSPEC && family != table->family)
|
|
continue;
|
|
|
|
@@ -770,7 +819,7 @@ static void nft_table_disable(struct net *net, struct nft_table *table, u32 cnt)
|
|
if (cnt && i++ == cnt)
|
|
break;
|
|
|
|
- nf_unregister_net_hook(net, &nft_base_chain(chain)->ops);
|
|
+ nf_tables_unregister_hook(net, table, chain);
|
|
}
|
|
}
|
|
|
|
@@ -785,7 +834,7 @@ static int nf_tables_table_enable(struct net *net, struct nft_table *table)
|
|
if (!nft_is_base_chain(chain))
|
|
continue;
|
|
|
|
- err = nf_register_net_hook(net, &nft_base_chain(chain)->ops);
|
|
+ err = nf_tables_register_hook(net, table, chain);
|
|
if (err < 0)
|
|
goto err;
|
|
|
|
@@ -829,17 +878,18 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
|
|
nft_trans_table_enable(trans) = false;
|
|
} else if (!(flags & NFT_TABLE_F_DORMANT) &&
|
|
ctx->table->flags & NFT_TABLE_F_DORMANT) {
|
|
+ ctx->table->flags &= ~NFT_TABLE_F_DORMANT;
|
|
ret = nf_tables_table_enable(ctx->net, ctx->table);
|
|
- if (ret >= 0) {
|
|
- ctx->table->flags &= ~NFT_TABLE_F_DORMANT;
|
|
+ if (ret >= 0)
|
|
nft_trans_table_enable(trans) = true;
|
|
- }
|
|
+ else
|
|
+ ctx->table->flags |= NFT_TABLE_F_DORMANT;
|
|
}
|
|
if (ret < 0)
|
|
goto err;
|
|
|
|
nft_trans_table_update(trans) = true;
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
return 0;
|
|
err:
|
|
nft_trans_destroy(trans);
|
|
@@ -902,6 +952,7 @@ static int nf_tables_newtable(struct net *net, struct sock *nlsk,
|
|
const struct nlattr * const nla[],
|
|
struct netlink_ext_ack *extack)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
|
|
u8 genmask = nft_genmask_next(net);
|
|
int family = nfmsg->nfgen_family;
|
|
@@ -911,7 +962,7 @@ static int nf_tables_newtable(struct net *net, struct sock *nlsk,
|
|
struct nft_ctx ctx;
|
|
int err;
|
|
|
|
- lockdep_assert_held(&net->nft.commit_mutex);
|
|
+ lockdep_assert_held(&nft_net->commit_mutex);
|
|
attr = nla[NFTA_TABLE_NAME];
|
|
table = nft_table_lookup(net, attr, family, genmask);
|
|
if (IS_ERR(table)) {
|
|
@@ -961,7 +1012,7 @@ static int nf_tables_newtable(struct net *net, struct sock *nlsk,
|
|
if (err < 0)
|
|
goto err_trans;
|
|
|
|
- list_add_tail_rcu(&table->list, &net->nft.tables);
|
|
+ list_add_tail_rcu(&table->list, &nft_net->tables);
|
|
return 0;
|
|
err_trans:
|
|
rhltable_destroy(&table->chains_ht);
|
|
@@ -1041,11 +1092,12 @@ out:
|
|
|
|
static int nft_flush(struct nft_ctx *ctx, int family)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
|
|
struct nft_table *table, *nt;
|
|
const struct nlattr * const *nla = ctx->nla;
|
|
int err = 0;
|
|
|
|
- list_for_each_entry_safe(table, nt, &ctx->net->nft.tables, list) {
|
|
+ list_for_each_entry_safe(table, nt, &nft_net->tables, list) {
|
|
if (family != AF_UNSPEC && table->family != family)
|
|
continue;
|
|
|
|
@@ -1159,7 +1211,9 @@ nft_chain_lookup_byhandle(const struct nft_table *table, u64 handle, u8 genmask)
|
|
static bool lockdep_commit_lock_is_held(const struct net *net)
|
|
{
|
|
#ifdef CONFIG_PROVE_LOCKING
|
|
- return lockdep_is_held(&net->nft.commit_mutex);
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
+
|
|
+ return lockdep_is_held(&nft_net->commit_mutex);
|
|
#else
|
|
return true;
|
|
#endif
|
|
@@ -1263,18 +1317,13 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
|
|
const struct nft_chain *chain)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
|
|
+ NFNETLINK_V0, nft_base_seq(net));
|
|
+ if (!nlh)
|
|
goto nla_put_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = family;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(net->nft.base_seq & 0xffff);
|
|
-
|
|
if (nla_put_string(skb, NFTA_CHAIN_TABLE, table->name))
|
|
goto nla_put_failure;
|
|
if (nla_put_be64(skb, NFTA_CHAIN_HANDLE, cpu_to_be64(chain->handle),
|
|
@@ -1367,11 +1416,13 @@ static int nf_tables_dump_chains(struct sk_buff *skb,
|
|
unsigned int idx = 0, s_idx = cb->args[0];
|
|
struct net *net = sock_net(skb->sk);
|
|
int family = nfmsg->nfgen_family;
|
|
+ struct nftables_pernet *nft_net;
|
|
|
|
rcu_read_lock();
|
|
- cb->seq = net->nft.base_seq;
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ cb->seq = nft_net->base_seq;
|
|
|
|
- list_for_each_entry_rcu(table, &net->nft.tables, list) {
|
|
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
|
|
if (family != NFPROTO_UNSPEC && family != table->family)
|
|
continue;
|
|
|
|
@@ -1557,12 +1608,13 @@ static int nft_chain_parse_hook(struct net *net,
|
|
struct nft_chain_hook *hook, u8 family,
|
|
bool autoload)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nlattr *ha[NFTA_HOOK_MAX + 1];
|
|
const struct nft_chain_type *type;
|
|
struct net_device *dev;
|
|
int err;
|
|
|
|
- lockdep_assert_held(&net->nft.commit_mutex);
|
|
+ lockdep_assert_held(&nft_net->commit_mutex);
|
|
lockdep_nfnl_nft_mutex_not_held();
|
|
|
|
err = nla_parse_nested_deprecated(ha, NFTA_HOOK_MAX,
|
|
@@ -1847,6 +1899,7 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
|
|
|
|
if (nla[NFTA_CHAIN_HANDLE] &&
|
|
nla[NFTA_CHAIN_NAME]) {
|
|
+ struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
|
|
struct nft_trans *tmp;
|
|
char *name;
|
|
|
|
@@ -1856,7 +1909,7 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
|
|
goto err;
|
|
|
|
err = -EEXIST;
|
|
- list_for_each_entry(tmp, &ctx->net->nft.commit_list, list) {
|
|
+ list_for_each_entry(tmp, &nft_net->commit_list, list) {
|
|
if (tmp->msg_type == NFT_MSG_NEWCHAIN &&
|
|
tmp->ctx.table == table &&
|
|
nft_trans_chain_update(tmp) &&
|
|
@@ -1869,7 +1922,7 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
|
|
|
|
nft_trans_chain_name(trans) = name;
|
|
}
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
|
|
return 0;
|
|
err:
|
|
@@ -1883,6 +1936,7 @@ static int nf_tables_newchain(struct net *net, struct sock *nlsk,
|
|
const struct nlattr * const nla[],
|
|
struct netlink_ext_ack *extack)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
|
|
u8 genmask = nft_genmask_next(net);
|
|
int family = nfmsg->nfgen_family;
|
|
@@ -1894,7 +1948,7 @@ static int nf_tables_newchain(struct net *net, struct sock *nlsk,
|
|
u64 handle = 0;
|
|
u32 flags = 0;
|
|
|
|
- lockdep_assert_held(&net->nft.commit_mutex);
|
|
+ lockdep_assert_held(&nft_net->commit_mutex);
|
|
|
|
table = nft_table_lookup(net, nla[NFTA_CHAIN_TABLE], family, genmask);
|
|
if (IS_ERR(table)) {
|
|
@@ -2353,20 +2407,15 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
|
|
const struct nft_rule *prule)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
const struct nft_expr *expr, *next;
|
|
struct nlattr *list;
|
|
u16 type = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
|
|
|
|
- nlh = nlmsg_put(skb, portid, seq, type, sizeof(struct nfgenmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, type, flags, family, NFNETLINK_V0,
|
|
+ nft_base_seq(net));
|
|
+ if (!nlh)
|
|
goto nla_put_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = family;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(net->nft.base_seq & 0xffff);
|
|
-
|
|
if (nla_put_string(skb, NFTA_RULE_TABLE, table->name))
|
|
goto nla_put_failure;
|
|
if (nla_put_string(skb, NFTA_RULE_CHAIN, chain->name))
|
|
@@ -2487,11 +2536,13 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
|
|
unsigned int idx = 0;
|
|
struct net *net = sock_net(skb->sk);
|
|
int family = nfmsg->nfgen_family;
|
|
+ struct nftables_pernet *nft_net;
|
|
|
|
rcu_read_lock();
|
|
- cb->seq = net->nft.base_seq;
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ cb->seq = nft_net->base_seq;
|
|
|
|
- list_for_each_entry_rcu(table, &net->nft.tables, list) {
|
|
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
|
|
if (family != NFPROTO_UNSPEC && family != table->family)
|
|
continue;
|
|
|
|
@@ -2708,6 +2759,8 @@ static int nft_table_validate(struct net *net, const struct nft_table *table)
|
|
err = nft_chain_validate(&ctx, chain);
|
|
if (err < 0)
|
|
return err;
|
|
+
|
|
+ cond_resched();
|
|
}
|
|
|
|
return 0;
|
|
@@ -2724,6 +2777,7 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
|
|
const struct nlattr * const nla[],
|
|
struct netlink_ext_ack *extack)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
|
|
u8 genmask = nft_genmask_next(net);
|
|
struct nft_expr_info *info = NULL;
|
|
@@ -2741,7 +2795,7 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
|
|
int err, rem;
|
|
u64 handle, pos_handle;
|
|
|
|
- lockdep_assert_held(&net->nft.commit_mutex);
|
|
+ lockdep_assert_held(&nft_net->commit_mutex);
|
|
|
|
table = nft_table_lookup(net, nla[NFTA_RULE_TABLE], family, genmask);
|
|
if (IS_ERR(table)) {
|
|
@@ -2896,7 +2950,7 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
|
|
kvfree(info);
|
|
chain->use++;
|
|
|
|
- if (net->nft.validate_state == NFT_VALIDATE_DO)
|
|
+ if (nft_net->validate_state == NFT_VALIDATE_DO)
|
|
return nft_table_validate(net, table);
|
|
|
|
if (chain->flags & NFT_CHAIN_HW_OFFLOAD) {
|
|
@@ -2909,7 +2963,8 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
|
|
|
|
return 0;
|
|
err2:
|
|
- nf_tables_rule_release(&ctx, rule);
|
|
+ nft_rule_expr_deactivate(&ctx, rule, NFT_TRANS_PREPARE_ERROR);
|
|
+ nf_tables_rule_destroy(&ctx, rule);
|
|
err1:
|
|
for (i = 0; i < n; i++) {
|
|
if (info[i].ops) {
|
|
@@ -2926,10 +2981,11 @@ static struct nft_rule *nft_rule_lookup_byid(const struct net *net,
|
|
const struct nft_chain *chain,
|
|
const struct nlattr *nla)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
u32 id = ntohl(nla_get_be32(nla));
|
|
struct nft_trans *trans;
|
|
|
|
- list_for_each_entry(trans, &net->nft.commit_list, list) {
|
|
+ list_for_each_entry(trans, &nft_net->commit_list, list) {
|
|
struct nft_rule *rule = nft_trans_rule(trans);
|
|
|
|
if (trans->msg_type == NFT_MSG_NEWRULE &&
|
|
@@ -3048,12 +3104,13 @@ nft_select_set_ops(const struct nft_ctx *ctx,
|
|
const struct nft_set_desc *desc,
|
|
enum nft_set_policies policy)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
|
|
const struct nft_set_ops *ops, *bops;
|
|
struct nft_set_estimate est, best;
|
|
const struct nft_set_type *type;
|
|
u32 flags = 0;
|
|
|
|
- lockdep_assert_held(&ctx->net->nft.commit_mutex);
|
|
+ lockdep_assert_held(&nft_net->commit_mutex);
|
|
lockdep_nfnl_nft_mutex_not_held();
|
|
#ifdef CONFIG_MODULES
|
|
if (list_empty(&nf_tables_set_types)) {
|
|
@@ -3198,10 +3255,11 @@ static struct nft_set *nft_set_lookup_byid(const struct net *net,
|
|
const struct nft_table *table,
|
|
const struct nlattr *nla, u8 genmask)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nft_trans *trans;
|
|
u32 id = ntohl(nla_get_be32(nla));
|
|
|
|
- list_for_each_entry(trans, &net->nft.commit_list, list) {
|
|
+ list_for_each_entry(trans, &nft_net->commit_list, list) {
|
|
if (trans->msg_type == NFT_MSG_NEWSET) {
|
|
struct nft_set *set = nft_trans_set(trans);
|
|
|
|
@@ -3309,23 +3367,17 @@ __be64 nf_jiffies64_to_msecs(u64 input)
|
|
static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
|
|
const struct nft_set *set, u16 event, u16 flags)
|
|
{
|
|
- struct nfgenmsg *nfmsg;
|
|
struct nlmsghdr *nlh;
|
|
struct nlattr *desc;
|
|
u32 portid = ctx->portid;
|
|
u32 seq = ctx->seq;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg),
|
|
- flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, ctx->family,
|
|
+ NFNETLINK_V0, nft_base_seq(ctx->net));
|
|
+ if (!nlh)
|
|
goto nla_put_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = ctx->family;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(ctx->net->nft.base_seq & 0xffff);
|
|
-
|
|
if (nla_put_string(skb, NFTA_SET_TABLE, ctx->table->name))
|
|
goto nla_put_failure;
|
|
if (nla_put_string(skb, NFTA_SET_NAME, set->name))
|
|
@@ -3421,14 +3473,16 @@ static int nf_tables_dump_sets(struct sk_buff *skb, struct netlink_callback *cb)
|
|
struct nft_table *table, *cur_table = (struct nft_table *)cb->args[2];
|
|
struct net *net = sock_net(skb->sk);
|
|
struct nft_ctx *ctx = cb->data, ctx_set;
|
|
+ struct nftables_pernet *nft_net;
|
|
|
|
if (cb->args[1])
|
|
return skb->len;
|
|
|
|
rcu_read_lock();
|
|
- cb->seq = net->nft.base_seq;
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ cb->seq = nft_net->base_seq;
|
|
|
|
- list_for_each_entry_rcu(table, &net->nft.tables, list) {
|
|
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
|
|
if (ctx->family != NFPROTO_UNSPEC &&
|
|
ctx->family != table->family)
|
|
continue;
|
|
@@ -3929,6 +3983,15 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
|
|
enum nft_trans_phase phase)
|
|
{
|
|
switch (phase) {
|
|
+ case NFT_TRANS_PREPARE_ERROR:
|
|
+ nft_set_trans_unbind(ctx, set);
|
|
+ if (nft_set_is_anonymous(set))
|
|
+ nft_deactivate_next(ctx->net, set);
|
|
+ else
|
|
+ list_del_rcu(&binding->list);
|
|
+
|
|
+ set->use--;
|
|
+ break;
|
|
case NFT_TRANS_PREPARE:
|
|
if (nft_set_is_anonymous(set))
|
|
nft_deactivate_next(ctx->net, set);
|
|
@@ -4134,18 +4197,19 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
|
|
{
|
|
struct nft_set_dump_ctx *dump_ctx = cb->data;
|
|
struct net *net = sock_net(skb->sk);
|
|
+ struct nftables_pernet *nft_net;
|
|
struct nft_table *table;
|
|
struct nft_set *set;
|
|
struct nft_set_dump_args args;
|
|
bool set_found = false;
|
|
- struct nfgenmsg *nfmsg;
|
|
struct nlmsghdr *nlh;
|
|
struct nlattr *nest;
|
|
u32 portid, seq;
|
|
int event;
|
|
|
|
rcu_read_lock();
|
|
- list_for_each_entry_rcu(table, &net->nft.tables, list) {
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
|
|
if (dump_ctx->ctx.family != NFPROTO_UNSPEC &&
|
|
dump_ctx->ctx.family != table->family)
|
|
continue;
|
|
@@ -4171,16 +4235,11 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
|
|
portid = NETLINK_CB(cb->skb).portid;
|
|
seq = cb->nlh->nlmsg_seq;
|
|
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg),
|
|
- NLM_F_MULTI);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, NLM_F_MULTI,
|
|
+ table->family, NFNETLINK_V0, nft_base_seq(net));
|
|
+ if (!nlh)
|
|
goto nla_put_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = table->family;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(net->nft.base_seq & 0xffff);
|
|
-
|
|
if (nla_put_string(skb, NFTA_SET_ELEM_LIST_TABLE, table->name))
|
|
goto nla_put_failure;
|
|
if (nla_put_string(skb, NFTA_SET_ELEM_LIST_SET, set->name))
|
|
@@ -4237,22 +4296,16 @@ static int nf_tables_fill_setelem_info(struct sk_buff *skb,
|
|
const struct nft_set *set,
|
|
const struct nft_set_elem *elem)
|
|
{
|
|
- struct nfgenmsg *nfmsg;
|
|
struct nlmsghdr *nlh;
|
|
struct nlattr *nest;
|
|
int err;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg),
|
|
- flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, ctx->family,
|
|
+ NFNETLINK_V0, nft_base_seq(ctx->net));
|
|
+ if (!nlh)
|
|
goto nla_put_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = ctx->family;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(ctx->net->nft.base_seq & 0xffff);
|
|
-
|
|
if (nla_put_string(skb, NFTA_SET_TABLE, ctx->table->name))
|
|
goto nla_put_failure;
|
|
if (nla_put_string(skb, NFTA_SET_NAME, set->name))
|
|
@@ -4760,7 +4813,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
|
|
}
|
|
|
|
nft_trans_elem(trans) = elem;
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
return 0;
|
|
|
|
err6:
|
|
@@ -4785,6 +4838,7 @@ static int nf_tables_newsetelem(struct net *net, struct sock *nlsk,
|
|
const struct nlattr * const nla[],
|
|
struct netlink_ext_ack *extack)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
u8 genmask = nft_genmask_next(net);
|
|
const struct nlattr *attr;
|
|
struct nft_set *set;
|
|
@@ -4814,7 +4868,7 @@ static int nf_tables_newsetelem(struct net *net, struct sock *nlsk,
|
|
return err;
|
|
}
|
|
|
|
- if (net->nft.validate_state == NFT_VALIDATE_DO)
|
|
+ if (nft_net->validate_state == NFT_VALIDATE_DO)
|
|
return nft_table_validate(net, ctx.table);
|
|
|
|
return 0;
|
|
@@ -4927,7 +4981,7 @@ static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
|
|
nft_set_elem_deactivate(ctx->net, set, &elem);
|
|
|
|
nft_trans_elem(trans) = elem;
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
return 0;
|
|
|
|
fail_ops:
|
|
@@ -4961,7 +5015,7 @@ static int nft_flush_set(const struct nft_ctx *ctx,
|
|
nft_set_elem_deactivate(ctx->net, set, elem);
|
|
nft_trans_elem_set(trans) = set;
|
|
nft_trans_elem(trans) = *elem;
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
|
|
return 0;
|
|
err1:
|
|
@@ -5260,7 +5314,7 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
|
|
nft_trans_obj(trans) = obj;
|
|
nft_trans_obj_update(trans) = true;
|
|
nft_trans_obj_newobj(trans) = newobj;
|
|
- list_add_tail(&trans->list, &ctx->net->nft.commit_list);
|
|
+ nft_trans_commit_list_add_tail(ctx->net, trans);
|
|
|
|
return 0;
|
|
|
|
@@ -5371,19 +5425,14 @@ static int nf_tables_fill_obj_info(struct sk_buff *skb, struct net *net,
|
|
int family, const struct nft_table *table,
|
|
struct nft_object *obj, bool reset)
|
|
{
|
|
- struct nfgenmsg *nfmsg;
|
|
struct nlmsghdr *nlh;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
|
|
+ NFNETLINK_V0, nft_base_seq(net));
|
|
+ if (!nlh)
|
|
goto nla_put_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = family;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(net->nft.base_seq & 0xffff);
|
|
-
|
|
if (nla_put_string(skb, NFTA_OBJ_TABLE, table->name) ||
|
|
nla_put_string(skb, NFTA_OBJ_NAME, obj->key.name) ||
|
|
nla_put_be32(skb, NFTA_OBJ_TYPE, htonl(obj->ops->type->type)) ||
|
|
@@ -5414,6 +5463,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
|
|
struct nft_obj_filter *filter = cb->data;
|
|
struct net *net = sock_net(skb->sk);
|
|
int family = nfmsg->nfgen_family;
|
|
+ struct nftables_pernet *nft_net;
|
|
struct nft_object *obj;
|
|
bool reset = false;
|
|
|
|
@@ -5421,9 +5471,10 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
|
|
reset = true;
|
|
|
|
rcu_read_lock();
|
|
- cb->seq = net->nft.base_seq;
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ cb->seq = nft_net->base_seq;
|
|
|
|
- list_for_each_entry_rcu(table, &net->nft.tables, list) {
|
|
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
|
|
if (family != NFPROTO_UNSPEC && family != table->family)
|
|
continue;
|
|
|
|
@@ -5706,6 +5757,7 @@ void nf_tables_deactivate_flowtable(const struct nft_ctx *ctx,
|
|
enum nft_trans_phase phase)
|
|
{
|
|
switch (phase) {
|
|
+ case NFT_TRANS_PREPARE_ERROR:
|
|
case NFT_TRANS_PREPARE:
|
|
case NFT_TRANS_ABORT:
|
|
case NFT_TRANS_RELEASE:
|
|
@@ -6046,20 +6098,15 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
|
|
struct nft_flowtable *flowtable)
|
|
{
|
|
struct nlattr *nest, *nest_devs;
|
|
- struct nfgenmsg *nfmsg;
|
|
struct nlmsghdr *nlh;
|
|
int i;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
|
|
+ NFNETLINK_V0, nft_base_seq(net));
|
|
+ if (!nlh)
|
|
goto nla_put_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = family;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(net->nft.base_seq & 0xffff);
|
|
-
|
|
if (nla_put_string(skb, NFTA_FLOWTABLE_TABLE, flowtable->table->name) ||
|
|
nla_put_string(skb, NFTA_FLOWTABLE_NAME, flowtable->name) ||
|
|
nla_put_be32(skb, NFTA_FLOWTABLE_USE, htonl(flowtable->use)) ||
|
|
@@ -6108,13 +6155,15 @@ static int nf_tables_dump_flowtable(struct sk_buff *skb,
|
|
unsigned int idx = 0, s_idx = cb->args[0];
|
|
struct net *net = sock_net(skb->sk);
|
|
int family = nfmsg->nfgen_family;
|
|
+ struct nftables_pernet *nft_net;
|
|
struct nft_flowtable *flowtable;
|
|
const struct nft_table *table;
|
|
|
|
rcu_read_lock();
|
|
- cb->seq = net->nft.base_seq;
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ cb->seq = nft_net->base_seq;
|
|
|
|
- list_for_each_entry_rcu(table, &net->nft.tables, list) {
|
|
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
|
|
if (family != NFPROTO_UNSPEC && family != table->family)
|
|
continue;
|
|
|
|
@@ -6284,21 +6333,17 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
|
|
static int nf_tables_fill_gen_info(struct sk_buff *skb, struct net *net,
|
|
u32 portid, u32 seq)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
char buf[TASK_COMM_LEN];
|
|
int event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWGEN);
|
|
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), 0);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, 0, AF_UNSPEC,
|
|
+ NFNETLINK_V0, nft_base_seq(net));
|
|
+ if (!nlh)
|
|
goto nla_put_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = AF_UNSPEC;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(net->nft.base_seq & 0xffff);
|
|
-
|
|
- if (nla_put_be32(skb, NFTA_GEN_ID, htonl(net->nft.base_seq)) ||
|
|
+ if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_net->base_seq)) ||
|
|
nla_put_be32(skb, NFTA_GEN_PROC_PID, htonl(task_pid_nr(current))) ||
|
|
nla_put_string(skb, NFTA_GEN_PROC_NAME, get_task_comm(buf, current)))
|
|
goto nla_put_failure;
|
|
@@ -6331,6 +6376,7 @@ static int nf_tables_flowtable_event(struct notifier_block *this,
|
|
{
|
|
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
|
|
struct nft_flowtable *flowtable;
|
|
+ struct nftables_pernet *nft_net;
|
|
struct nft_table *table;
|
|
struct net *net;
|
|
|
|
@@ -6338,13 +6384,14 @@ static int nf_tables_flowtable_event(struct notifier_block *this,
|
|
return 0;
|
|
|
|
net = dev_net(dev);
|
|
- mutex_lock(&net->nft.commit_mutex);
|
|
- list_for_each_entry(table, &net->nft.tables, list) {
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ mutex_lock(&nft_net->commit_mutex);
|
|
+ list_for_each_entry(table, &nft_net->tables, list) {
|
|
list_for_each_entry(flowtable, &table->flowtables, list) {
|
|
nft_flowtable_event(event, dev, flowtable);
|
|
}
|
|
}
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
|
|
return NOTIFY_DONE;
|
|
}
|
|
@@ -6525,16 +6572,17 @@ static const struct nfnl_callback nf_tables_cb[NFT_MSG_MAX] = {
|
|
|
|
static int nf_tables_validate(struct net *net)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nft_table *table;
|
|
|
|
- switch (net->nft.validate_state) {
|
|
+ switch (nft_net->validate_state) {
|
|
case NFT_VALIDATE_SKIP:
|
|
break;
|
|
case NFT_VALIDATE_NEED:
|
|
nft_validate_state_update(net, NFT_VALIDATE_DO);
|
|
/* fall through */
|
|
case NFT_VALIDATE_DO:
|
|
- list_for_each_entry(table, &net->nft.tables, list) {
|
|
+ list_for_each_entry(table, &nft_net->tables, list) {
|
|
if (nft_table_validate(net, table) < 0)
|
|
return -EAGAIN;
|
|
}
|
|
@@ -6664,7 +6712,7 @@ static void nf_tables_trans_destroy_work(struct work_struct *w)
|
|
synchronize_rcu();
|
|
|
|
list_for_each_entry_safe(trans, next, &head, list) {
|
|
- list_del(&trans->list);
|
|
+ nft_trans_list_del(trans);
|
|
nft_commit_release(trans);
|
|
}
|
|
}
|
|
@@ -6708,9 +6756,10 @@ static int nf_tables_commit_chain_prepare(struct net *net, struct nft_chain *cha
|
|
|
|
static void nf_tables_commit_chain_prepare_cancel(struct net *net)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nft_trans *trans, *next;
|
|
|
|
- list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
|
|
+ list_for_each_entry_safe(trans, next, &nft_net->commit_list, list) {
|
|
struct nft_chain *chain = trans->ctx.chain;
|
|
|
|
if (trans->msg_type == NFT_MSG_NEWRULE ||
|
|
@@ -6808,10 +6857,11 @@ static void nft_chain_del(struct nft_chain *chain)
|
|
|
|
static void nf_tables_module_autoload_cleanup(struct net *net)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nft_module_request *req, *next;
|
|
|
|
- WARN_ON_ONCE(!list_empty(&net->nft.commit_list));
|
|
- list_for_each_entry_safe(req, next, &net->nft.module_list, list) {
|
|
+ WARN_ON_ONCE(!list_empty(&nft_net->commit_list));
|
|
+ list_for_each_entry_safe(req, next, &nft_net->module_list, list) {
|
|
WARN_ON_ONCE(!req->done);
|
|
list_del(&req->list);
|
|
kfree(req);
|
|
@@ -6820,6 +6870,7 @@ static void nf_tables_module_autoload_cleanup(struct net *net)
|
|
|
|
static void nf_tables_commit_release(struct net *net)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nft_trans *trans;
|
|
|
|
/* all side effects have to be made visible.
|
|
@@ -6829,41 +6880,54 @@ static void nf_tables_commit_release(struct net *net)
|
|
* Memory reclaim happens asynchronously from work queue
|
|
* to prevent expensive synchronize_rcu() in commit phase.
|
|
*/
|
|
- if (list_empty(&net->nft.commit_list)) {
|
|
+ if (list_empty(&nft_net->commit_list)) {
|
|
nf_tables_module_autoload_cleanup(net);
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
return;
|
|
}
|
|
|
|
- trans = list_last_entry(&net->nft.commit_list,
|
|
+ trans = list_last_entry(&nft_net->commit_list,
|
|
struct nft_trans, list);
|
|
get_net(trans->ctx.net);
|
|
WARN_ON_ONCE(trans->put_net);
|
|
|
|
trans->put_net = true;
|
|
spin_lock(&nf_tables_destroy_list_lock);
|
|
- list_splice_tail_init(&net->nft.commit_list, &nf_tables_destroy_list);
|
|
+ list_splice_tail_init(&nft_net->commit_list, &nf_tables_destroy_list);
|
|
spin_unlock(&nf_tables_destroy_list_lock);
|
|
|
|
nf_tables_module_autoload_cleanup(net);
|
|
schedule_work(&trans_destroy_work);
|
|
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
}
|
|
|
|
static int nf_tables_commit(struct net *net, struct sk_buff *skb)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nft_trans *trans, *next;
|
|
struct nft_trans_elem *te;
|
|
struct nft_chain *chain;
|
|
struct nft_table *table;
|
|
int err;
|
|
|
|
- if (list_empty(&net->nft.commit_list)) {
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
+ if (list_empty(&nft_net->commit_list)) {
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
return 0;
|
|
}
|
|
|
|
+ list_for_each_entry(trans, &nft_net->binding_list, binding_list) {
|
|
+ switch (trans->msg_type) {
|
|
+ case NFT_MSG_NEWSET:
|
|
+ if (nft_set_is_anonymous(nft_trans_set(trans)) &&
|
|
+ !nft_trans_set_bound(trans)) {
|
|
+ pr_warn_once("nftables ruleset with unbound set\n");
|
|
+ return -EINVAL;
|
|
+ }
|
|
+ break;
|
|
+ }
|
|
+ }
|
|
+
|
|
/* 0. Validate ruleset, otherwise roll back for error reporting. */
|
|
if (nf_tables_validate(net) < 0)
|
|
return -EAGAIN;
|
|
@@ -6873,7 +6937,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
|
|
return err;
|
|
|
|
/* 1. Allocate space for next generation rules_gen_X[] */
|
|
- list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
|
|
+ list_for_each_entry_safe(trans, next, &nft_net->commit_list, list) {
|
|
int ret;
|
|
|
|
if (trans->msg_type == NFT_MSG_NEWRULE ||
|
|
@@ -6889,7 +6953,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
|
|
}
|
|
|
|
/* step 2. Make rules_gen_X visible to packet path */
|
|
- list_for_each_entry(table, &net->nft.tables, list) {
|
|
+ list_for_each_entry(table, &nft_net->tables, list) {
|
|
list_for_each_entry(chain, &table->chains, list)
|
|
nf_tables_commit_chain(net, chain);
|
|
}
|
|
@@ -6898,12 +6962,13 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
|
|
* Bump generation counter, invalidate any dump in progress.
|
|
* Cannot fail after this point.
|
|
*/
|
|
- while (++net->nft.base_seq == 0);
|
|
+ while (++nft_net->base_seq == 0)
|
|
+ ;
|
|
|
|
/* step 3. Start new generation, rules_gen_X now in use. */
|
|
net->nft.gencursor = nft_gencursor_next(net);
|
|
|
|
- list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
|
|
+ list_for_each_entry_safe(trans, next, &nft_net->commit_list, list) {
|
|
switch (trans->msg_type) {
|
|
case NFT_MSG_NEWTABLE:
|
|
if (nft_trans_table_update(trans)) {
|
|
@@ -7045,17 +7110,18 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
|
|
|
|
static void nf_tables_module_autoload(struct net *net)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nft_module_request *req, *next;
|
|
LIST_HEAD(module_list);
|
|
|
|
- list_splice_init(&net->nft.module_list, &module_list);
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
+ list_splice_init(&nft_net->module_list, &module_list);
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
list_for_each_entry_safe(req, next, &module_list, list) {
|
|
request_module("%s", req->module);
|
|
req->done = true;
|
|
}
|
|
- mutex_lock(&net->nft.commit_mutex);
|
|
- list_splice(&module_list, &net->nft.module_list);
|
|
+ mutex_lock(&nft_net->commit_mutex);
|
|
+ list_splice(&module_list, &nft_net->module_list);
|
|
}
|
|
|
|
static void nf_tables_abort_release(struct nft_trans *trans)
|
|
@@ -7089,6 +7155,7 @@ static void nf_tables_abort_release(struct nft_trans *trans)
|
|
|
|
static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nft_trans *trans, *next;
|
|
struct nft_trans_elem *te;
|
|
|
|
@@ -7096,7 +7163,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
|
|
nf_tables_validate(net) < 0)
|
|
return -EAGAIN;
|
|
|
|
- list_for_each_entry_safe_reverse(trans, next, &net->nft.commit_list,
|
|
+ list_for_each_entry_safe_reverse(trans, next, &nft_net->commit_list,
|
|
list) {
|
|
switch (trans->msg_type) {
|
|
case NFT_MSG_NEWTABLE:
|
|
@@ -7208,8 +7275,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
|
|
synchronize_rcu();
|
|
|
|
list_for_each_entry_safe_reverse(trans, next,
|
|
- &net->nft.commit_list, list) {
|
|
- list_del(&trans->list);
|
|
+ &nft_net->commit_list, list) {
|
|
+ nft_trans_list_del(trans);
|
|
nf_tables_abort_release(trans);
|
|
}
|
|
|
|
@@ -7224,22 +7291,24 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
|
|
static int nf_tables_abort(struct net *net, struct sk_buff *skb,
|
|
enum nfnl_abort_action action)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
int ret = __nf_tables_abort(net, action);
|
|
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
|
|
return ret;
|
|
}
|
|
|
|
static bool nf_tables_valid_genid(struct net *net, u32 genid)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
bool genid_ok;
|
|
|
|
- mutex_lock(&net->nft.commit_mutex);
|
|
+ mutex_lock(&nft_net->commit_mutex);
|
|
|
|
- genid_ok = genid == 0 || net->nft.base_seq == genid;
|
|
+ genid_ok = genid == 0 || nft_net->base_seq == genid;
|
|
if (!genid_ok)
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
|
|
/* else, commit mutex has to be released by commit or abort function */
|
|
return genid_ok;
|
|
@@ -7586,6 +7655,9 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
|
|
|
|
if (!tb[NFTA_VERDICT_CODE])
|
|
return -EINVAL;
|
|
+
|
|
+ /* zero padding hole for memcmp */
|
|
+ memset(data, 0, sizeof(*data));
|
|
data->verdict.code = ntohl(nla_get_be32(tb[NFTA_VERDICT_CODE]));
|
|
|
|
switch (data->verdict.code) {
|
|
@@ -7796,19 +7868,19 @@ EXPORT_SYMBOL_GPL(__nft_release_basechain);
|
|
|
|
static void __nft_release_hooks(struct net *net)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nft_table *table;
|
|
struct nft_chain *chain;
|
|
|
|
- list_for_each_entry(table, &net->nft.tables, list) {
|
|
+ list_for_each_entry(table, &nft_net->tables, list) {
|
|
list_for_each_entry(chain, &table->chains, list)
|
|
nf_tables_unregister_hook(net, table, chain);
|
|
}
|
|
}
|
|
|
|
-static void __nft_release_tables(struct net *net)
|
|
+static void __nft_release_table(struct net *net, struct nft_table *table)
|
|
{
|
|
struct nft_flowtable *flowtable, *nf;
|
|
- struct nft_table *table, *nt;
|
|
struct nft_chain *chain, *nc;
|
|
struct nft_object *obj, *ne;
|
|
struct nft_rule *rule, *nr;
|
|
@@ -7818,77 +7890,94 @@ static void __nft_release_tables(struct net *net)
|
|
.family = NFPROTO_NETDEV,
|
|
};
|
|
|
|
- list_for_each_entry_safe(table, nt, &net->nft.tables, list) {
|
|
- ctx.family = table->family;
|
|
- ctx.table = table;
|
|
- list_for_each_entry(chain, &table->chains, list) {
|
|
- ctx.chain = chain;
|
|
- list_for_each_entry_safe(rule, nr, &chain->rules, list) {
|
|
- list_del(&rule->list);
|
|
- chain->use--;
|
|
- nf_tables_rule_release(&ctx, rule);
|
|
- }
|
|
- }
|
|
- list_for_each_entry_safe(flowtable, nf, &table->flowtables, list) {
|
|
- list_del(&flowtable->list);
|
|
- table->use--;
|
|
- nf_tables_flowtable_destroy(flowtable);
|
|
- }
|
|
- list_for_each_entry_safe(set, ns, &table->sets, list) {
|
|
- list_del(&set->list);
|
|
- table->use--;
|
|
- nft_set_destroy(set);
|
|
- }
|
|
- list_for_each_entry_safe(obj, ne, &table->objects, list) {
|
|
- nft_obj_del(obj);
|
|
- table->use--;
|
|
- nft_obj_destroy(&ctx, obj);
|
|
- }
|
|
- list_for_each_entry_safe(chain, nc, &table->chains, list) {
|
|
- ctx.chain = chain;
|
|
- nft_chain_del(chain);
|
|
- table->use--;
|
|
- nf_tables_chain_destroy(&ctx);
|
|
+ ctx.family = table->family;
|
|
+ ctx.table = table;
|
|
+ list_for_each_entry(chain, &table->chains, list) {
|
|
+ ctx.chain = chain;
|
|
+ list_for_each_entry_safe(rule, nr, &chain->rules, list) {
|
|
+ list_del(&rule->list);
|
|
+ chain->use--;
|
|
+ nf_tables_rule_release(&ctx, rule);
|
|
}
|
|
- list_del(&table->list);
|
|
- nf_tables_table_destroy(&ctx);
|
|
}
|
|
+ list_for_each_entry_safe(flowtable, nf, &table->flowtables, list) {
|
|
+ list_del(&flowtable->list);
|
|
+ table->use--;
|
|
+ nf_tables_flowtable_destroy(flowtable);
|
|
+ }
|
|
+ list_for_each_entry_safe(set, ns, &table->sets, list) {
|
|
+ list_del(&set->list);
|
|
+ table->use--;
|
|
+ nft_set_destroy(set);
|
|
+ }
|
|
+ list_for_each_entry_safe(obj, ne, &table->objects, list) {
|
|
+ nft_obj_del(obj);
|
|
+ table->use--;
|
|
+ nft_obj_destroy(&ctx, obj);
|
|
+ }
|
|
+ list_for_each_entry_safe(chain, nc, &table->chains, list) {
|
|
+ ctx.chain = chain;
|
|
+ nft_chain_del(chain);
|
|
+ table->use--;
|
|
+ nf_tables_chain_destroy(&ctx);
|
|
+ }
|
|
+ list_del(&table->list);
|
|
+ nf_tables_table_destroy(&ctx);
|
|
+}
|
|
+
|
|
+static void __nft_release_tables(struct net *net)
|
|
+{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
+ struct nft_table *table, *nt;
|
|
+
|
|
+ list_for_each_entry_safe(table, nt, &nft_net->tables, list)
|
|
+ __nft_release_table(net, table);
|
|
}
|
|
|
|
static int __net_init nf_tables_init_net(struct net *net)
|
|
{
|
|
- INIT_LIST_HEAD(&net->nft.tables);
|
|
- INIT_LIST_HEAD(&net->nft.commit_list);
|
|
- INIT_LIST_HEAD(&net->nft.module_list);
|
|
- mutex_init(&net->nft.commit_mutex);
|
|
- net->nft.base_seq = 1;
|
|
- net->nft.validate_state = NFT_VALIDATE_SKIP;
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
+
|
|
+ INIT_LIST_HEAD(&nft_net->tables);
|
|
+ INIT_LIST_HEAD(&nft_net->commit_list);
|
|
+ INIT_LIST_HEAD(&nft_net->binding_list);
|
|
+ INIT_LIST_HEAD(&nft_net->module_list);
|
|
+ INIT_LIST_HEAD(&nft_net->notify_list);
|
|
+ mutex_init(&nft_net->commit_mutex);
|
|
+ nft_net->base_seq = 1;
|
|
+ nft_net->validate_state = NFT_VALIDATE_SKIP;
|
|
|
|
return 0;
|
|
}
|
|
|
|
static void __net_exit nf_tables_pre_exit_net(struct net *net)
|
|
{
|
|
- mutex_lock(&net->nft.commit_mutex);
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
+
|
|
+ mutex_lock(&nft_net->commit_mutex);
|
|
__nft_release_hooks(net);
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
}
|
|
|
|
static void __net_exit nf_tables_exit_net(struct net *net)
|
|
{
|
|
- mutex_lock(&net->nft.commit_mutex);
|
|
- if (!list_empty(&net->nft.commit_list))
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
+
|
|
+ mutex_lock(&nft_net->commit_mutex);
|
|
+ if (!list_empty(&nft_net->commit_list))
|
|
__nf_tables_abort(net, NFNL_ABORT_NONE);
|
|
__nft_release_tables(net);
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
- WARN_ON_ONCE(!list_empty(&net->nft.tables));
|
|
- WARN_ON_ONCE(!list_empty(&net->nft.module_list));
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
+ WARN_ON_ONCE(!list_empty(&nft_net->tables));
|
|
+ WARN_ON_ONCE(!list_empty(&nft_net->module_list));
|
|
}
|
|
|
|
static struct pernet_operations nf_tables_net_ops = {
|
|
.init = nf_tables_init_net,
|
|
.pre_exit = nf_tables_pre_exit_net,
|
|
.exit = nf_tables_exit_net,
|
|
+ .id = &nf_tables_net_id,
|
|
+ .size = sizeof(struct nftables_pernet),
|
|
};
|
|
|
|
static int __init nf_tables_module_init(void)
|
|
diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
|
|
index 2d3bc22c855c7..1e691eff1c40d 100644
|
|
--- a/net/netfilter/nf_tables_offload.c
|
|
+++ b/net/netfilter/nf_tables_offload.c
|
|
@@ -7,6 +7,8 @@
|
|
#include <net/netfilter/nf_tables_offload.h>
|
|
#include <net/pkt_cls.h>
|
|
|
|
+extern unsigned int nf_tables_net_id;
|
|
+
|
|
static struct nft_flow_rule *nft_flow_rule_alloc(int num_actions)
|
|
{
|
|
struct nft_flow_rule *flow;
|
|
@@ -345,11 +347,12 @@ static int nft_flow_offload_chain(struct nft_chain *chain,
|
|
|
|
int nft_flow_rule_offload_commit(struct net *net)
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(net, nf_tables_net_id);
|
|
struct nft_trans *trans;
|
|
int err = 0;
|
|
u8 policy;
|
|
|
|
- list_for_each_entry(trans, &net->nft.commit_list, list) {
|
|
+ list_for_each_entry(trans, &nft_net->commit_list, list) {
|
|
if (trans->ctx.family != NFPROTO_NETDEV)
|
|
continue;
|
|
|
|
@@ -400,7 +403,7 @@ int nft_flow_rule_offload_commit(struct net *net)
|
|
break;
|
|
}
|
|
|
|
- list_for_each_entry(trans, &net->nft.commit_list, list) {
|
|
+ list_for_each_entry(trans, &nft_net->commit_list, list) {
|
|
if (trans->ctx.family != NFPROTO_NETDEV)
|
|
continue;
|
|
|
|
@@ -419,14 +422,14 @@ int nft_flow_rule_offload_commit(struct net *net)
|
|
return err;
|
|
}
|
|
|
|
-static struct nft_chain *__nft_offload_get_chain(struct net_device *dev)
|
|
+static struct nft_chain *__nft_offload_get_chain(const struct nftables_pernet *nft_net,
|
|
+ struct net_device *dev)
|
|
{
|
|
struct nft_base_chain *basechain;
|
|
- struct net *net = dev_net(dev);
|
|
const struct nft_table *table;
|
|
struct nft_chain *chain;
|
|
|
|
- list_for_each_entry(table, &net->nft.tables, list) {
|
|
+ list_for_each_entry(table, &nft_net->tables, list) {
|
|
if (table->family != NFPROTO_NETDEV)
|
|
continue;
|
|
|
|
@@ -450,18 +453,20 @@ static void nft_indr_block_cb(struct net_device *dev,
|
|
flow_indr_block_bind_cb_t *cb, void *cb_priv,
|
|
enum flow_block_command cmd)
|
|
{
|
|
+ struct nftables_pernet *nft_net;
|
|
struct net *net = dev_net(dev);
|
|
struct nft_chain *chain;
|
|
|
|
- mutex_lock(&net->nft.commit_mutex);
|
|
- chain = __nft_offload_get_chain(dev);
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ mutex_lock(&nft_net->commit_mutex);
|
|
+ chain = __nft_offload_get_chain(nft_net, dev);
|
|
if (chain && chain->flags & NFT_CHAIN_HW_OFFLOAD) {
|
|
struct nft_base_chain *basechain;
|
|
|
|
basechain = nft_base_chain(chain);
|
|
nft_indr_block_ing_cmd(dev, basechain, cb, cb_priv, cmd);
|
|
}
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
}
|
|
|
|
static void nft_offload_chain_clean(struct nft_chain *chain)
|
|
@@ -480,17 +485,19 @@ static int nft_offload_netdev_event(struct notifier_block *this,
|
|
unsigned long event, void *ptr)
|
|
{
|
|
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
|
|
+ struct nftables_pernet *nft_net;
|
|
struct net *net = dev_net(dev);
|
|
struct nft_chain *chain;
|
|
|
|
if (event != NETDEV_UNREGISTER)
|
|
return NOTIFY_DONE;
|
|
|
|
- mutex_lock(&net->nft.commit_mutex);
|
|
- chain = __nft_offload_get_chain(dev);
|
|
+ nft_net = net_generic(net, nf_tables_net_id);
|
|
+ mutex_lock(&nft_net->commit_mutex);
|
|
+ chain = __nft_offload_get_chain(nft_net, dev);
|
|
if (chain)
|
|
nft_offload_chain_clean(chain);
|
|
- mutex_unlock(&net->nft.commit_mutex);
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
|
|
return NOTIFY_DONE;
|
|
}
|
|
diff --git a/net/netfilter/nf_tables_trace.c b/net/netfilter/nf_tables_trace.c
|
|
index 87b36da5cd985..0cf3278007ba5 100644
|
|
--- a/net/netfilter/nf_tables_trace.c
|
|
+++ b/net/netfilter/nf_tables_trace.c
|
|
@@ -183,7 +183,6 @@ static bool nft_trace_have_verdict_chain(struct nft_traceinfo *info)
|
|
void nft_trace_notify(struct nft_traceinfo *info)
|
|
{
|
|
const struct nft_pktinfo *pkt = info->pkt;
|
|
- struct nfgenmsg *nfmsg;
|
|
struct nlmsghdr *nlh;
|
|
struct sk_buff *skb;
|
|
unsigned int size;
|
|
@@ -219,15 +218,11 @@ void nft_trace_notify(struct nft_traceinfo *info)
|
|
return;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_TRACE);
|
|
- nlh = nlmsg_put(skb, 0, 0, event, sizeof(struct nfgenmsg), 0);
|
|
+ nlh = nfnl_msg_put(skb, 0, 0, event, 0, info->basechain->type->family,
|
|
+ NFNETLINK_V0, 0);
|
|
if (!nlh)
|
|
goto nla_put_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = info->basechain->type->family;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
if (nla_put_be32(skb, NFTA_TRACE_NFPROTO, htonl(nft_pf(pkt))))
|
|
goto nla_put_failure;
|
|
|
|
diff --git a/net/netfilter/nfnetlink_acct.c b/net/netfilter/nfnetlink_acct.c
|
|
index 2481470dec368..4b46421c5e17a 100644
|
|
--- a/net/netfilter/nfnetlink_acct.c
|
|
+++ b/net/netfilter/nfnetlink_acct.c
|
|
@@ -132,21 +132,16 @@ nfnl_acct_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
|
|
int event, struct nf_acct *acct)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
unsigned int flags = portid ? NLM_F_MULTI : 0;
|
|
u64 pkts, bytes;
|
|
u32 old_flags;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_ACCT, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
|
|
+ NFNETLINK_V0, 0);
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = AF_UNSPEC;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
if (nla_put_string(skb, NFACCT_NAME, acct->name))
|
|
goto nla_put_failure;
|
|
|
|
diff --git a/net/netfilter/nfnetlink_cthelper.c b/net/netfilter/nfnetlink_cthelper.c
|
|
index 3d5fc07b2530b..cb9c2e9559962 100644
|
|
--- a/net/netfilter/nfnetlink_cthelper.c
|
|
+++ b/net/netfilter/nfnetlink_cthelper.c
|
|
@@ -530,20 +530,15 @@ nfnl_cthelper_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
|
|
int event, struct nf_conntrack_helper *helper)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
unsigned int flags = portid ? NLM_F_MULTI : 0;
|
|
int status;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_CTHELPER, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
|
|
+ NFNETLINK_V0, 0);
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = AF_UNSPEC;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
if (nla_put_string(skb, NFCTH_NAME, helper->name))
|
|
goto nla_put_failure;
|
|
|
|
diff --git a/net/netfilter/nfnetlink_cttimeout.c b/net/netfilter/nfnetlink_cttimeout.c
|
|
index da915c224a82d..a854e1560cc1f 100644
|
|
--- a/net/netfilter/nfnetlink_cttimeout.c
|
|
+++ b/net/netfilter/nfnetlink_cttimeout.c
|
|
@@ -160,22 +160,17 @@ ctnl_timeout_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
|
|
int event, struct ctnl_timeout *timeout)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
unsigned int flags = portid ? NLM_F_MULTI : 0;
|
|
const struct nf_conntrack_l4proto *l4proto = timeout->timeout.l4proto;
|
|
struct nlattr *nest_parms;
|
|
int ret;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_TIMEOUT, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
|
|
+ NFNETLINK_V0, 0);
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = AF_UNSPEC;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
if (nla_put_string(skb, CTA_TIMEOUT_NAME, timeout->name) ||
|
|
nla_put_be16(skb, CTA_TIMEOUT_L3PROTO,
|
|
htons(timeout->timeout.l3num)) ||
|
|
@@ -382,21 +377,16 @@ cttimeout_default_fill_info(struct net *net, struct sk_buff *skb, u32 portid,
|
|
const unsigned int *timeouts)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
unsigned int flags = portid ? NLM_F_MULTI : 0;
|
|
struct nlattr *nest_parms;
|
|
int ret;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_TIMEOUT, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, AF_UNSPEC,
|
|
+ NFNETLINK_V0, 0);
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = AF_UNSPEC;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
if (nla_put_be16(skb, CTA_TIMEOUT_L3PROTO, htons(l3num)) ||
|
|
nla_put_u8(skb, CTA_TIMEOUT_L4PROTO, l4proto->l4proto))
|
|
goto nla_put_failure;
|
|
diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c
|
|
index 33c13edbca4bb..f087baa95b07b 100644
|
|
--- a/net/netfilter/nfnetlink_log.c
|
|
+++ b/net/netfilter/nfnetlink_log.c
|
|
@@ -452,20 +452,15 @@ __build_packet_message(struct nfnl_log_net *log,
|
|
{
|
|
struct nfulnl_msg_packet_hdr pmsg;
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
sk_buff_data_t old_tail = inst->skb->tail;
|
|
struct sock *sk;
|
|
const unsigned char *hwhdrp;
|
|
|
|
- nlh = nlmsg_put(inst->skb, 0, 0,
|
|
- nfnl_msg_type(NFNL_SUBSYS_ULOG, NFULNL_MSG_PACKET),
|
|
- sizeof(struct nfgenmsg), 0);
|
|
+ nlh = nfnl_msg_put(inst->skb, 0, 0,
|
|
+ nfnl_msg_type(NFNL_SUBSYS_ULOG, NFULNL_MSG_PACKET),
|
|
+ 0, pf, NFNETLINK_V0, htons(inst->group_num));
|
|
if (!nlh)
|
|
return -1;
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = pf;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(inst->group_num);
|
|
|
|
memset(&pmsg, 0, sizeof(pmsg));
|
|
pmsg.hw_protocol = skb->protocol;
|
|
diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
|
|
index ad88904ee3f90..772f8c69818c6 100644
|
|
--- a/net/netfilter/nfnetlink_queue.c
|
|
+++ b/net/netfilter/nfnetlink_queue.c
|
|
@@ -383,7 +383,6 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
|
|
struct nlattr *nla;
|
|
struct nfqnl_msg_packet_hdr *pmsg;
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
struct sk_buff *entskb = entry->skb;
|
|
struct net_device *indev;
|
|
struct net_device *outdev;
|
|
@@ -469,18 +468,15 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
|
|
goto nlmsg_failure;
|
|
}
|
|
|
|
- nlh = nlmsg_put(skb, 0, 0,
|
|
- nfnl_msg_type(NFNL_SUBSYS_QUEUE, NFQNL_MSG_PACKET),
|
|
- sizeof(struct nfgenmsg), 0);
|
|
+ nlh = nfnl_msg_put(skb, 0, 0,
|
|
+ nfnl_msg_type(NFNL_SUBSYS_QUEUE, NFQNL_MSG_PACKET),
|
|
+ 0, entry->state.pf, NFNETLINK_V0,
|
|
+ htons(queue->queue_num));
|
|
if (!nlh) {
|
|
skb_tx_error(entskb);
|
|
kfree_skb(skb);
|
|
goto nlmsg_failure;
|
|
}
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = entry->state.pf;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = htons(queue->queue_num);
|
|
|
|
nla = __nla_reserve(skb, NFQA_PACKET_HDR, sizeof(*pmsg));
|
|
pmsg = nla_data(nla);
|
|
diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c
|
|
index 9d5947ab8d4ef..7b0b8fecb2205 100644
|
|
--- a/net/netfilter/nft_byteorder.c
|
|
+++ b/net/netfilter/nft_byteorder.c
|
|
@@ -30,11 +30,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
|
|
const struct nft_byteorder *priv = nft_expr_priv(expr);
|
|
u32 *src = &regs->data[priv->sreg];
|
|
u32 *dst = &regs->data[priv->dreg];
|
|
- union { u32 u32; u16 u16; } *s, *d;
|
|
+ u16 *s16, *d16;
|
|
unsigned int i;
|
|
|
|
- s = (void *)src;
|
|
- d = (void *)dst;
|
|
+ s16 = (void *)src;
|
|
+ d16 = (void *)dst;
|
|
|
|
switch (priv->size) {
|
|
case 8: {
|
|
@@ -61,11 +61,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
|
|
switch (priv->op) {
|
|
case NFT_BYTEORDER_NTOH:
|
|
for (i = 0; i < priv->len / 4; i++)
|
|
- d[i].u32 = ntohl((__force __be32)s[i].u32);
|
|
+ dst[i] = ntohl((__force __be32)src[i]);
|
|
break;
|
|
case NFT_BYTEORDER_HTON:
|
|
for (i = 0; i < priv->len / 4; i++)
|
|
- d[i].u32 = (__force __u32)htonl(s[i].u32);
|
|
+ dst[i] = (__force __u32)htonl(src[i]);
|
|
break;
|
|
}
|
|
break;
|
|
@@ -73,11 +73,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
|
|
switch (priv->op) {
|
|
case NFT_BYTEORDER_NTOH:
|
|
for (i = 0; i < priv->len / 2; i++)
|
|
- d[i].u16 = ntohs((__force __be16)s[i].u16);
|
|
+ d16[i] = ntohs((__force __be16)s16[i]);
|
|
break;
|
|
case NFT_BYTEORDER_HTON:
|
|
for (i = 0; i < priv->len / 2; i++)
|
|
- d[i].u16 = (__force __u16)htons(s[i].u16);
|
|
+ d16[i] = (__force __u16)htons(s16[i]);
|
|
break;
|
|
}
|
|
break;
|
|
diff --git a/net/netfilter/nft_chain_filter.c b/net/netfilter/nft_chain_filter.c
|
|
index b5d5d071d7655..04824d7dcc220 100644
|
|
--- a/net/netfilter/nft_chain_filter.c
|
|
+++ b/net/netfilter/nft_chain_filter.c
|
|
@@ -2,6 +2,7 @@
|
|
#include <linux/kernel.h>
|
|
#include <linux/netdevice.h>
|
|
#include <net/net_namespace.h>
|
|
+#include <net/netns/generic.h>
|
|
#include <net/netfilter/nf_tables.h>
|
|
#include <linux/netfilter_ipv4.h>
|
|
#include <linux/netfilter_ipv6.h>
|
|
@@ -10,6 +11,8 @@
|
|
#include <net/netfilter/nf_tables_ipv4.h>
|
|
#include <net/netfilter/nf_tables_ipv6.h>
|
|
|
|
+extern unsigned int nf_tables_net_id;
|
|
+
|
|
#ifdef CONFIG_NF_TABLES_IPV4
|
|
static unsigned int nft_do_chain_ipv4(void *priv,
|
|
struct sk_buff *skb,
|
|
@@ -315,6 +318,7 @@ static int nf_tables_netdev_event(struct notifier_block *this,
|
|
unsigned long event, void *ptr)
|
|
{
|
|
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
|
|
+ struct nftables_pernet *nft_net;
|
|
struct nft_table *table;
|
|
struct nft_chain *chain, *nr;
|
|
struct nft_ctx ctx = {
|
|
@@ -325,8 +329,9 @@ static int nf_tables_netdev_event(struct notifier_block *this,
|
|
event != NETDEV_CHANGENAME)
|
|
return NOTIFY_DONE;
|
|
|
|
- mutex_lock(&ctx.net->nft.commit_mutex);
|
|
- list_for_each_entry(table, &ctx.net->nft.tables, list) {
|
|
+ nft_net = net_generic(ctx.net, nf_tables_net_id);
|
|
+ mutex_lock(&nft_net->commit_mutex);
|
|
+ list_for_each_entry(table, &nft_net->tables, list) {
|
|
if (table->family != NFPROTO_NETDEV)
|
|
continue;
|
|
|
|
@@ -340,7 +345,7 @@ static int nf_tables_netdev_event(struct notifier_block *this,
|
|
nft_netdev_event(event, dev, &ctx);
|
|
}
|
|
}
|
|
- mutex_unlock(&ctx.net->nft.commit_mutex);
|
|
+ mutex_unlock(&nft_net->commit_mutex);
|
|
|
|
return NOTIFY_DONE;
|
|
}
|
|
diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
|
|
index bbe03b9a03b12..1c975e1d3fea2 100644
|
|
--- a/net/netfilter/nft_compat.c
|
|
+++ b/net/netfilter/nft_compat.c
|
|
@@ -591,19 +591,14 @@ nfnl_compat_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
|
|
int rev, int target)
|
|
{
|
|
struct nlmsghdr *nlh;
|
|
- struct nfgenmsg *nfmsg;
|
|
unsigned int flags = portid ? NLM_F_MULTI : 0;
|
|
|
|
event = nfnl_msg_type(NFNL_SUBSYS_NFT_COMPAT, event);
|
|
- nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
|
|
- if (nlh == NULL)
|
|
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
|
|
+ NFNETLINK_V0, 0);
|
|
+ if (!nlh)
|
|
goto nlmsg_failure;
|
|
|
|
- nfmsg = nlmsg_data(nlh);
|
|
- nfmsg->nfgen_family = family;
|
|
- nfmsg->version = NFNETLINK_V0;
|
|
- nfmsg->res_id = 0;
|
|
-
|
|
if (nla_put_string(skb, NFTA_COMPAT_NAME, name) ||
|
|
nla_put_be32(skb, NFTA_COMPAT_REV, htonl(rev)) ||
|
|
nla_put_be32(skb, NFTA_COMPAT_TYPE, htonl(target)))
|
|
diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
|
|
index 9f064f7b31d6d..8aca2fdc0664c 100644
|
|
--- a/net/netfilter/nft_dynset.c
|
|
+++ b/net/netfilter/nft_dynset.c
|
|
@@ -11,6 +11,9 @@
|
|
#include <linux/netfilter/nf_tables.h>
|
|
#include <net/netfilter/nf_tables.h>
|
|
#include <net/netfilter/nf_tables_core.h>
|
|
+#include <net/netns/generic.h>
|
|
+
|
|
+extern unsigned int nf_tables_net_id;
|
|
|
|
struct nft_dynset {
|
|
struct nft_set *set;
|
|
@@ -129,13 +132,14 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
|
|
const struct nft_expr *expr,
|
|
const struct nlattr * const tb[])
|
|
{
|
|
+ struct nftables_pernet *nft_net = net_generic(ctx->net, nf_tables_net_id);
|
|
struct nft_dynset *priv = nft_expr_priv(expr);
|
|
u8 genmask = nft_genmask_next(ctx->net);
|
|
struct nft_set *set;
|
|
u64 timeout;
|
|
int err;
|
|
|
|
- lockdep_assert_held(&ctx->net->nft.commit_mutex);
|
|
+ lockdep_assert_held(&nft_net->commit_mutex);
|
|
|
|
if (tb[NFTA_DYNSET_SET_NAME] == NULL ||
|
|
tb[NFTA_DYNSET_OP] == NULL ||
|
|
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
|
|
index bf7e300e8c25d..29eabd45b832a 100644
|
|
--- a/net/netlink/af_netlink.c
|
|
+++ b/net/netlink/af_netlink.c
|
|
@@ -1601,6 +1601,7 @@ out:
|
|
int netlink_set_err(struct sock *ssk, u32 portid, u32 group, int code)
|
|
{
|
|
struct netlink_set_err_data info;
|
|
+ unsigned long flags;
|
|
struct sock *sk;
|
|
int ret = 0;
|
|
|
|
@@ -1610,12 +1611,12 @@ int netlink_set_err(struct sock *ssk, u32 portid, u32 group, int code)
|
|
/* sk->sk_err wants a positive error value */
|
|
info.code = -code;
|
|
|
|
- read_lock(&nl_table_lock);
|
|
+ read_lock_irqsave(&nl_table_lock, flags);
|
|
|
|
sk_for_each_bound(sk, &nl_table[ssk->sk_protocol].mc_list)
|
|
ret += do_one_set_err(sk, &info);
|
|
|
|
- read_unlock(&nl_table_lock);
|
|
+ read_unlock_irqrestore(&nl_table_lock, flags);
|
|
return ret;
|
|
}
|
|
EXPORT_SYMBOL(netlink_set_err);
|
|
diff --git a/net/netlink/diag.c b/net/netlink/diag.c
|
|
index c6255eac305c7..e4f21b1067bcc 100644
|
|
--- a/net/netlink/diag.c
|
|
+++ b/net/netlink/diag.c
|
|
@@ -94,6 +94,7 @@ static int __netlink_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
|
|
struct net *net = sock_net(skb->sk);
|
|
struct netlink_diag_req *req;
|
|
struct netlink_sock *nlsk;
|
|
+ unsigned long flags;
|
|
struct sock *sk;
|
|
int num = 2;
|
|
int ret = 0;
|
|
@@ -152,7 +153,7 @@ static int __netlink_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
|
|
num++;
|
|
|
|
mc_list:
|
|
- read_lock(&nl_table_lock);
|
|
+ read_lock_irqsave(&nl_table_lock, flags);
|
|
sk_for_each_bound(sk, &tbl->mc_list) {
|
|
if (sk_hashed(sk))
|
|
continue;
|
|
@@ -167,13 +168,13 @@ mc_list:
|
|
NETLINK_CB(cb->skb).portid,
|
|
cb->nlh->nlmsg_seq,
|
|
NLM_F_MULTI,
|
|
- sock_i_ino(sk)) < 0) {
|
|
+ __sock_i_ino(sk)) < 0) {
|
|
ret = 1;
|
|
break;
|
|
}
|
|
num++;
|
|
}
|
|
- read_unlock(&nl_table_lock);
|
|
+ read_unlock_irqrestore(&nl_table_lock, flags);
|
|
|
|
done:
|
|
cb->args[0] = num;
|
|
diff --git a/net/nfc/core.c b/net/nfc/core.c
|
|
index 2d4729d1f0eb9..fef112fb49930 100644
|
|
--- a/net/nfc/core.c
|
|
+++ b/net/nfc/core.c
|
|
@@ -634,7 +634,7 @@ error:
|
|
return rc;
|
|
}
|
|
|
|
-int nfc_set_remote_general_bytes(struct nfc_dev *dev, u8 *gb, u8 gb_len)
|
|
+int nfc_set_remote_general_bytes(struct nfc_dev *dev, const u8 *gb, u8 gb_len)
|
|
{
|
|
pr_debug("dev_name=%s gb_len=%d\n", dev_name(&dev->dev), gb_len);
|
|
|
|
@@ -663,7 +663,7 @@ int nfc_tm_data_received(struct nfc_dev *dev, struct sk_buff *skb)
|
|
EXPORT_SYMBOL(nfc_tm_data_received);
|
|
|
|
int nfc_tm_activated(struct nfc_dev *dev, u32 protocol, u8 comm_mode,
|
|
- u8 *gb, size_t gb_len)
|
|
+ const u8 *gb, size_t gb_len)
|
|
{
|
|
int rc;
|
|
|
|
diff --git a/net/nfc/hci/llc_shdlc.c b/net/nfc/hci/llc_shdlc.c
|
|
index 0eb4ddc056e78..02909e3e91ef1 100644
|
|
--- a/net/nfc/hci/llc_shdlc.c
|
|
+++ b/net/nfc/hci/llc_shdlc.c
|
|
@@ -123,7 +123,7 @@ static bool llc_shdlc_x_lteq_y_lt_z(int x, int y, int z)
|
|
return ((y >= x) || (y < z)) ? true : false;
|
|
}
|
|
|
|
-static struct sk_buff *llc_shdlc_alloc_skb(struct llc_shdlc *shdlc,
|
|
+static struct sk_buff *llc_shdlc_alloc_skb(const struct llc_shdlc *shdlc,
|
|
int payload_len)
|
|
{
|
|
struct sk_buff *skb;
|
|
@@ -137,7 +137,7 @@ static struct sk_buff *llc_shdlc_alloc_skb(struct llc_shdlc *shdlc,
|
|
}
|
|
|
|
/* immediately sends an S frame. */
|
|
-static int llc_shdlc_send_s_frame(struct llc_shdlc *shdlc,
|
|
+static int llc_shdlc_send_s_frame(const struct llc_shdlc *shdlc,
|
|
enum sframe_type sframe_type, int nr)
|
|
{
|
|
int r;
|
|
@@ -159,7 +159,7 @@ static int llc_shdlc_send_s_frame(struct llc_shdlc *shdlc,
|
|
}
|
|
|
|
/* immediately sends an U frame. skb may contain optional payload */
|
|
-static int llc_shdlc_send_u_frame(struct llc_shdlc *shdlc,
|
|
+static int llc_shdlc_send_u_frame(const struct llc_shdlc *shdlc,
|
|
struct sk_buff *skb,
|
|
enum uframe_modifier uframe_modifier)
|
|
{
|
|
@@ -361,7 +361,7 @@ static void llc_shdlc_connect_complete(struct llc_shdlc *shdlc, int r)
|
|
wake_up(shdlc->connect_wq);
|
|
}
|
|
|
|
-static int llc_shdlc_connect_initiate(struct llc_shdlc *shdlc)
|
|
+static int llc_shdlc_connect_initiate(const struct llc_shdlc *shdlc)
|
|
{
|
|
struct sk_buff *skb;
|
|
|
|
@@ -377,7 +377,7 @@ static int llc_shdlc_connect_initiate(struct llc_shdlc *shdlc)
|
|
return llc_shdlc_send_u_frame(shdlc, skb, U_FRAME_RSET);
|
|
}
|
|
|
|
-static int llc_shdlc_connect_send_ua(struct llc_shdlc *shdlc)
|
|
+static int llc_shdlc_connect_send_ua(const struct llc_shdlc *shdlc)
|
|
{
|
|
struct sk_buff *skb;
|
|
|
|
diff --git a/net/nfc/llcp.h b/net/nfc/llcp.h
|
|
index 97853c9cefc70..a81893bc06ce8 100644
|
|
--- a/net/nfc/llcp.h
|
|
+++ b/net/nfc/llcp.h
|
|
@@ -202,7 +202,6 @@ void nfc_llcp_sock_link(struct llcp_sock_list *l, struct sock *s);
|
|
void nfc_llcp_sock_unlink(struct llcp_sock_list *l, struct sock *s);
|
|
void nfc_llcp_socket_remote_param_init(struct nfc_llcp_sock *sock);
|
|
struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev);
|
|
-struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local);
|
|
int nfc_llcp_local_put(struct nfc_llcp_local *local);
|
|
u8 nfc_llcp_get_sdp_ssap(struct nfc_llcp_local *local,
|
|
struct nfc_llcp_sock *sock);
|
|
@@ -221,15 +220,15 @@ struct sock *nfc_llcp_accept_dequeue(struct sock *sk, struct socket *newsock);
|
|
|
|
/* TLV API */
|
|
int nfc_llcp_parse_gb_tlv(struct nfc_llcp_local *local,
|
|
- u8 *tlv_array, u16 tlv_array_len);
|
|
+ const u8 *tlv_array, u16 tlv_array_len);
|
|
int nfc_llcp_parse_connection_tlv(struct nfc_llcp_sock *sock,
|
|
- u8 *tlv_array, u16 tlv_array_len);
|
|
+ const u8 *tlv_array, u16 tlv_array_len);
|
|
|
|
/* Commands API */
|
|
void nfc_llcp_recv(void *data, struct sk_buff *skb, int err);
|
|
-u8 *nfc_llcp_build_tlv(u8 type, u8 *value, u8 value_length, u8 *tlv_length);
|
|
+u8 *nfc_llcp_build_tlv(u8 type, const u8 *value, u8 value_length, u8 *tlv_length);
|
|
struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdres_tlv(u8 tid, u8 sap);
|
|
-struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdreq_tlv(u8 tid, char *uri,
|
|
+struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdreq_tlv(u8 tid, const char *uri,
|
|
size_t uri_len);
|
|
void nfc_llcp_free_sdp_tlv(struct nfc_llcp_sdp_tlv *sdp);
|
|
void nfc_llcp_free_sdp_tlv_list(struct hlist_head *sdp_head);
|
|
diff --git a/net/nfc/llcp_commands.c b/net/nfc/llcp_commands.c
|
|
index 475061c79c442..5b8754ae7d3af 100644
|
|
--- a/net/nfc/llcp_commands.c
|
|
+++ b/net/nfc/llcp_commands.c
|
|
@@ -15,7 +15,7 @@
|
|
#include "nfc.h"
|
|
#include "llcp.h"
|
|
|
|
-static u8 llcp_tlv_length[LLCP_TLV_MAX] = {
|
|
+static const u8 llcp_tlv_length[LLCP_TLV_MAX] = {
|
|
0,
|
|
1, /* VERSION */
|
|
2, /* MIUX */
|
|
@@ -29,7 +29,7 @@ static u8 llcp_tlv_length[LLCP_TLV_MAX] = {
|
|
|
|
};
|
|
|
|
-static u8 llcp_tlv8(u8 *tlv, u8 type)
|
|
+static u8 llcp_tlv8(const u8 *tlv, u8 type)
|
|
{
|
|
if (tlv[0] != type || tlv[1] != llcp_tlv_length[tlv[0]])
|
|
return 0;
|
|
@@ -37,7 +37,7 @@ static u8 llcp_tlv8(u8 *tlv, u8 type)
|
|
return tlv[2];
|
|
}
|
|
|
|
-static u16 llcp_tlv16(u8 *tlv, u8 type)
|
|
+static u16 llcp_tlv16(const u8 *tlv, u8 type)
|
|
{
|
|
if (tlv[0] != type || tlv[1] != llcp_tlv_length[tlv[0]])
|
|
return 0;
|
|
@@ -46,37 +46,37 @@ static u16 llcp_tlv16(u8 *tlv, u8 type)
|
|
}
|
|
|
|
|
|
-static u8 llcp_tlv_version(u8 *tlv)
|
|
+static u8 llcp_tlv_version(const u8 *tlv)
|
|
{
|
|
return llcp_tlv8(tlv, LLCP_TLV_VERSION);
|
|
}
|
|
|
|
-static u16 llcp_tlv_miux(u8 *tlv)
|
|
+static u16 llcp_tlv_miux(const u8 *tlv)
|
|
{
|
|
return llcp_tlv16(tlv, LLCP_TLV_MIUX) & 0x7ff;
|
|
}
|
|
|
|
-static u16 llcp_tlv_wks(u8 *tlv)
|
|
+static u16 llcp_tlv_wks(const u8 *tlv)
|
|
{
|
|
return llcp_tlv16(tlv, LLCP_TLV_WKS);
|
|
}
|
|
|
|
-static u16 llcp_tlv_lto(u8 *tlv)
|
|
+static u16 llcp_tlv_lto(const u8 *tlv)
|
|
{
|
|
return llcp_tlv8(tlv, LLCP_TLV_LTO);
|
|
}
|
|
|
|
-static u8 llcp_tlv_opt(u8 *tlv)
|
|
+static u8 llcp_tlv_opt(const u8 *tlv)
|
|
{
|
|
return llcp_tlv8(tlv, LLCP_TLV_OPT);
|
|
}
|
|
|
|
-static u8 llcp_tlv_rw(u8 *tlv)
|
|
+static u8 llcp_tlv_rw(const u8 *tlv)
|
|
{
|
|
return llcp_tlv8(tlv, LLCP_TLV_RW) & 0xf;
|
|
}
|
|
|
|
-u8 *nfc_llcp_build_tlv(u8 type, u8 *value, u8 value_length, u8 *tlv_length)
|
|
+u8 *nfc_llcp_build_tlv(u8 type, const u8 *value, u8 value_length, u8 *tlv_length)
|
|
{
|
|
u8 *tlv, length;
|
|
|
|
@@ -130,7 +130,7 @@ struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdres_tlv(u8 tid, u8 sap)
|
|
return sdres;
|
|
}
|
|
|
|
-struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdreq_tlv(u8 tid, char *uri,
|
|
+struct nfc_llcp_sdp_tlv *nfc_llcp_build_sdreq_tlv(u8 tid, const char *uri,
|
|
size_t uri_len)
|
|
{
|
|
struct nfc_llcp_sdp_tlv *sdreq;
|
|
@@ -190,9 +190,10 @@ void nfc_llcp_free_sdp_tlv_list(struct hlist_head *head)
|
|
}
|
|
|
|
int nfc_llcp_parse_gb_tlv(struct nfc_llcp_local *local,
|
|
- u8 *tlv_array, u16 tlv_array_len)
|
|
+ const u8 *tlv_array, u16 tlv_array_len)
|
|
{
|
|
- u8 *tlv = tlv_array, type, length, offset = 0;
|
|
+ const u8 *tlv = tlv_array;
|
|
+ u8 type, length, offset = 0;
|
|
|
|
pr_debug("TLV array length %d\n", tlv_array_len);
|
|
|
|
@@ -239,9 +240,10 @@ int nfc_llcp_parse_gb_tlv(struct nfc_llcp_local *local,
|
|
}
|
|
|
|
int nfc_llcp_parse_connection_tlv(struct nfc_llcp_sock *sock,
|
|
- u8 *tlv_array, u16 tlv_array_len)
|
|
+ const u8 *tlv_array, u16 tlv_array_len)
|
|
{
|
|
- u8 *tlv = tlv_array, type, length, offset = 0;
|
|
+ const u8 *tlv = tlv_array;
|
|
+ u8 type, length, offset = 0;
|
|
|
|
pr_debug("TLV array length %d\n", tlv_array_len);
|
|
|
|
@@ -295,7 +297,7 @@ static struct sk_buff *llcp_add_header(struct sk_buff *pdu,
|
|
return pdu;
|
|
}
|
|
|
|
-static struct sk_buff *llcp_add_tlv(struct sk_buff *pdu, u8 *tlv,
|
|
+static struct sk_buff *llcp_add_tlv(struct sk_buff *pdu, const u8 *tlv,
|
|
u8 tlv_length)
|
|
{
|
|
/* XXX Add an skb length check */
|
|
@@ -359,6 +361,7 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
|
|
struct sk_buff *skb;
|
|
struct nfc_llcp_local *local;
|
|
u16 size = 0;
|
|
+ int err;
|
|
|
|
pr_debug("Sending SYMM\n");
|
|
|
|
@@ -370,8 +373,10 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
|
|
size += dev->tx_headroom + dev->tx_tailroom + NFC_HEADER_SIZE;
|
|
|
|
skb = alloc_skb(size, GFP_KERNEL);
|
|
- if (skb == NULL)
|
|
- return -ENOMEM;
|
|
+ if (skb == NULL) {
|
|
+ err = -ENOMEM;
|
|
+ goto out;
|
|
+ }
|
|
|
|
skb_reserve(skb, dev->tx_headroom + NFC_HEADER_SIZE);
|
|
|
|
@@ -381,17 +386,22 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
|
|
|
|
nfc_llcp_send_to_raw_sock(local, skb, NFC_DIRECTION_TX);
|
|
|
|
- return nfc_data_exchange(dev, local->target_idx, skb,
|
|
+ err = nfc_data_exchange(dev, local->target_idx, skb,
|
|
nfc_llcp_recv, local);
|
|
+out:
|
|
+ nfc_llcp_local_put(local);
|
|
+ return err;
|
|
}
|
|
|
|
int nfc_llcp_send_connect(struct nfc_llcp_sock *sock)
|
|
{
|
|
struct nfc_llcp_local *local;
|
|
struct sk_buff *skb;
|
|
- u8 *service_name_tlv = NULL, service_name_tlv_length;
|
|
- u8 *miux_tlv = NULL, miux_tlv_length;
|
|
- u8 *rw_tlv = NULL, rw_tlv_length, rw;
|
|
+ const u8 *service_name_tlv = NULL;
|
|
+ const u8 *miux_tlv = NULL;
|
|
+ const u8 *rw_tlv = NULL;
|
|
+ u8 service_name_tlv_length = 0;
|
|
+ u8 miux_tlv_length, rw_tlv_length, rw;
|
|
int err;
|
|
u16 size = 0;
|
|
__be16 miux;
|
|
@@ -465,8 +475,9 @@ int nfc_llcp_send_cc(struct nfc_llcp_sock *sock)
|
|
{
|
|
struct nfc_llcp_local *local;
|
|
struct sk_buff *skb;
|
|
- u8 *miux_tlv = NULL, miux_tlv_length;
|
|
- u8 *rw_tlv = NULL, rw_tlv_length, rw;
|
|
+ const u8 *miux_tlv = NULL;
|
|
+ const u8 *rw_tlv = NULL;
|
|
+ u8 miux_tlv_length, rw_tlv_length, rw;
|
|
int err;
|
|
u16 size = 0;
|
|
__be16 miux;
|
|
diff --git a/net/nfc/llcp_core.c b/net/nfc/llcp_core.c
|
|
index edadebb3efd2a..ddfd159f64e13 100644
|
|
--- a/net/nfc/llcp_core.c
|
|
+++ b/net/nfc/llcp_core.c
|
|
@@ -17,6 +17,8 @@
|
|
static u8 llcp_magic[3] = {0x46, 0x66, 0x6d};
|
|
|
|
static LIST_HEAD(llcp_devices);
|
|
+/* Protects llcp_devices list */
|
|
+static DEFINE_SPINLOCK(llcp_devices_lock);
|
|
|
|
static void nfc_llcp_rx_skb(struct nfc_llcp_local *local, struct sk_buff *skb);
|
|
|
|
@@ -143,7 +145,7 @@ static void nfc_llcp_socket_release(struct nfc_llcp_local *local, bool device,
|
|
write_unlock(&local->raw_sockets.lock);
|
|
}
|
|
|
|
-struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
|
|
+static struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
|
|
{
|
|
kref_get(&local->ref);
|
|
|
|
@@ -171,7 +173,6 @@ static void local_release(struct kref *ref)
|
|
|
|
local = container_of(ref, struct nfc_llcp_local, ref);
|
|
|
|
- list_del(&local->list);
|
|
local_cleanup(local);
|
|
kfree(local);
|
|
}
|
|
@@ -284,12 +285,33 @@ static void nfc_llcp_sdreq_timer(struct timer_list *t)
|
|
struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev)
|
|
{
|
|
struct nfc_llcp_local *local;
|
|
+ struct nfc_llcp_local *res = NULL;
|
|
|
|
+ spin_lock(&llcp_devices_lock);
|
|
list_for_each_entry(local, &llcp_devices, list)
|
|
- if (local->dev == dev)
|
|
+ if (local->dev == dev) {
|
|
+ res = nfc_llcp_local_get(local);
|
|
+ break;
|
|
+ }
|
|
+ spin_unlock(&llcp_devices_lock);
|
|
+
|
|
+ return res;
|
|
+}
|
|
+
|
|
+static struct nfc_llcp_local *nfc_llcp_remove_local(struct nfc_dev *dev)
|
|
+{
|
|
+ struct nfc_llcp_local *local, *tmp;
|
|
+
|
|
+ spin_lock(&llcp_devices_lock);
|
|
+ list_for_each_entry_safe(local, tmp, &llcp_devices, list)
|
|
+ if (local->dev == dev) {
|
|
+ list_del(&local->list);
|
|
+ spin_unlock(&llcp_devices_lock);
|
|
return local;
|
|
+ }
|
|
+ spin_unlock(&llcp_devices_lock);
|
|
|
|
- pr_debug("No device found\n");
|
|
+ pr_warn("Shutting down device not found\n");
|
|
|
|
return NULL;
|
|
}
|
|
@@ -302,7 +324,7 @@ static char *wks[] = {
|
|
"urn:nfc:sn:snep",
|
|
};
|
|
|
|
-static int nfc_llcp_wks_sap(char *service_name, size_t service_name_len)
|
|
+static int nfc_llcp_wks_sap(const char *service_name, size_t service_name_len)
|
|
{
|
|
int sap, num_wks;
|
|
|
|
@@ -326,7 +348,7 @@ static int nfc_llcp_wks_sap(char *service_name, size_t service_name_len)
|
|
|
|
static
|
|
struct nfc_llcp_sock *nfc_llcp_sock_from_sn(struct nfc_llcp_local *local,
|
|
- u8 *sn, size_t sn_len)
|
|
+ const u8 *sn, size_t sn_len)
|
|
{
|
|
struct sock *sk;
|
|
struct nfc_llcp_sock *llcp_sock, *tmp_sock;
|
|
@@ -523,7 +545,7 @@ static int nfc_llcp_build_gb(struct nfc_llcp_local *local)
|
|
{
|
|
u8 *gb_cur, version, version_length;
|
|
u8 lto_length, wks_length, miux_length;
|
|
- u8 *version_tlv = NULL, *lto_tlv = NULL,
|
|
+ const u8 *version_tlv = NULL, *lto_tlv = NULL,
|
|
*wks_tlv = NULL, *miux_tlv = NULL;
|
|
__be16 wks = cpu_to_be16(local->local_wks);
|
|
u8 gb_len = 0;
|
|
@@ -610,12 +632,15 @@ u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len)
|
|
|
|
*general_bytes_len = local->gb_len;
|
|
|
|
+ nfc_llcp_local_put(local);
|
|
+
|
|
return local->gb;
|
|
}
|
|
|
|
-int nfc_llcp_set_remote_gb(struct nfc_dev *dev, u8 *gb, u8 gb_len)
|
|
+int nfc_llcp_set_remote_gb(struct nfc_dev *dev, const u8 *gb, u8 gb_len)
|
|
{
|
|
struct nfc_llcp_local *local;
|
|
+ int err;
|
|
|
|
if (gb_len < 3 || gb_len > NFC_MAX_GT_LEN)
|
|
return -EINVAL;
|
|
@@ -632,35 +657,39 @@ int nfc_llcp_set_remote_gb(struct nfc_dev *dev, u8 *gb, u8 gb_len)
|
|
|
|
if (memcmp(local->remote_gb, llcp_magic, 3)) {
|
|
pr_err("MAC does not support LLCP\n");
|
|
- return -EINVAL;
|
|
+ err = -EINVAL;
|
|
+ goto out;
|
|
}
|
|
|
|
- return nfc_llcp_parse_gb_tlv(local,
|
|
+ err = nfc_llcp_parse_gb_tlv(local,
|
|
&local->remote_gb[3],
|
|
local->remote_gb_len - 3);
|
|
+out:
|
|
+ nfc_llcp_local_put(local);
|
|
+ return err;
|
|
}
|
|
|
|
-static u8 nfc_llcp_dsap(struct sk_buff *pdu)
|
|
+static u8 nfc_llcp_dsap(const struct sk_buff *pdu)
|
|
{
|
|
return (pdu->data[0] & 0xfc) >> 2;
|
|
}
|
|
|
|
-static u8 nfc_llcp_ptype(struct sk_buff *pdu)
|
|
+static u8 nfc_llcp_ptype(const struct sk_buff *pdu)
|
|
{
|
|
return ((pdu->data[0] & 0x03) << 2) | ((pdu->data[1] & 0xc0) >> 6);
|
|
}
|
|
|
|
-static u8 nfc_llcp_ssap(struct sk_buff *pdu)
|
|
+static u8 nfc_llcp_ssap(const struct sk_buff *pdu)
|
|
{
|
|
return pdu->data[1] & 0x3f;
|
|
}
|
|
|
|
-static u8 nfc_llcp_ns(struct sk_buff *pdu)
|
|
+static u8 nfc_llcp_ns(const struct sk_buff *pdu)
|
|
{
|
|
return pdu->data[2] >> 4;
|
|
}
|
|
|
|
-static u8 nfc_llcp_nr(struct sk_buff *pdu)
|
|
+static u8 nfc_llcp_nr(const struct sk_buff *pdu)
|
|
{
|
|
return pdu->data[2] & 0xf;
|
|
}
|
|
@@ -802,7 +831,7 @@ out:
|
|
}
|
|
|
|
static struct nfc_llcp_sock *nfc_llcp_sock_get_sn(struct nfc_llcp_local *local,
|
|
- u8 *sn, size_t sn_len)
|
|
+ const u8 *sn, size_t sn_len)
|
|
{
|
|
struct nfc_llcp_sock *llcp_sock;
|
|
|
|
@@ -816,9 +845,10 @@ static struct nfc_llcp_sock *nfc_llcp_sock_get_sn(struct nfc_llcp_local *local,
|
|
return llcp_sock;
|
|
}
|
|
|
|
-static u8 *nfc_llcp_connect_sn(struct sk_buff *skb, size_t *sn_len)
|
|
+static const u8 *nfc_llcp_connect_sn(const struct sk_buff *skb, size_t *sn_len)
|
|
{
|
|
- u8 *tlv = &skb->data[2], type, length;
|
|
+ u8 type, length;
|
|
+ const u8 *tlv = &skb->data[2];
|
|
size_t tlv_array_len = skb->len - LLCP_HEADER_SIZE, offset = 0;
|
|
|
|
while (offset < tlv_array_len) {
|
|
@@ -876,7 +906,7 @@ static void nfc_llcp_recv_ui(struct nfc_llcp_local *local,
|
|
}
|
|
|
|
static void nfc_llcp_recv_connect(struct nfc_llcp_local *local,
|
|
- struct sk_buff *skb)
|
|
+ const struct sk_buff *skb)
|
|
{
|
|
struct sock *new_sk, *parent;
|
|
struct nfc_llcp_sock *sock, *new_sock;
|
|
@@ -894,7 +924,7 @@ static void nfc_llcp_recv_connect(struct nfc_llcp_local *local,
|
|
goto fail;
|
|
}
|
|
} else {
|
|
- u8 *sn;
|
|
+ const u8 *sn;
|
|
size_t sn_len;
|
|
|
|
sn = nfc_llcp_connect_sn(skb, &sn_len);
|
|
@@ -1113,7 +1143,7 @@ static void nfc_llcp_recv_hdlc(struct nfc_llcp_local *local,
|
|
}
|
|
|
|
static void nfc_llcp_recv_disc(struct nfc_llcp_local *local,
|
|
- struct sk_buff *skb)
|
|
+ const struct sk_buff *skb)
|
|
{
|
|
struct nfc_llcp_sock *llcp_sock;
|
|
struct sock *sk;
|
|
@@ -1156,7 +1186,8 @@ static void nfc_llcp_recv_disc(struct nfc_llcp_local *local,
|
|
nfc_llcp_sock_put(llcp_sock);
|
|
}
|
|
|
|
-static void nfc_llcp_recv_cc(struct nfc_llcp_local *local, struct sk_buff *skb)
|
|
+static void nfc_llcp_recv_cc(struct nfc_llcp_local *local,
|
|
+ const struct sk_buff *skb)
|
|
{
|
|
struct nfc_llcp_sock *llcp_sock;
|
|
struct sock *sk;
|
|
@@ -1189,7 +1220,8 @@ static void nfc_llcp_recv_cc(struct nfc_llcp_local *local, struct sk_buff *skb)
|
|
nfc_llcp_sock_put(llcp_sock);
|
|
}
|
|
|
|
-static void nfc_llcp_recv_dm(struct nfc_llcp_local *local, struct sk_buff *skb)
|
|
+static void nfc_llcp_recv_dm(struct nfc_llcp_local *local,
|
|
+ const struct sk_buff *skb)
|
|
{
|
|
struct nfc_llcp_sock *llcp_sock;
|
|
struct sock *sk;
|
|
@@ -1227,12 +1259,13 @@ static void nfc_llcp_recv_dm(struct nfc_llcp_local *local, struct sk_buff *skb)
|
|
}
|
|
|
|
static void nfc_llcp_recv_snl(struct nfc_llcp_local *local,
|
|
- struct sk_buff *skb)
|
|
+ const struct sk_buff *skb)
|
|
{
|
|
struct nfc_llcp_sock *llcp_sock;
|
|
- u8 dsap, ssap, *tlv, type, length, tid, sap;
|
|
+ u8 dsap, ssap, type, length, tid, sap;
|
|
+ const u8 *tlv;
|
|
u16 tlv_len, offset;
|
|
- char *service_name;
|
|
+ const char *service_name;
|
|
size_t service_name_len;
|
|
struct nfc_llcp_sdp_tlv *sdp;
|
|
HLIST_HEAD(llc_sdres_list);
|
|
@@ -1523,6 +1556,8 @@ int nfc_llcp_data_received(struct nfc_dev *dev, struct sk_buff *skb)
|
|
|
|
__nfc_llcp_recv(local, skb);
|
|
|
|
+ nfc_llcp_local_put(local);
|
|
+
|
|
return 0;
|
|
}
|
|
|
|
@@ -1539,6 +1574,8 @@ void nfc_llcp_mac_is_down(struct nfc_dev *dev)
|
|
|
|
/* Close and purge all existing sockets */
|
|
nfc_llcp_socket_release(local, true, 0);
|
|
+
|
|
+ nfc_llcp_local_put(local);
|
|
}
|
|
|
|
void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
|
|
@@ -1564,6 +1601,8 @@ void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
|
|
mod_timer(&local->link_timer,
|
|
jiffies + msecs_to_jiffies(local->remote_lto));
|
|
}
|
|
+
|
|
+ nfc_llcp_local_put(local);
|
|
}
|
|
|
|
int nfc_llcp_register_device(struct nfc_dev *ndev)
|
|
@@ -1614,7 +1653,7 @@ int nfc_llcp_register_device(struct nfc_dev *ndev)
|
|
|
|
void nfc_llcp_unregister_device(struct nfc_dev *dev)
|
|
{
|
|
- struct nfc_llcp_local *local = nfc_llcp_find_local(dev);
|
|
+ struct nfc_llcp_local *local = nfc_llcp_remove_local(dev);
|
|
|
|
if (local == NULL) {
|
|
pr_debug("No such device\n");
|
|
diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
|
|
index bd2174699af97..aea337d817025 100644
|
|
--- a/net/nfc/llcp_sock.c
|
|
+++ b/net/nfc/llcp_sock.c
|
|
@@ -99,7 +99,7 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
|
|
}
|
|
|
|
llcp_sock->dev = dev;
|
|
- llcp_sock->local = nfc_llcp_local_get(local);
|
|
+ llcp_sock->local = local;
|
|
llcp_sock->nfc_protocol = llcp_addr.nfc_protocol;
|
|
llcp_sock->service_name_len = min_t(unsigned int,
|
|
llcp_addr.service_name_len,
|
|
@@ -181,7 +181,7 @@ static int llcp_raw_sock_bind(struct socket *sock, struct sockaddr *addr,
|
|
}
|
|
|
|
llcp_sock->dev = dev;
|
|
- llcp_sock->local = nfc_llcp_local_get(local);
|
|
+ llcp_sock->local = local;
|
|
llcp_sock->nfc_protocol = llcp_addr.nfc_protocol;
|
|
|
|
nfc_llcp_sock_link(&local->raw_sockets, sk);
|
|
@@ -698,24 +698,22 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
|
|
if (dev->dep_link_up == false) {
|
|
ret = -ENOLINK;
|
|
device_unlock(&dev->dev);
|
|
- goto put_dev;
|
|
+ goto sock_llcp_put_local;
|
|
}
|
|
device_unlock(&dev->dev);
|
|
|
|
if (local->rf_mode == NFC_RF_INITIATOR &&
|
|
addr->target_idx != local->target_idx) {
|
|
ret = -ENOLINK;
|
|
- goto put_dev;
|
|
+ goto sock_llcp_put_local;
|
|
}
|
|
|
|
llcp_sock->dev = dev;
|
|
- llcp_sock->local = nfc_llcp_local_get(local);
|
|
+ llcp_sock->local = local;
|
|
llcp_sock->ssap = nfc_llcp_get_local_ssap(local);
|
|
if (llcp_sock->ssap == LLCP_SAP_MAX) {
|
|
- nfc_llcp_local_put(llcp_sock->local);
|
|
- llcp_sock->local = NULL;
|
|
ret = -ENOMEM;
|
|
- goto put_dev;
|
|
+ goto sock_llcp_nullify;
|
|
}
|
|
|
|
llcp_sock->reserved_ssap = llcp_sock->ssap;
|
|
@@ -760,8 +758,13 @@ sock_unlink:
|
|
|
|
sock_llcp_release:
|
|
nfc_llcp_put_ssap(local, llcp_sock->ssap);
|
|
- nfc_llcp_local_put(llcp_sock->local);
|
|
+
|
|
+sock_llcp_nullify:
|
|
llcp_sock->local = NULL;
|
|
+ llcp_sock->dev = NULL;
|
|
+
|
|
+sock_llcp_put_local:
|
|
+ nfc_llcp_local_put(local);
|
|
|
|
put_dev:
|
|
nfc_put_device(dev);
|
|
diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
|
|
index 66ab97131fd24..5b55466fe315a 100644
|
|
--- a/net/nfc/netlink.c
|
|
+++ b/net/nfc/netlink.c
|
|
@@ -1047,11 +1047,14 @@ static int nfc_genl_llc_get_params(struct sk_buff *skb, struct genl_info *info)
|
|
msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
|
|
if (!msg) {
|
|
rc = -ENOMEM;
|
|
- goto exit;
|
|
+ goto put_local;
|
|
}
|
|
|
|
rc = nfc_genl_send_params(msg, local, info->snd_portid, info->snd_seq);
|
|
|
|
+put_local:
|
|
+ nfc_llcp_local_put(local);
|
|
+
|
|
exit:
|
|
device_unlock(&dev->dev);
|
|
|
|
@@ -1113,7 +1116,7 @@ static int nfc_genl_llc_set_params(struct sk_buff *skb, struct genl_info *info)
|
|
if (info->attrs[NFC_ATTR_LLC_PARAM_LTO]) {
|
|
if (dev->dep_link_up) {
|
|
rc = -EINPROGRESS;
|
|
- goto exit;
|
|
+ goto put_local;
|
|
}
|
|
|
|
local->lto = nla_get_u8(info->attrs[NFC_ATTR_LLC_PARAM_LTO]);
|
|
@@ -1125,6 +1128,9 @@ static int nfc_genl_llc_set_params(struct sk_buff *skb, struct genl_info *info)
|
|
if (info->attrs[NFC_ATTR_LLC_PARAM_MIUX])
|
|
local->miux = cpu_to_be16(miux);
|
|
|
|
+put_local:
|
|
+ nfc_llcp_local_put(local);
|
|
+
|
|
exit:
|
|
device_unlock(&dev->dev);
|
|
|
|
@@ -1180,7 +1186,7 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
|
|
|
|
if (rc != 0) {
|
|
rc = -EINVAL;
|
|
- goto exit;
|
|
+ goto put_local;
|
|
}
|
|
|
|
if (!sdp_attrs[NFC_SDP_ATTR_URI])
|
|
@@ -1199,7 +1205,7 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
|
|
sdreq = nfc_llcp_build_sdreq_tlv(tid, uri, uri_len);
|
|
if (sdreq == NULL) {
|
|
rc = -ENOMEM;
|
|
- goto exit;
|
|
+ goto put_local;
|
|
}
|
|
|
|
tlvs_len += sdreq->tlv_len;
|
|
@@ -1209,10 +1215,14 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
|
|
|
|
if (hlist_empty(&sdreq_list)) {
|
|
rc = -EINVAL;
|
|
- goto exit;
|
|
+ goto put_local;
|
|
}
|
|
|
|
rc = nfc_llcp_send_snl_sdreq(local, &sdreq_list, tlvs_len);
|
|
+
|
|
+put_local:
|
|
+ nfc_llcp_local_put(local);
|
|
+
|
|
exit:
|
|
device_unlock(&dev->dev);
|
|
|
|
diff --git a/net/nfc/nfc.h b/net/nfc/nfc.h
|
|
index 889fefd64e56b..0b1e6466f4fbf 100644
|
|
--- a/net/nfc/nfc.h
|
|
+++ b/net/nfc/nfc.h
|
|
@@ -48,10 +48,11 @@ void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
|
|
u8 comm_mode, u8 rf_mode);
|
|
int nfc_llcp_register_device(struct nfc_dev *dev);
|
|
void nfc_llcp_unregister_device(struct nfc_dev *dev);
|
|
-int nfc_llcp_set_remote_gb(struct nfc_dev *dev, u8 *gb, u8 gb_len);
|
|
+int nfc_llcp_set_remote_gb(struct nfc_dev *dev, const u8 *gb, u8 gb_len);
|
|
u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len);
|
|
int nfc_llcp_data_received(struct nfc_dev *dev, struct sk_buff *skb);
|
|
struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev);
|
|
+int nfc_llcp_local_put(struct nfc_llcp_local *local);
|
|
int __init nfc_llcp_init(void);
|
|
void nfc_llcp_exit(void);
|
|
void nfc_llcp_free_sdp_tlv(struct nfc_llcp_sdp_tlv *sdp);
|
|
diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
|
|
index f095a0fb75c6d..bf74f3f4c7522 100644
|
|
--- a/net/sched/act_pedit.c
|
|
+++ b/net/sched/act_pedit.c
|
|
@@ -26,6 +26,7 @@ static struct tc_action_ops act_pedit_ops;
|
|
|
|
static const struct nla_policy pedit_policy[TCA_PEDIT_MAX + 1] = {
|
|
[TCA_PEDIT_PARMS] = { .len = sizeof(struct tc_pedit) },
|
|
+ [TCA_PEDIT_PARMS_EX] = { .len = sizeof(struct tc_pedit) },
|
|
[TCA_PEDIT_KEYS_EX] = { .type = NLA_NESTED },
|
|
};
|
|
|
|
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
|
|
index f21c97f02d361..c92318f68f92d 100644
|
|
--- a/net/sched/cls_flower.c
|
|
+++ b/net/sched/cls_flower.c
|
|
@@ -719,7 +719,8 @@ static void fl_set_key_val(struct nlattr **tb,
|
|
}
|
|
|
|
static int fl_set_key_port_range(struct nlattr **tb, struct fl_flow_key *key,
|
|
- struct fl_flow_key *mask)
|
|
+ struct fl_flow_key *mask,
|
|
+ struct netlink_ext_ack *extack)
|
|
{
|
|
fl_set_key_val(tb, &key->tp_range.tp_min.dst,
|
|
TCA_FLOWER_KEY_PORT_DST_MIN, &mask->tp_range.tp_min.dst,
|
|
@@ -734,13 +735,32 @@ static int fl_set_key_port_range(struct nlattr **tb, struct fl_flow_key *key,
|
|
TCA_FLOWER_KEY_PORT_SRC_MAX, &mask->tp_range.tp_max.src,
|
|
TCA_FLOWER_UNSPEC, sizeof(key->tp_range.tp_max.src));
|
|
|
|
- if ((mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst &&
|
|
- htons(key->tp_range.tp_max.dst) <=
|
|
- htons(key->tp_range.tp_min.dst)) ||
|
|
- (mask->tp_range.tp_min.src && mask->tp_range.tp_max.src &&
|
|
- htons(key->tp_range.tp_max.src) <=
|
|
- htons(key->tp_range.tp_min.src)))
|
|
+ if (mask->tp_range.tp_min.dst != mask->tp_range.tp_max.dst) {
|
|
+ NL_SET_ERR_MSG(extack,
|
|
+ "Both min and max destination ports must be specified");
|
|
return -EINVAL;
|
|
+ }
|
|
+ if (mask->tp_range.tp_min.src != mask->tp_range.tp_max.src) {
|
|
+ NL_SET_ERR_MSG(extack,
|
|
+ "Both min and max source ports must be specified");
|
|
+ return -EINVAL;
|
|
+ }
|
|
+ if (mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst &&
|
|
+ htons(key->tp_range.tp_max.dst) <=
|
|
+ htons(key->tp_range.tp_min.dst)) {
|
|
+ NL_SET_ERR_MSG_ATTR(extack,
|
|
+ tb[TCA_FLOWER_KEY_PORT_DST_MIN],
|
|
+ "Invalid destination port range (min must be strictly smaller than max)");
|
|
+ return -EINVAL;
|
|
+ }
|
|
+ if (mask->tp_range.tp_min.src && mask->tp_range.tp_max.src &&
|
|
+ htons(key->tp_range.tp_max.src) <=
|
|
+ htons(key->tp_range.tp_min.src)) {
|
|
+ NL_SET_ERR_MSG_ATTR(extack,
|
|
+ tb[TCA_FLOWER_KEY_PORT_SRC_MIN],
|
|
+ "Invalid source port range (min must be strictly smaller than max)");
|
|
+ return -EINVAL;
|
|
+ }
|
|
|
|
return 0;
|
|
}
|
|
@@ -1211,7 +1231,7 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
|
|
if (key->basic.ip_proto == IPPROTO_TCP ||
|
|
key->basic.ip_proto == IPPROTO_UDP ||
|
|
key->basic.ip_proto == IPPROTO_SCTP) {
|
|
- ret = fl_set_key_port_range(tb, key, mask);
|
|
+ ret = fl_set_key_port_range(tb, key, mask, extack);
|
|
if (ret)
|
|
return ret;
|
|
}
|
|
diff --git a/net/sched/cls_fw.c b/net/sched/cls_fw.c
|
|
index ec945294626a8..41f0898a5a565 100644
|
|
--- a/net/sched/cls_fw.c
|
|
+++ b/net/sched/cls_fw.c
|
|
@@ -210,11 +210,6 @@ static int fw_set_parms(struct net *net, struct tcf_proto *tp,
|
|
if (err < 0)
|
|
return err;
|
|
|
|
- if (tb[TCA_FW_CLASSID]) {
|
|
- f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
|
|
- tcf_bind_filter(tp, &f->res, base);
|
|
- }
|
|
-
|
|
if (tb[TCA_FW_INDEV]) {
|
|
int ret;
|
|
ret = tcf_change_indev(net, tb[TCA_FW_INDEV], extack);
|
|
@@ -231,6 +226,11 @@ static int fw_set_parms(struct net *net, struct tcf_proto *tp,
|
|
} else if (head->mask != 0xFFFFFFFF)
|
|
return err;
|
|
|
|
+ if (tb[TCA_FW_CLASSID]) {
|
|
+ f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
|
|
+ tcf_bind_filter(tp, &f->res, base);
|
|
+ }
|
|
+
|
|
return 0;
|
|
}
|
|
|
|
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
|
|
index bf3fed5b91d2b..7cff1a031f761 100644
|
|
--- a/net/sctp/socket.c
|
|
+++ b/net/sctp/socket.c
|
|
@@ -362,9 +362,9 @@ static void sctp_auto_asconf_init(struct sctp_sock *sp)
|
|
struct net *net = sock_net(&sp->inet.sk);
|
|
|
|
if (net->sctp.default_auto_asconf) {
|
|
- spin_lock(&net->sctp.addr_wq_lock);
|
|
+ spin_lock_bh(&net->sctp.addr_wq_lock);
|
|
list_add_tail(&sp->auto_asconf_list, &net->sctp.auto_asconf_splist);
|
|
- spin_unlock(&net->sctp.addr_wq_lock);
|
|
+ spin_unlock_bh(&net->sctp.addr_wq_lock);
|
|
sp->do_auto_asconf = 1;
|
|
}
|
|
}
|
|
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
|
|
index d52abde51f1b4..aeb0b3e48ad59 100644
|
|
--- a/net/sunrpc/svcsock.c
|
|
+++ b/net/sunrpc/svcsock.c
|
|
@@ -728,12 +728,6 @@ static void svc_tcp_listen_data_ready(struct sock *sk)
|
|
dprintk("svc: socket %p TCP (listen) state change %d\n",
|
|
sk, sk->sk_state);
|
|
|
|
- if (svsk) {
|
|
- /* Refer to svc_setup_socket() for details. */
|
|
- rmb();
|
|
- svsk->sk_odata(sk);
|
|
- }
|
|
-
|
|
/*
|
|
* This callback may called twice when a new connection
|
|
* is established as a child socket inherits everything
|
|
@@ -742,15 +736,20 @@ static void svc_tcp_listen_data_ready(struct sock *sk)
|
|
* when one of child sockets become ESTABLISHED.
|
|
* 2) data_ready method of the child socket may be called
|
|
* when it receives data before the socket is accepted.
|
|
- * In case of 2, we should ignore it silently.
|
|
+ * In case of 2, we should ignore it silently and DO NOT
|
|
+ * dereference svsk.
|
|
*/
|
|
- if (sk->sk_state == TCP_LISTEN) {
|
|
- if (svsk) {
|
|
- set_bit(XPT_CONN, &svsk->sk_xprt.xpt_flags);
|
|
- svc_xprt_enqueue(&svsk->sk_xprt);
|
|
- } else
|
|
- printk("svc: socket %p: no user data\n", sk);
|
|
- }
|
|
+ if (sk->sk_state != TCP_LISTEN)
|
|
+ return;
|
|
+
|
|
+ if (svsk) {
|
|
+ /* Refer to svc_setup_socket() for details. */
|
|
+ rmb();
|
|
+ svsk->sk_odata(sk);
|
|
+ set_bit(XPT_CONN, &svsk->sk_xprt.xpt_flags);
|
|
+ svc_xprt_enqueue(&svsk->sk_xprt);
|
|
+ } else
|
|
+ printk("svc: socket %p: no user data\n", sk);
|
|
}
|
|
|
|
/*
|
|
diff --git a/net/wireless/scan.c b/net/wireless/scan.c
|
|
index c4c124cb5332b..e35c54ba2fd56 100644
|
|
--- a/net/wireless/scan.c
|
|
+++ b/net/wireless/scan.c
|
|
@@ -223,117 +223,152 @@ bool cfg80211_is_element_inherited(const struct element *elem,
|
|
}
|
|
EXPORT_SYMBOL(cfg80211_is_element_inherited);
|
|
|
|
-static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
|
|
- const u8 *subelement, size_t subie_len,
|
|
- u8 *new_ie, gfp_t gfp)
|
|
+static size_t cfg80211_copy_elem_with_frags(const struct element *elem,
|
|
+ const u8 *ie, size_t ie_len,
|
|
+ u8 **pos, u8 *buf, size_t buf_len)
|
|
{
|
|
- u8 *pos, *tmp;
|
|
- const u8 *tmp_old, *tmp_new;
|
|
- const struct element *non_inherit_elem;
|
|
- u8 *sub_copy;
|
|
+ if (WARN_ON((u8 *)elem < ie || elem->data > ie + ie_len ||
|
|
+ elem->data + elem->datalen > ie + ie_len))
|
|
+ return 0;
|
|
|
|
- /* copy subelement as we need to change its content to
|
|
- * mark an ie after it is processed.
|
|
- */
|
|
- sub_copy = kmemdup(subelement, subie_len, gfp);
|
|
- if (!sub_copy)
|
|
+ if (elem->datalen + 2 > buf + buf_len - *pos)
|
|
return 0;
|
|
|
|
- pos = &new_ie[0];
|
|
+ memcpy(*pos, elem, elem->datalen + 2);
|
|
+ *pos += elem->datalen + 2;
|
|
|
|
- /* set new ssid */
|
|
- tmp_new = cfg80211_find_ie(WLAN_EID_SSID, sub_copy, subie_len);
|
|
- if (tmp_new) {
|
|
- memcpy(pos, tmp_new, tmp_new[1] + 2);
|
|
- pos += (tmp_new[1] + 2);
|
|
+ /* Finish if it is not fragmented */
|
|
+ if (elem->datalen != 255)
|
|
+ return *pos - buf;
|
|
+
|
|
+ ie_len = ie + ie_len - elem->data - elem->datalen;
|
|
+ ie = (const u8 *)elem->data + elem->datalen;
|
|
+
|
|
+ for_each_element(elem, ie, ie_len) {
|
|
+ if (elem->id != WLAN_EID_FRAGMENT)
|
|
+ break;
|
|
+
|
|
+ if (elem->datalen + 2 > buf + buf_len - *pos)
|
|
+ return 0;
|
|
+
|
|
+ memcpy(*pos, elem, elem->datalen + 2);
|
|
+ *pos += elem->datalen + 2;
|
|
+
|
|
+ if (elem->datalen != 255)
|
|
+ break;
|
|
}
|
|
|
|
- /* get non inheritance list if exists */
|
|
- non_inherit_elem =
|
|
- cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
|
|
- sub_copy, subie_len);
|
|
+ return *pos - buf;
|
|
+}
|
|
|
|
- /* go through IEs in ie (skip SSID) and subelement,
|
|
- * merge them into new_ie
|
|
+static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
|
|
+ const u8 *subie, size_t subie_len,
|
|
+ u8 *new_ie, size_t new_ie_len)
|
|
+{
|
|
+ const struct element *non_inherit_elem, *parent, *sub;
|
|
+ u8 *pos = new_ie;
|
|
+ u8 id, ext_id;
|
|
+ unsigned int match_len;
|
|
+
|
|
+ non_inherit_elem = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
|
|
+ subie, subie_len);
|
|
+
|
|
+ /* We copy the elements one by one from the parent to the generated
|
|
+ * elements.
|
|
+ * If they are not inherited (included in subie or in the non
|
|
+ * inheritance element), then we copy all occurrences the first time
|
|
+ * we see this element type.
|
|
*/
|
|
- tmp_old = cfg80211_find_ie(WLAN_EID_SSID, ie, ielen);
|
|
- tmp_old = (tmp_old) ? tmp_old + tmp_old[1] + 2 : ie;
|
|
-
|
|
- while (tmp_old + 2 - ie <= ielen &&
|
|
- tmp_old + tmp_old[1] + 2 - ie <= ielen) {
|
|
- if (tmp_old[0] == 0) {
|
|
- tmp_old++;
|
|
+ for_each_element(parent, ie, ielen) {
|
|
+ if (parent->id == WLAN_EID_FRAGMENT)
|
|
continue;
|
|
+
|
|
+ if (parent->id == WLAN_EID_EXTENSION) {
|
|
+ if (parent->datalen < 1)
|
|
+ continue;
|
|
+
|
|
+ id = WLAN_EID_EXTENSION;
|
|
+ ext_id = parent->data[0];
|
|
+ match_len = 1;
|
|
+ } else {
|
|
+ id = parent->id;
|
|
+ match_len = 0;
|
|
}
|
|
|
|
- if (tmp_old[0] == WLAN_EID_EXTENSION)
|
|
- tmp = (u8 *)cfg80211_find_ext_ie(tmp_old[2], sub_copy,
|
|
- subie_len);
|
|
- else
|
|
- tmp = (u8 *)cfg80211_find_ie(tmp_old[0], sub_copy,
|
|
- subie_len);
|
|
+ /* Find first occurrence in subie */
|
|
+ sub = cfg80211_find_elem_match(id, subie, subie_len,
|
|
+ &ext_id, match_len, 0);
|
|
|
|
- if (!tmp) {
|
|
- const struct element *old_elem = (void *)tmp_old;
|
|
+ /* Copy from parent if not in subie and inherited */
|
|
+ if (!sub &&
|
|
+ cfg80211_is_element_inherited(parent, non_inherit_elem)) {
|
|
+ if (!cfg80211_copy_elem_with_frags(parent,
|
|
+ ie, ielen,
|
|
+ &pos, new_ie,
|
|
+ new_ie_len))
|
|
+ return 0;
|
|
|
|
- /* ie in old ie but not in subelement */
|
|
- if (cfg80211_is_element_inherited(old_elem,
|
|
- non_inherit_elem)) {
|
|
- memcpy(pos, tmp_old, tmp_old[1] + 2);
|
|
- pos += tmp_old[1] + 2;
|
|
- }
|
|
- } else {
|
|
- /* ie in transmitting ie also in subelement,
|
|
- * copy from subelement and flag the ie in subelement
|
|
- * as copied (by setting eid field to WLAN_EID_SSID,
|
|
- * which is skipped anyway).
|
|
- * For vendor ie, compare OUI + type + subType to
|
|
- * determine if they are the same ie.
|
|
- */
|
|
- if (tmp_old[0] == WLAN_EID_VENDOR_SPECIFIC) {
|
|
- if (tmp_old[1] >= 5 && tmp[1] >= 5 &&
|
|
- !memcmp(tmp_old + 2, tmp + 2, 5)) {
|
|
- /* same vendor ie, copy from
|
|
- * subelement
|
|
- */
|
|
- memcpy(pos, tmp, tmp[1] + 2);
|
|
- pos += tmp[1] + 2;
|
|
- tmp[0] = WLAN_EID_SSID;
|
|
- } else {
|
|
- memcpy(pos, tmp_old, tmp_old[1] + 2);
|
|
- pos += tmp_old[1] + 2;
|
|
- }
|
|
- } else {
|
|
- /* copy ie from subelement into new ie */
|
|
- memcpy(pos, tmp, tmp[1] + 2);
|
|
- pos += tmp[1] + 2;
|
|
- tmp[0] = WLAN_EID_SSID;
|
|
- }
|
|
+ continue;
|
|
}
|
|
|
|
- if (tmp_old + tmp_old[1] + 2 - ie == ielen)
|
|
- break;
|
|
+ /* Already copied if an earlier element had the same type */
|
|
+ if (cfg80211_find_elem_match(id, ie, (u8 *)parent - ie,
|
|
+ &ext_id, match_len, 0))
|
|
+ continue;
|
|
|
|
- tmp_old += tmp_old[1] + 2;
|
|
+ /* Not inheriting, copy all similar elements from subie */
|
|
+ while (sub) {
|
|
+ if (!cfg80211_copy_elem_with_frags(sub,
|
|
+ subie, subie_len,
|
|
+ &pos, new_ie,
|
|
+ new_ie_len))
|
|
+ return 0;
|
|
+
|
|
+ sub = cfg80211_find_elem_match(id,
|
|
+ sub->data + sub->datalen,
|
|
+ subie_len + subie -
|
|
+ (sub->data +
|
|
+ sub->datalen),
|
|
+ &ext_id, match_len, 0);
|
|
+ }
|
|
}
|
|
|
|
- /* go through subelement again to check if there is any ie not
|
|
- * copied to new ie, skip ssid, capability, bssid-index ie
|
|
+ /* The above misses elements that are included in subie but not in the
|
|
+ * parent, so do a pass over subie and append those.
|
|
+ * Skip the non-tx BSSID caps and non-inheritance element.
|
|
*/
|
|
- tmp_new = sub_copy;
|
|
- while (tmp_new + 2 - sub_copy <= subie_len &&
|
|
- tmp_new + tmp_new[1] + 2 - sub_copy <= subie_len) {
|
|
- if (!(tmp_new[0] == WLAN_EID_NON_TX_BSSID_CAP ||
|
|
- tmp_new[0] == WLAN_EID_SSID)) {
|
|
- memcpy(pos, tmp_new, tmp_new[1] + 2);
|
|
- pos += tmp_new[1] + 2;
|
|
+ for_each_element(sub, subie, subie_len) {
|
|
+ if (sub->id == WLAN_EID_NON_TX_BSSID_CAP)
|
|
+ continue;
|
|
+
|
|
+ if (sub->id == WLAN_EID_FRAGMENT)
|
|
+ continue;
|
|
+
|
|
+ if (sub->id == WLAN_EID_EXTENSION) {
|
|
+ if (sub->datalen < 1)
|
|
+ continue;
|
|
+
|
|
+ id = WLAN_EID_EXTENSION;
|
|
+ ext_id = sub->data[0];
|
|
+ match_len = 1;
|
|
+
|
|
+ if (ext_id == WLAN_EID_EXT_NON_INHERITANCE)
|
|
+ continue;
|
|
+ } else {
|
|
+ id = sub->id;
|
|
+ match_len = 0;
|
|
}
|
|
- if (tmp_new + tmp_new[1] + 2 - sub_copy == subie_len)
|
|
- break;
|
|
- tmp_new += tmp_new[1] + 2;
|
|
+
|
|
+ /* Processed if one was included in the parent */
|
|
+ if (cfg80211_find_elem_match(id, ie, ielen,
|
|
+ &ext_id, match_len, 0))
|
|
+ continue;
|
|
+
|
|
+ if (!cfg80211_copy_elem_with_frags(sub, subie, subie_len,
|
|
+ &pos, new_ie, new_ie_len))
|
|
+ return 0;
|
|
}
|
|
|
|
- kfree(sub_copy);
|
|
return pos - new_ie;
|
|
}
|
|
|
|
@@ -1659,7 +1694,7 @@ static void cfg80211_parse_mbssid_data(struct wiphy *wiphy,
|
|
new_ie_len = cfg80211_gen_new_ie(ie, ielen,
|
|
profile,
|
|
profile_len, new_ie,
|
|
- gfp);
|
|
+ IEEE80211_MAX_DATA_LEN);
|
|
if (!new_ie_len)
|
|
continue;
|
|
|
|
diff --git a/net/wireless/wext-core.c b/net/wireless/wext-core.c
|
|
index 76a80a41615be..a57f54bc0e1a7 100644
|
|
--- a/net/wireless/wext-core.c
|
|
+++ b/net/wireless/wext-core.c
|
|
@@ -796,6 +796,12 @@ static int ioctl_standard_iw_point(struct iw_point *iwp, unsigned int cmd,
|
|
}
|
|
}
|
|
|
|
+ /* Sanity-check to ensure we never end up _allocating_ zero
|
|
+ * bytes of data for extra.
|
|
+ */
|
|
+ if (extra_size <= 0)
|
|
+ return -EFAULT;
|
|
+
|
|
/* kzalloc() ensures NULL-termination for essid_compat. */
|
|
extra = kzalloc(extra_size, GFP_KERNEL);
|
|
if (!extra)
|
|
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
|
|
index 2bc0d6e3e124c..d04a2345bc3f5 100644
|
|
--- a/net/xdp/xsk.c
|
|
+++ b/net/xdp/xsk.c
|
|
@@ -613,6 +613,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
|
|
struct sock *sk = sock->sk;
|
|
struct xdp_sock *xs = xdp_sk(sk);
|
|
struct net_device *dev;
|
|
+ int bound_dev_if;
|
|
u32 flags, qid;
|
|
int err = 0;
|
|
|
|
@@ -626,6 +627,10 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
|
|
XDP_USE_NEED_WAKEUP))
|
|
return -EINVAL;
|
|
|
|
+ bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
|
|
+ if (bound_dev_if && bound_dev_if != sxdp->sxdp_ifindex)
|
|
+ return -EINVAL;
|
|
+
|
|
rtnl_lock();
|
|
mutex_lock(&xs->mutex);
|
|
if (xs->state != XSK_READY) {
|
|
diff --git a/samples/bpf/tcp_basertt_kern.c b/samples/bpf/tcp_basertt_kern.c
|
|
index 9dba48c2b9207..66dd58f78d528 100644
|
|
--- a/samples/bpf/tcp_basertt_kern.c
|
|
+++ b/samples/bpf/tcp_basertt_kern.c
|
|
@@ -47,7 +47,7 @@ int bpf_basertt(struct bpf_sock_ops *skops)
|
|
case BPF_SOCK_OPS_BASE_RTT:
|
|
n = bpf_getsockopt(skops, SOL_TCP, TCP_CONGESTION,
|
|
cong, sizeof(cong));
|
|
- if (!n && !__builtin_memcmp(cong, nv, sizeof(nv)+1)) {
|
|
+ if (!n && !__builtin_memcmp(cong, nv, sizeof(nv))) {
|
|
/* Set base_rtt to 80us */
|
|
rv = 80;
|
|
} else if (n) {
|
|
diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
|
|
index e5aeaf72dcdb8..53e276bb24acd 100644
|
|
--- a/scripts/mod/modpost.c
|
|
+++ b/scripts/mod/modpost.c
|
|
@@ -1325,6 +1325,10 @@ static Elf_Sym *find_elf_symbol(struct elf_info *elf, Elf64_Sword addr,
|
|
if (relsym->st_name != 0)
|
|
return relsym;
|
|
|
|
+ /*
|
|
+ * Strive to find a better symbol name, but the resulting name may not
|
|
+ * match the symbol referenced in the original code.
|
|
+ */
|
|
relsym_secindex = get_secindex(elf, relsym);
|
|
for (sym = elf->symtab_start; sym < elf->symtab_stop; sym++) {
|
|
if (get_secindex(elf, sym) != relsym_secindex)
|
|
@@ -1629,7 +1633,7 @@ static void default_mismatch_handler(const char *modname, struct elf_info *elf,
|
|
|
|
static int is_executable_section(struct elf_info* elf, unsigned int section_index)
|
|
{
|
|
- if (section_index > elf->num_sections)
|
|
+ if (section_index >= elf->num_sections)
|
|
fatal("section_index is outside elf->num_sections!\n");
|
|
|
|
return ((elf->sechdrs[section_index].sh_flags & SHF_EXECINSTR) == SHF_EXECINSTR);
|
|
@@ -1808,19 +1812,33 @@ static int addend_386_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r)
|
|
#define R_ARM_THM_JUMP19 51
|
|
#endif
|
|
|
|
+static int32_t sign_extend32(int32_t value, int index)
|
|
+{
|
|
+ uint8_t shift = 31 - index;
|
|
+
|
|
+ return (int32_t)(value << shift) >> shift;
|
|
+}
|
|
+
|
|
static int addend_arm_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r)
|
|
{
|
|
unsigned int r_typ = ELF_R_TYPE(r->r_info);
|
|
+ Elf_Sym *sym = elf->symtab_start + ELF_R_SYM(r->r_info);
|
|
+ void *loc = reloc_location(elf, sechdr, r);
|
|
+ uint32_t inst;
|
|
+ int32_t offset;
|
|
|
|
switch (r_typ) {
|
|
case R_ARM_ABS32:
|
|
- /* From ARM ABI: (S + A) | T */
|
|
- r->r_addend = (int)(long)
|
|
- (elf->symtab_start + ELF_R_SYM(r->r_info));
|
|
+ inst = TO_NATIVE(*(uint32_t *)loc);
|
|
+ r->r_addend = inst + sym->st_value;
|
|
break;
|
|
case R_ARM_PC24:
|
|
case R_ARM_CALL:
|
|
case R_ARM_JUMP24:
|
|
+ inst = TO_NATIVE(*(uint32_t *)loc);
|
|
+ offset = sign_extend32((inst & 0x00ffffff) << 2, 25);
|
|
+ r->r_addend = offset + sym->st_value + 8;
|
|
+ break;
|
|
case R_ARM_THM_CALL:
|
|
case R_ARM_THM_JUMP24:
|
|
case R_ARM_THM_JUMP19:
|
|
diff --git a/scripts/tags.sh b/scripts/tags.sh
|
|
index 4e18ae5282a69..f667527ddff5c 100755
|
|
--- a/scripts/tags.sh
|
|
+++ b/scripts/tags.sh
|
|
@@ -28,6 +28,13 @@ fi
|
|
# ignore userspace tools
|
|
ignore="$ignore ( -path ${tree}tools ) -prune -o"
|
|
|
|
+# gtags(1) refuses to index any file outside of its current working dir.
|
|
+# If gtags indexing is requested and the build output directory is not
|
|
+# the kernel source tree, index all files in absolute-path form.
|
|
+if [[ "$1" == "gtags" && -n "${tree}" ]]; then
|
|
+ tree=$(realpath "$tree")/
|
|
+fi
|
|
+
|
|
# Detect if ALLSOURCE_ARCHS is set. If not, we assume SRCARCH
|
|
if [ "${ALLSOURCE_ARCHS}" = "" ]; then
|
|
ALLSOURCE_ARCHS=${SRCARCH}
|
|
@@ -134,7 +141,7 @@ docscope()
|
|
|
|
dogtags()
|
|
{
|
|
- all_target_sources | gtags -i -f -
|
|
+ all_target_sources | gtags -i -C "${tree:-.}" -f - "$PWD"
|
|
}
|
|
|
|
# Basic regular expressions with an optional /kind-spec/ for ctags and
|
|
diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
|
|
index b82291d10e730..cc7e4e4439b0f 100644
|
|
--- a/security/integrity/evm/evm_main.c
|
|
+++ b/security/integrity/evm/evm_main.c
|
|
@@ -471,7 +471,9 @@ void evm_inode_post_removexattr(struct dentry *dentry, const char *xattr_name)
|
|
|
|
/**
|
|
* evm_inode_setattr - prevent updating an invalid EVM extended attribute
|
|
+ * @idmap: idmap of the mount
|
|
* @dentry: pointer to the affected dentry
|
|
+ * @attr: iattr structure containing the new file attributes
|
|
*
|
|
* Permit update of file attributes when files have a valid EVM signature,
|
|
* except in the case of them having an immutable portable signature.
|
|
diff --git a/security/integrity/iint.c b/security/integrity/iint.c
|
|
index 0b9cb639a0ed0..ff37143000b4c 100644
|
|
--- a/security/integrity/iint.c
|
|
+++ b/security/integrity/iint.c
|
|
@@ -43,12 +43,10 @@ static struct integrity_iint_cache *__integrity_iint_find(struct inode *inode)
|
|
else if (inode > iint->inode)
|
|
n = n->rb_right;
|
|
else
|
|
- break;
|
|
+ return iint;
|
|
}
|
|
- if (!n)
|
|
- return NULL;
|
|
|
|
- return iint;
|
|
+ return NULL;
|
|
}
|
|
|
|
/*
|
|
@@ -121,10 +119,15 @@ struct integrity_iint_cache *integrity_inode_get(struct inode *inode)
|
|
parent = *p;
|
|
test_iint = rb_entry(parent, struct integrity_iint_cache,
|
|
rb_node);
|
|
- if (inode < test_iint->inode)
|
|
+ if (inode < test_iint->inode) {
|
|
p = &(*p)->rb_left;
|
|
- else
|
|
+ } else if (inode > test_iint->inode) {
|
|
p = &(*p)->rb_right;
|
|
+ } else {
|
|
+ write_unlock(&integrity_iint_lock);
|
|
+ kmem_cache_free(iint_cache, iint);
|
|
+ return test_iint;
|
|
+ }
|
|
}
|
|
|
|
iint->inode = inode;
|
|
diff --git a/security/integrity/ima/ima_modsig.c b/security/integrity/ima/ima_modsig.c
|
|
index d106885cc4955..5fb971efc6e10 100644
|
|
--- a/security/integrity/ima/ima_modsig.c
|
|
+++ b/security/integrity/ima/ima_modsig.c
|
|
@@ -109,6 +109,9 @@ int ima_read_modsig(enum ima_hooks func, const void *buf, loff_t buf_len,
|
|
|
|
/**
|
|
* ima_collect_modsig - Calculate the file hash without the appended signature.
|
|
+ * @modsig: parsed module signature
|
|
+ * @buf: data to verify the signature on
|
|
+ * @size: data size
|
|
*
|
|
* Since the modsig is part of the file contents, the hash used in its signature
|
|
* isn't the same one ordinarily calculated by IMA. Therefore PKCS7 code
|
|
diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
|
|
index 6df0436462ab7..e749403f07a8b 100644
|
|
--- a/security/integrity/ima/ima_policy.c
|
|
+++ b/security/integrity/ima/ima_policy.c
|
|
@@ -500,6 +500,7 @@ static int get_subaction(struct ima_rule_entry *rule, enum ima_hooks func)
|
|
* @secid: LSM secid of the task to be validated
|
|
* @func: IMA hook identifier
|
|
* @mask: requested action (MAY_READ | MAY_WRITE | MAY_APPEND | MAY_EXEC)
|
|
+ * @flags: IMA actions to consider (e.g. IMA_MEASURE | IMA_APPRAISE)
|
|
* @pcr: set the pcr to extend
|
|
* @template_desc: the template that should be used for this rule
|
|
*
|
|
@@ -1266,7 +1267,7 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
|
|
|
|
/**
|
|
* ima_parse_add_rule - add a rule to ima_policy_rules
|
|
- * @rule - ima measurement policy rule
|
|
+ * @rule: ima measurement policy rule
|
|
*
|
|
* Avoid locking by allowing just one writer at a time in ima_write_policy()
|
|
* Returns the length of the rule parsed, an error code on failure
|
|
diff --git a/sound/core/jack.c b/sound/core/jack.c
|
|
index e7ac82d468216..c2022b13fddc9 100644
|
|
--- a/sound/core/jack.c
|
|
+++ b/sound/core/jack.c
|
|
@@ -364,6 +364,7 @@ void snd_jack_report(struct snd_jack *jack, int status)
|
|
{
|
|
struct snd_jack_kctl *jack_kctl;
|
|
#ifdef CONFIG_SND_JACK_INPUT_DEV
|
|
+ struct input_dev *idev;
|
|
int i;
|
|
#endif
|
|
|
|
@@ -375,30 +376,28 @@ void snd_jack_report(struct snd_jack *jack, int status)
|
|
status & jack_kctl->mask_bits);
|
|
|
|
#ifdef CONFIG_SND_JACK_INPUT_DEV
|
|
- mutex_lock(&jack->input_dev_lock);
|
|
- if (!jack->input_dev) {
|
|
- mutex_unlock(&jack->input_dev_lock);
|
|
+ idev = input_get_device(jack->input_dev);
|
|
+ if (!idev)
|
|
return;
|
|
- }
|
|
|
|
for (i = 0; i < ARRAY_SIZE(jack->key); i++) {
|
|
int testbit = SND_JACK_BTN_0 >> i;
|
|
|
|
if (jack->type & testbit)
|
|
- input_report_key(jack->input_dev, jack->key[i],
|
|
+ input_report_key(idev, jack->key[i],
|
|
status & testbit);
|
|
}
|
|
|
|
for (i = 0; i < ARRAY_SIZE(jack_switch_types); i++) {
|
|
int testbit = 1 << i;
|
|
if (jack->type & testbit)
|
|
- input_report_switch(jack->input_dev,
|
|
+ input_report_switch(idev,
|
|
jack_switch_types[i],
|
|
status & testbit);
|
|
}
|
|
|
|
- input_sync(jack->input_dev);
|
|
- mutex_unlock(&jack->input_dev_lock);
|
|
+ input_sync(idev);
|
|
+ input_put_device(idev);
|
|
#endif /* CONFIG_SND_JACK_INPUT_DEV */
|
|
}
|
|
EXPORT_SYMBOL(snd_jack_report);
|
|
diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
|
|
index 83bb086bf9757..b920c739d6863 100644
|
|
--- a/sound/pci/ac97/ac97_codec.c
|
|
+++ b/sound/pci/ac97/ac97_codec.c
|
|
@@ -2006,8 +2006,8 @@ int snd_ac97_mixer(struct snd_ac97_bus *bus, struct snd_ac97_template *template,
|
|
.dev_disconnect = snd_ac97_dev_disconnect,
|
|
};
|
|
|
|
- if (rac97)
|
|
- *rac97 = NULL;
|
|
+ if (!rac97)
|
|
+ return -EINVAL;
|
|
if (snd_BUG_ON(!bus || !template))
|
|
return -EINVAL;
|
|
if (snd_BUG_ON(template->num >= 4))
|
|
diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
index efeffa0bf2d78..131f41cccbe65 100644
--- a/sound/soc/codecs/es8316.c
+++ b/sound/soc/codecs/es8316.c
@@ -52,7 +52,12 @@ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(dac_vol_tlv, -9600, 50, 1);
static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(adc_vol_tlv, -9600, 50, 1);
static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_max_gain_tlv, -650, 150, 0);
static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_min_gain_tlv, -1200, 150, 0);
-static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_target_tlv, -1650, 150, 0);
+
+static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(alc_target_tlv,
+ 0, 10, TLV_DB_SCALE_ITEM(-1650, 150, 0),
+ 11, 11, TLV_DB_SCALE_ITEM(-150, 0, 0),
+);
+
static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(hpmixer_gain_tlv,
0, 4, TLV_DB_SCALE_ITEM(-1200, 150, 0),
8, 11, TLV_DB_SCALE_ITEM(-450, 150, 0),
@@ -115,7 +120,7 @@ static const struct snd_kcontrol_new es8316_snd_controls[] = {
alc_max_gain_tlv),
SOC_SINGLE_TLV("ALC Capture Min Volume", ES8316_ADC_ALC2, 0, 28, 0,
alc_min_gain_tlv),
- SOC_SINGLE_TLV("ALC Capture Target Volume", ES8316_ADC_ALC3, 4, 10, 0,
+ SOC_SINGLE_TLV("ALC Capture Target Volume", ES8316_ADC_ALC3, 4, 11, 0,
alc_target_tlv),
SOC_SINGLE("ALC Capture Hold Time", ES8316_ADC_ALC3, 0, 10, 0),
SOC_SINGLE("ALC Capture Decay Time", ES8316_ADC_ALC4, 4, 10, 0),
@@ -364,13 +369,11 @@ static int es8316_set_dai_sysclk(struct snd_soc_dai *codec_dai,
int count = 0;

es8316->sysclk = freq;
+ es8316->sysclk_constraints.list = NULL;
+ es8316->sysclk_constraints.count = 0;

- if (freq == 0) {
- es8316->sysclk_constraints.list = NULL;
- es8316->sysclk_constraints.count = 0;
-
+ if (freq == 0)
return 0;
- }

ret = clk_set_rate(es8316->mclk, freq);
if (ret)
@@ -386,8 +389,10 @@ static int es8316_set_dai_sysclk(struct snd_soc_dai *codec_dai,
es8316->allowed_rates[count++] = freq / ratio;
}

- es8316->sysclk_constraints.list = es8316->allowed_rates;
- es8316->sysclk_constraints.count = count;
+ if (count) {
+ es8316->sysclk_constraints.list = es8316->allowed_rates;
+ es8316->sysclk_constraints.count = count;
+ }

return 0;
}
diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
index 71590ca6394b9..08c044a72250a 100644
--- a/sound/soc/fsl/imx-audmix.c
+++ b/sound/soc/fsl/imx-audmix.c
@@ -230,6 +230,8 @@ static int imx_audmix_probe(struct platform_device *pdev)

dai_name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s%s",
fe_name_pref, args.np->full_name + 1);
+ if (!dai_name)
+ return -ENOMEM;

dev_info(pdev->dev.parent, "DAI FE name:%s\n", dai_name);

@@ -238,6 +240,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
capture_dai_name =
devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s %s",
dai_name, "CPU-Capture");
+ if (!capture_dai_name)
+ return -ENOMEM;
}

priv->dai[i].cpus = &dlc[0];
@@ -268,6 +272,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
"AUDMIX-Playback-%d", i);
be_cp = devm_kasprintf(&pdev->dev, GFP_KERNEL,
"AUDMIX-Capture-%d", i);
+ if (!be_name || !be_pb || !be_cp)
+ return -ENOMEM;

priv->dai[num_dai + i].cpus = &dlc[3];
priv->dai[num_dai + i].codecs = &dlc[4];
@@ -295,6 +301,9 @@ static int imx_audmix_probe(struct platform_device *pdev)
priv->dapm_routes[i].source =
devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s %s",
dai_name, "CPU-Playback");
+ if (!priv->dapm_routes[i].source)
+ return -ENOMEM;
+
priv->dapm_routes[i].sink = be_pb;
priv->dapm_routes[num_dai + i].source = be_pb;
priv->dapm_routes[num_dai + i].sink = be_cp;
diff --git a/tools/perf/tests/shell/test_uprobe_from_different_cu.sh b/tools/perf/tests/shell/test_uprobe_from_different_cu.sh
new file mode 100644
index 0000000000000..00d2e0e2e0c28
--- /dev/null
+++ b/tools/perf/tests/shell/test_uprobe_from_different_cu.sh
@@ -0,0 +1,77 @@
+#!/bin/bash
+# test perf probe of function from different CU
+# SPDX-License-Identifier: GPL-2.0
+
+set -e
+
+temp_dir=$(mktemp -d /tmp/perf-uprobe-different-cu-sh.XXXXXXXXXX)
+
+cleanup()
+{
+ trap - EXIT TERM INT
+ if [[ "${temp_dir}" =~ ^/tmp/perf-uprobe-different-cu-sh.*$ ]]; then
+ echo "--- Cleaning up ---"
+ perf probe -x ${temp_dir}/testfile -d foo
+ rm -f "${temp_dir}/"*
+ rmdir "${temp_dir}"
+ fi
+}
+
+trap_cleanup()
+{
+ cleanup
+ exit 1
+}
+
+trap trap_cleanup EXIT TERM INT
+
+cat > ${temp_dir}/testfile-foo.h << EOF
+struct t
+{
+ int *p;
+ int c;
+};
+
+extern int foo (int i, struct t *t);
+EOF
+
+cat > ${temp_dir}/testfile-foo.c << EOF
+#include "testfile-foo.h"
+
+int
+foo (int i, struct t *t)
+{
+ int j, res = 0;
+ for (j = 0; j < i && j < t->c; j++)
+ res += t->p[j];
+
+ return res;
+}
+EOF
+
+cat > ${temp_dir}/testfile-main.c << EOF
+#include "testfile-foo.h"
+
+static struct t g;
+
+int
+main (int argc, char **argv)
+{
+ int i;
+ int j[argc];
+ g.c = argc;
+ g.p = j;
+ for (i = 0; i < argc; i++)
+ j[i] = (int) argv[i][0];
+ return foo (3, &g);
+}
+EOF
+
+gcc -g -Og -flto -c ${temp_dir}/testfile-foo.c -o ${temp_dir}/testfile-foo.o
+gcc -g -Og -c ${temp_dir}/testfile-main.c -o ${temp_dir}/testfile-main.o
+gcc -g -Og -o ${temp_dir}/testfile ${temp_dir}/testfile-foo.o ${temp_dir}/testfile-main.o
+
+perf probe -x ${temp_dir}/testfile --funcs foo
+perf probe -x ${temp_dir}/testfile foo
+
+cleanup
diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c
index f1e2f566ce6fc..1d51aa88f4cb6 100644
--- a/tools/perf/util/dwarf-aux.c
+++ b/tools/perf/util/dwarf-aux.c
@@ -1007,7 +1007,7 @@ int die_get_varname(Dwarf_Die *vr_die, struct strbuf *buf)
ret = die_get_typename(vr_die, buf);
if (ret < 0) {
pr_debug("Failed to get type, make it unknown.\n");
- ret = strbuf_add(buf, " (unknown_type)", 14);
+ ret = strbuf_add(buf, "(unknown_type)", 14);
}

return ret < 0 ? ret : strbuf_addf(buf, "\t%s", dwarf_diename(vr_die));
diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
index 911c549f186fb..3b929e031f59c 100755
--- a/tools/testing/selftests/net/rtnetlink.sh
+++ b/tools/testing/selftests/net/rtnetlink.sh
@@ -833,6 +833,7 @@ EOF
fi

# clean up any leftovers
+ echo 0 > /sys/bus/netdevsim/del_device
$probed && rmmod netdevsim

if [ $ret -ne 0 ]; then
diff --git a/tools/testing/selftests/tc-testing/settings b/tools/testing/selftests/tc-testing/settings
new file mode 100644
index 0000000000000..e2206265f67c7
--- /dev/null
+++ b/tools/testing/selftests/tc-testing/settings
@@ -0,0 +1 @@
+timeout=900