4 Commits

Neil Armstrong
8ec059c5ce interconnect: add support for the SM8650 SoC
Add the SM8650 Interconnect node definitions. This is heavily based
on the Linux driver, without the QoS definitions.
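
Since the tables are derived from the Linux driver, the node
definitions presumably look like the sketch below; the struct and
field names follow the Linux qcom sm8650 interconnect driver, and the
exact names used by this port are an assumption:

  /*
   * Illustrative only: struct and field names follow the Linux
   * drivers/interconnect/qcom/sm8650.c tables; the exact names in
   * this port are an assumption. Note the absence of QoS fields,
   * matching the "without the QoS definitions" note above.
   */
  static struct qcom_icc_node qhm_qspi = {
          .name = "qhm_qspi",
          .id = SM8650_MASTER_QSPI_0,
          .channels = 1,
          .buswidth = 4,
          .num_links = 1,
          .links = { SM8650_SLAVE_A1NOC_SNOC },
  };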

Link: https://patch.msgid.link/20251120-topic-interconnect-next-v5-5-e8a82720da5d@linaro.org
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
2025-11-20 09:17:58 +01:00
Neil Armstrong
591b9e1419 interconnect: add support for the Qualcomm RPMh helpers
Qualcomm SoCs vote for common resources via the RPMh subsystem.

Implement the necessary helpers for Interconnect providers to add the
nodes and vote via the RPMh "BCM" voters, which are vote endpoints for
each SoC subsystem. The APPS (ARM subsystem) has a dedicated endpoint.

A BCM voter aggregates the bandwidth of all the nodes associated
with it, and internally the RPMh will also aggregate the votes from
all the SoC subsystems for the same BCM voter.
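
As a rough illustration of that first aggregation step, here is a
minimal sketch modelled on the Linux bcm_aggregate() helper; the
struct layout and field names are assumptions carried over from the
Linux qcom driver:

  /*
   * Sketch modelled on the Linux bcm_aggregate(): sum the average
   * bandwidth and take the maximum peak across all nodes tied to
   * this BCM, then scale to BCM units before the vote is sent to
   * RPMh. Struct and field names are assumptions.
   */
  static void bcm_aggregate(struct qcom_icc_bcm *bcm)
  {
          u64 agg_avg = 0, agg_peak = 0;
          size_t i;

          for (i = 0; i < bcm->num_nodes; i++) {
                  agg_avg += bcm->nodes[i]->sum_avg;
                  agg_peak = max(agg_peak, bcm->nodes[i]->max_peak);
          }

          /* aux_data.unit is read from the RPMh command DB */
          bcm->vote_x = DIV_ROUND_UP(agg_avg, bcm->aux_data.unit);
          bcm->vote_y = DIV_ROUND_UP(agg_peak, bcm->aux_data.unit);
  }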

Link: https://patch.msgid.link/20251120-topic-interconnect-next-v5-4-e8a82720da5d@linaro.org
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
2025-11-20 09:17:58 +01:00
Neil Armstrong
9ab7163710 interconnect: add DM test suite
Add a test suite exercising the whole lifetime and callbacks of
interconnect, using five fake providers with a split node graph.

The test suite checks that the bandwidth calculations are correct and
reach the correct nodes, and that the node lifetime is handled
correctly.
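
A single case in such a suite would presumably take the usual U-Boot
DM test shape below; the uclass ID, endpoint IDs and icc_* calls are
assumptions (the consumer API is said elsewhere in this series to
match Linux):

  /*
   * Hypothetical shape of one test case: UCLASS_INTERCONNECT, the
   * endpoint IDs and the icc_* consumer calls are assumptions, not
   * confirmed identifiers from this series.
   */
  #include <dm.h>
  #include <dm/test.h>
  #include <test/ut.h>

  static int dm_test_interconnect_bw(struct unit_test_state *uts)
  {
          struct udevice *dev;
          struct icc_path *path;

          ut_assertok(uclass_get_device(UCLASS_INTERCONNECT, 0, &dev));

          path = icc_get(dev, TEST_MASTER_ID, TEST_SLAVE_ID);
          ut_assert(!IS_ERR(path));

          /* Vote, check it propagated, then release the path */
          ut_assertok(icc_set_bw(path, 1000, 2000));
          icc_put(path);

          return 0;
  }
  DM_TEST(dm_test_interconnect_bw, UTF_SCAN_FDT);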

Link: https://patch.msgid.link/20251120-topic-interconnect-next-v5-2-e8a82720da5d@linaro.org
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
2025-11-20 09:17:58 +01:00
Neil Armstrong
60a99d5ca3 Introduce the Generic System Interconnect Subsystem
Let's introduce the Generic System Interconnect subsystem, based on
the counterpart Linux framework, which is used to vote for bandwidth
across multiple SoC buses.

Documentation for the Linux Generic System Interconnect Subsystem can
be found at [1].

Each bus endpoint is materialised as a "node"; nodes are linked
together, and the DT specifies a pair of nodes to enable along with a
bandwidth to set on the route between those endpoints.
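
Assuming the consumer API does mirror Linux's, as the difference list
below notes, a consumer would enable such a route roughly as follows;
the endpoint IDs and bandwidth values are placeholders:

  /*
   * Illustrative consumer usage following the Linux icc_* naming;
   * MASTER_APPSS_PROC / SLAVE_EBI1 and the bandwidth values are
   * placeholders.
   */
  struct icc_path *path;
  int ret;

  path = icc_get(dev, MASTER_APPSS_PROC, SLAVE_EBI1);
  if (IS_ERR(path))
          return PTR_ERR(path);

  /* Request 1 GB/s average, 2 GB/s peak on the route */
  ret = icc_set_bw(path, kBps_to_icc(1000000), kBps_to_icc(2000000));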

The hardware resources that provide those nodes and the means to vote
for the bandwidth are called "providers".

The Interconnect uclass code is heavily based on the Linux one, with
some small differences:
- nodes are allocated as udevices instead of via Linux's idr_alloc()
  (see the sketch after this list)
- tag management is minimal; only normal xlate is supported
- getting node states at probe is not implemented
- providers are probed on demand while the node links are traversed
- nodes are populated on bind
- ID management is simplified; both static and dynamic IDs can be used
- the consumer API is identical to Linux's; only the implementation
  differs
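
To make the first point concrete, a provider's bind step would
presumably walk its node table and bind a child udevice per entry,
roughly as below; the node driver name and descriptor struct are
illustrative assumptions (device_bind_driver() is the standard DM
helper):

  /*
   * Sketch: each node in a provider's table becomes a child udevice
   * at bind time, replacing the Linux idr_alloc() scheme. The
   * icc_node_desc struct and "icc_node" driver name are assumptions.
   */
  struct icc_node_desc {
          const char *name;
          unsigned int id;
  };

  static int icc_provider_bind_nodes(struct udevice *provider,
                                     const struct icc_node_desc *nodes,
                                     int count)
  {
          struct udevice *node_dev;
          int i, ret;

          for (i = 0; i < count; i++) {
                  ret = device_bind_driver(provider, "icc_node",
                                           nodes[i].name, &node_dev);
                  if (ret)
                          return ret;
                  /* per-node plat data (ID, links) would be set here */
          }

          return 0;
  }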

Fully tested with the associated DM test suite.

[1] https://docs.kernel.org/driver-api/interconnect.html

Link: https://patch.msgid.link/20251120-topic-interconnect-next-v5-1-e8a82720da5d@linaro.org
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
2025-11-20 09:17:58 +01:00