This commit does not change the order or meaning of any eventbus activity;
it only updates how the plumbing is set up.
Updates #15160
Change-Id: I06860ac4e43952a9bb4d85366138c9d9a17fd9cd
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
For debugging purposes, add a new C2N endpoint returning the current
netmap. Optionally, the coordination server can send a new "candidate" map
response, from which the client will generate a separate netmap. The
coordination server can later compare the two netmaps to detect unexpected
changes to the client state.
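Roughly, the handler could look like the sketch below; the path, wiring, and
response shape are illustrative assumptions, not the actual endpoint:

    package c2nsketch // illustrative only; not the real package layout

    import (
        "encoding/json"
        "net/http"

        "tailscale.com/types/netmap"
    )

    // handleDebugNetmap is a sketch of a C2N handler that serves the client's
    // current netmap as JSON, so the coordination server can diff it against a
    // netmap generated from a "candidate" map response.
    func handleDebugNetmap(current func() *netmap.NetworkMap) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            nm := current()
            if nm == nil {
                http.Error(w, "no netmap yet", http.StatusServiceUnavailable)
                return
            }
            w.Header().Set("Content-Type", "application/json")
            _ = json.NewEncoder(w).Encode(nm)
        }
    }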
Updates tailscale/corp#32095
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
Previously, seamless key renewal was an opt-in feature. Customers had
to set a `seamless-key-renewal` node attribute in their policy file.
This patch enables seamless key renewal by default for all clients.
It includes a `disable-seamless-key-renewal` node attribute we can set
in Control, so we can manage the rollout and disable the feature for
clients with known bugs. This new attribute makes the feature opt-out.
Updates tailscale/corp#31479
Signed-off-by: Alex Chan <alexc@tailscale.com>
This makes things work slightly better over the eventbus.
It also switches ipnlocal to consume the event from the eventbus instead of
using the direct callback.
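The shape of the change is that ipnlocal now drains a subscription rather than
being invoked directly. A minimal illustration with plain channels follows; the
event type and names are stand-ins, not the real eventbus API:

    // Illustrative sketch only: the event type and plumbing are stand-ins
    // for the real eventbus types.
    type stateChange struct{} // fields elided

    // consume replaces a direct callback into ipnlocal with a subscription loop.
    func consume(events <-chan stateChange, done <-chan struct{}, handle func(stateChange)) {
        for {
            select {
            case ev := <-events:
                handle(ev) // formerly a direct callback
            case <-done:
                return
            }
        }
    }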
Updates #15160
Signed-off-by: Claus Lensbøl <claus@tailscale.com>
Instead of waiting for a designated subscription to close as a canary for the
bus being stopped, use the bus Client's own closure signal, added in #17118.
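Schematically, the difference is the following; the types here are stand-ins,
assuming only that the bus Client exposes its own closure signal per #17118:

    // Illustrative stand-ins for the real eventbus types.
    type subscription interface{ Done() <-chan struct{} }
    type busClient interface{ Done() <-chan struct{} }

    // Before: block until a designated canary subscription closes.
    func waitViaCanary(canary subscription) { <-canary.Done() }

    // After: block until the bus client itself reports closure.
    func waitViaClient(c busClient) { <-c.Done() }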
Updates #cleanup
Change-Id: I384ea39f3f1f6a030a6282356f7b5bdcdf8d7102
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
The Tracker was using direct callbacks to ipnlocal. This PR moves those
calls to be triggered via the eventbus.
Additionally, the eventbus is now explicitly closed when tailscaled exits,
and health is now a SubSystem in tsd.
Updates #15160
Signed-off-by: Claus Lensbøl <claus@tailscale.com>
This is a small introduction of the eventbus into controlclient,
communicating mainly with ipnlocal. While ipnlocal is a complicated part
of the codebase, the subscribers here were, from ipnlocal's perspective,
already called asynchronously.
Updates #15160
Signed-off-by: Claus Lensbøl <claus@tailscale.com>
As of this commit (per the issue), the Taildrive code remains where it
was, but in new files that are protected by the new ts_omit_drive
build tag. Future commits will move it.
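A sketch of the build-tag mechanics; the file layout and the taildriveSupported
constant are illustrative, and only the ts_omit_drive tag name comes from this
change:

    // drive.go (sketch): compiled only when Taildrive is included.
    //go:build !ts_omit_drive

    package feature // illustrative package; the real guarded files live with the Taildrive code

    const taildriveSupported = true

    // A counterpart stub file carries the inverse constraint and is selected
    // when building with -tags=ts_omit_drive:
    //
    //   //go:build ts_omit_drive
    //
    //   package feature
    //
    //   const taildriveSupported = false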
Updates #17058
Change-Id: Idf0a51db59e41ae8da6ea2b11d238aefc48b219e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This is step 4 of making syspolicy a build-time feature.
This adds a policyclient.Get() accessor to return the correct
implementation to use: either the real one or the no-op one. (A third
type, a static one for testing, also exists, so in general a
policyclient.Client should be plumbed around rather than fetched via
policyclient.Get whenever possible, especially if tests need to use an
alternate syspolicy.)
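A condensed sketch of the accessor pattern; the lookup method shown is a
stand-in for the real interface's methods:

    // Sketch only; the real policyclient package differs in detail.
    type Client interface {
        // GetBoolean is a stand-in for the interface's policy lookup methods.
        GetBoolean(key string, defaultVal bool) (bool, error)
    }

    // global is set to the real syspolicy-backed client when that feature is
    // linked in; it stays nil in builds that omit syspolicy.
    var global Client

    // Get returns the implementation to use: the real one if registered,
    // otherwise a no-op client that only returns the provided defaults.
    func Get() Client {
        if global != nil {
            return global
        }
        return noPolicyClient{}
    }

    type noPolicyClient struct{}

    func (noPolicyClient) GetBoolean(_ string, def bool) (bool, error) { return def, nil }

Tests that need specific policy values would plumb in the static client rather
than relying on Get().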
Updates #16998
Updates #12614
Change-Id: Iaf19670744a596d5918acfa744f5db4564272978
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Step 4 of N. See earlier commits in the series (via the issue) for the
plan.
This adds the missing methods to policyclient.Client and then uses it
everywhere in ipn/ipnlocal and locks it in with a new dep test.
Still plenty of users of the global syspolicy elsewhere in the tree,
but this is a lot of them.
Updates #16998
Updates #12614
Change-Id: I25b136539ae1eedbcba80124de842970db0ca314
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This is step 2 of ~4, breaking up #14720 into reviewable chunks, with
the aim to make syspolicy be a build-time configurable feature.
Step 1 was #16984.
In this second step, the util/syspolicy/policyclient package is added
with the policyclient.Client interface. This is the interface that's
always present (regardless of build tags), and is what code around the
tree uses to ask syspolicy/MDM questions.
There are two implementations of policyclient.Client for now:
1) NoPolicyClient, which only returns default values.
2) the unexported, temporary 'globalSyspolicy', which is implemented
in terms of the global functions we wish to later eliminate.
This then starts plumbing the policyclient.Client to most callers.
Future changes will plumb it more. When the last of the global-func
callers are gone, we can unexport the global functions and make a
proper policyclient.Client type and constructor in the syspolicy
package, removing the globalSyspolicy impl from tsd.
The final change will sprinkle build tags in a few more places and
lock it in with dependency tests to make sure the dependencies don't
later creep back in.
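The temporary bridge has roughly this shape; apart from the globalSyspolicy
name, the identifiers are illustrative:

    // Sketch of the temporary bridge described above. legacyGetBoolean stands
    // in for the package-level syspolicy function it delegates to.
    type globalSyspolicy struct{}

    func (globalSyspolicy) GetBoolean(key string, def bool) (bool, error) {
        // Implemented in terms of the global function we plan to eliminate.
        return legacyGetBoolean(key, def)
    }

    // legacyGetBoolean is a placeholder for today's global syspolicy lookup.
    func legacyGetBoolean(key string, def bool) (bool, error) { return def, nil }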
Updates #16998
Updates #12614
Change-Id: Ib2c93d15c15c1f2b981464099177cd492d50391c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This is step 1 of ~3, breaking up #14720 into reviewable chunks, with
the aim to make syspolicy be a build-time configurable feature.
In this first (very noisy) step, all the syspolicy string key
constants move to a new constant-only (code-free) package. This will
make future steps more reviewable, without this movement noise.
There are no code or behavior changes here.
The future steps of this series can be seen in #14720: removing global
funcs from syspolicy resolution and using an interface that's plumbed
around instead. Then adding build tags.
Updates #12614
Change-Id: If73bf2c28b9c9b1a408fe868b0b6a25b03eeabd1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Fixes tailscale/corp#26369
The suggested exit node is currently only calculated during a localAPI request.
For older UIs, this wasn't a bad choice: we could just fetch it on-demand when a menu
presented itself. For newer incarnations, however, this is an always-visible field
that needs to react to changes in the suggested exit node's value.
This change recalculates the suggested exit node ID on netmap updates and
broadcasts it on the IPN bus. The localAPI version of this remains intact for the
time being.
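Schematically, the new flow is the following; the notify field and helper names
are stand-ins, not the exact implementation:

    // Illustrative sketch: recompute the suggestion on each netmap update and
    // broadcast on the IPN bus only when it changes. Types are stand-ins.
    type notify struct {
        SuggestedExitNodeID string // hypothetical stand-in for the IPN notify field
    }

    func recalcSuggestedExitNode(suggest func() (string, error), last *string, broadcast func(notify)) {
        id, err := suggest()
        if err != nil || id == *last {
            return // keep the previous suggestion
        }
        *last = id
        broadcast(notify{SuggestedExitNodeID: id})
    }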
Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
Pull the lock-bearing code into a closure, and use a clone rather than a
shallow copy of the hostinfo record.
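The pattern looks roughly like this; the receiver and field names are
stand-ins:

    package sketch // illustrative

    import (
        "sync"

        "tailscale.com/tailcfg"
    )

    type backend struct { // stand-in for the real receiver
        mu       sync.Mutex
        hostinfo *tailcfg.Hostinfo
    }

    // hostinfoSnapshot confines the locked section to a closure and returns a
    // deep clone; a shallow copy would share nested slices and maps.
    func (b *backend) hostinfoSnapshot() *tailcfg.Hostinfo {
        var hi *tailcfg.Hostinfo
        func() {
            b.mu.Lock()
            defer b.mu.Unlock()
            hi = b.hostinfo.Clone()
        }()
        return hi
    }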
Updates #11649
Change-Id: I4f1d42c42ce45e493b204baae0d50b1cbf82b102
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
The early unlock on this branch was required because the "send" method goes on
to acquire the mutex itself. Rather than release the lock just to acquire it
again, call the underlying locked helper directly.
Updates #11649
Change-Id: I50d81864a00150fc41460b7486a9c65655f282f5
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
In places where we are locking the LocalBackend and immediately deferring an
unlock, and where there is no shortcut path in the control flow below the
deferral, we do not need the unlockOnce helper. Replace all these with use of
the lock directly.
Updates #11649
Change-Id: I3e6a7110dfc9ec6c1d38d2585c5367a0d4e76514
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
There are several methods within LocalBackend that used an unusual and
error-prone lock discipline, whereby the caller must hold the backend
mutex on entry but the method releases it on the way out.
In #11650 we added some support code to make this pattern more visible.
Now it is time to eliminate the pattern (at least within this package).
This is intended to produce no semantic changes, though I am relying on
integration tests and careful inspection to achieve that.
To the extent possible I preserved the existing control flow. In a few places,
however, I replaced this with an unlock/lock closure. This means we will
sometimes reacquire a lock only to release it again one frame up the stack, but
these operations are not performance sensitive and the legibility gain seems
worthwhile.
We can probably also pull some of these out into separate methods, but I did
not do that here so as to avoid other variable scope changes that might be hard
to see. I would like to do some more cleanup separately.
As a follow-up, we could also remove the unlockOnce helper, but I did not do
that here either.
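The unlock/lock closure mentioned above has roughly this shape; the names are
stand-ins:

    package sketch // illustrative

    import "sync"

    type backend struct{ mu sync.Mutex } // stand-in for LocalBackend

    // Instead of a method that is entered locked and returns unlocked, the
    // caller keeps symmetric locking and briefly drops the mutex around the
    // one call that must run unlocked (and may reacquire it one frame down).
    func (b *backend) doWork(needsUnlocked func()) {
        b.mu.Lock()
        defer b.mu.Unlock()
        // ... work under the lock ...
        func() {
            b.mu.Unlock()
            defer b.mu.Lock()
            needsUnlocked()
        }()
        // ... more work under the lock ...
    }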
Updates #11649
Change-Id: I4c92d4536eca629cfcd6187528381c33f4d64e20
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
In the components where an event bus is already plumbed through, remove the
exceptions that allow it to be omitted, and update all the tests that relied on
those workarounds so they execute properly.
This change applies only to the places where we're already using the bus; it
does not enforce the existence of a bus in other components (yet).
Updates #15160
Change-Id: Iebb92243caba82b5eb420c49fc3e089a77454f65
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
In #16625, I introduced a mechanism for sending the selected exit node
to Control via tailcfg.Hostinfo.ExitNodeID as part of the MapRequest.
@nickkhyl pointed out that LocalBackend.doSetHostinfoFilterServices
needs to be triggered in order to actually send this update. This
patch adds that command. It also prevents the client from sending
"auto:any" in that field, because that’s not a real exit node ID.
This patch also fills in some missing checks in TestConfigureExitNode.
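The guard is roughly the following; the helper name is illustrative, while the
"auto:any" sentinel comes from this change:

    package sketch // illustrative

    import "tailscale.com/tailcfg"

    // exitNodeIDForHostinfo returns the value to report in
    // tailcfg.Hostinfo.ExitNodeID, suppressing the "auto:any" placeholder,
    // which is not a real exit node ID.
    func exitNodeIDForHostinfo(pref tailcfg.StableNodeID) tailcfg.StableNodeID {
        if pref == "auto:any" {
            return ""
        }
        return pref
    }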
Updates tailscale/corp#30536
Signed-off-by: Simon Law <sfllaw@tailscale.com>
When a client selects a particular exit node, Control may use that as
a signal for deciding other routes.
This patch causes the client to report whenever the current exit node
changes, through tailcfg.Hostinfo.ExitNodeID. It relies on a properly
set ipn.Prefs.ExitNodeID, which should already be resolved by
`tailscale set`.
Updates tailscale/corp#30536
Signed-off-by: Simon Law <sfllaw@tailscale.com>
With auto exit nodes enabled, the client picks exit nodes from the
ones advertised in the network map. Usually, it picks the one with the
highest priority score, but when the top spot is tied, it used to pick
randomly. Then, once it made a selection, it would strongly prefer to
stick with that exit node. It wouldn’t even consider another exit node
unless the client was shut down or the exit node went offline. This is
to prevent flapping, where a client constantly chooses a different
random exit node.
The major problem with this algorithm is that new exit nodes don’t get
selected as often as they should. In fact, clients wouldn’t even move
over if a higher-scoring exit node appeared.
Let’s say that you have an exit node and it’s overloaded. So you spin
up a new exit node, right beside your existing one, in the hopes that
the traffic will be split across them. But because of this strong affinity,
clients stick with the exit node they know and love.
Using rendezvous hashing, we can have different clients spread
their selections equally across their top-scoring exit nodes. When an
exit node shuts down, its clients will spread themselves evenly to
their other equal options. When an exit node starts, a proportional
number of clients will migrate to their new best option.
Read more: https://en.wikipedia.org/wiki/Rendezvous_hashing
The trade-off is that starting up a new exit node may cause some
clients to move over, interrupting their existing network connections.
So this change is only enabled for tailnets with `traffic-steering`
enabled.
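A minimal sketch of the idea; the hash inputs and helper are illustrative, not
the exact implementation:

    package sketch // illustrative

    import (
        "crypto/sha256"
        "encoding/binary"
    )

    // pickExitNode is a minimal rendezvous-hashing sketch: each client hashes
    // its own identity together with each candidate and picks the highest
    // score. Different clients spread roughly evenly across the candidates,
    // and removing a candidate only moves the clients that had chosen it.
    func pickExitNode(selfKey string, candidates []string) string {
        var best string
        var bestScore uint64
        for _, c := range candidates {
            h := sha256.Sum256([]byte(selfKey + "|" + c))
            if score := binary.BigEndian.Uint64(h[:8]); best == "" || score > bestScore {
                best, bestScore = c, score
            }
        }
        return best
    }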
Updates tailscale/corp#29966
Fixes #16551
Signed-off-by: Simon Law <sfllaw@tailscale.com>
@nickkhyl added a peer.Online check to suggestExitNodeUsingDERP, so the same
check should also apply in suggestExitNodeUsingTrafficSteering.
Updates tailscale/corp#29966
Signed-off-by: Simon Law <sfllaw@tailscale.com>
Thanks to @nickkhyl for pointing out that NetMap.Peers doesn’t get
incremental updates since the last full NetMap update. Instead, he
recommends using ipn/ipnlocal.nodeBackend.AppendMatchingPeers.
Updates #cleanup
Signed-off-by: Simon Law <sfllaw@tailscale.com>
When `tailscale exit-node suggest` contacts the LocalAPI for a
suggested exit node, the client consults its netmap for peers that
contain the `suggest-exit-node` peercap. It currently uses a series of
heuristics to determine the exit node to suggest.
When the `traffic-steering` feature flag is enabled on its tailnet,
the client will defer to Control’s priority scores for a particular
peer. These scores, in `tailcfg.Hostinfo.Location.Priority`, were
historically only used for Mullvad exit nodes, but they have now been
extended to score any peer that could host a redundant resource.
Client capability version 119 is the earliest version that understands
these traffic-steering scores. Control tells the client to rely on these
scores by adding `tailcfg.NodeAttrTrafficSteering` to its `AllCaps`.
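Schematically, the selection prefers Control’s scores when the attribute is
present, then breaks ties as in the rendezvous-hashing change above; the types
here are simplified stand-ins for the tailcfg structures:

    type exitCandidate struct {
        StableID string
        Priority int // Control's score, from tailcfg.Hostinfo.Location.Priority
    }

    // topScored returns the candidates sharing the highest priority; only
    // consulted when tailcfg.NodeAttrTrafficSteering is in the node's AllCaps.
    func topScored(cands []exitCandidate) []exitCandidate {
        var best []exitCandidate
        bestScore := 0
        for _, c := range cands {
            switch {
            case len(best) == 0 || c.Priority > bestScore:
                best, bestScore = []exitCandidate{c}, c.Priority
            case c.Priority == bestScore:
                best = append(best, c)
            }
        }
        return best
    }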
Updates tailscale/corp#29966
Signed-off-by: Simon Law <sfllaw@tailscale.com>
If the GUI receives a new exit node ID before the new netmap, it may treat the node as offline or invalid
if the previous netmap didn't include the peer at all, or if the peer was offline or not advertised as an exit node.
This may result in briefly issuing and dismissing a warning, or a similar issue, which isn't ideal.
In this PR, we change the operation order to send the new netmap to clients before selecting the new exit node
and notifying them of the Exit Node change.
Updates tailscale/corp#30252 (an old issue discovered while testing this)
Signed-off-by: Nick Khyl <nickk@tailscale.com>
We already check this for cases where ipn.Prefs.AutoExitNode is configured via syspolicy.
Configuring it directly through EditPrefs should behave the same, so we add a test for that as well.
Additionally, we clarify the implementation and future extensibility in (*LocalBackend).resolveAutoExitNodeLocked,
where the AutoExitNode is actually enforced.
Updates tailscale/corp#29969
Updates #cleanup
Signed-off-by: Nick Khyl <nickk@tailscale.com>
So it can be used from the CLI without importing ipnlocal.
While there, also remove isAutoExitNodeID, a wrapper around parseAutoExitNodeID
that's no longer used.
Updates tailscale/corp#29969
Updates #cleanup
Signed-off-by: Nick Khyl <nickk@tailscale.com>
When the policy setting is enabled, it allows users to override the exit node enforced by the ExitNodeID
or ExitNodeIP policy. It's primarily intended for use when ExitNodeID is set to auto:any, but it can also
be used with specific exit nodes. It does not allow disabling exit node usage entirely.
Once the exit node policy is overridden, it will not be enforced again until the policy changes,
the user connects or disconnects Tailscale, switches profiles, or disables the override.
Updates tailscale/corp#29969
Signed-off-by: Nick Khyl <nickk@tailscale.com>
Now that applySysPolicy is only called by (*LocalBackend).reconcilePrefsLocked,
we can make it a method to avoid passing state via parameters and to support
future extensibility.
Also factor out exit node-specific logic into applyExitNodeSysPolicyLocked.
Updates tailscale/corp#29969
Signed-off-by: Nick Khyl <nickk@tailscale.com>
Now that resolveExitNodeInPrefsLocked is the only caller of setExitNodeID,
and setExitNodeID is the only caller of resolveExitNodeIP, we can restructure
the code with resolveExitNodeInPrefsLocked now calling both
resolveAutoExitNodeLocked and resolveExitNodeIPLocked directly.
This prepares for factoring out resolveAutoExitNodeLocked and related
auto-exit-node logic into an ipnext extension in a future commit.
While there, we also update exit node by IP lookup to use (*nodeBackend).NodeByAddr
and (*nodeBackend).NodeByID instead of iterating over all peers in the most recent netmap.
Updates tailscale/corp#29969
Signed-off-by: Nick Khyl <nickk@tailscale.com>
In this PR, we start passing a LocalAPI actor to (*LocalBackend).Logout to make it subject
to the same access check as disconnects made via tailscale down or the GUI.
We then update the CLI to allow `tailscale logout` to accept a reason, similar to `tailscale down`.
Updates tailscale/corp#26249
Signed-off-by: Nick Khyl <nickk@tailscale.com>
We extract checkEditPrefsAccessLocked, adjustEditPrefsLocked, and onEditPrefsLocked from the EditPrefs
execution path, defining when each step is performed and what behavior is allowed at each stage.
Currently, this is primarily used to support Always On mode, to handle the Exit Node enablement toggle,
and to report prefs edit metrics.
We then use it to enforce Exit Node policy settings by preventing users from setting an exit node
and making EditPrefs return an error when an exit node is restricted by policy. This enforcement is also
extended to the Exit Node toggle.
These changes prepare for supporting Exit Node overrides when permitted by policy and preventing logout
while Always On mode is enabled.
In the future, implementation of these methods can be delegated to ipnext extensions via the feature hooks.
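The resulting execution path is roughly the following; the types and the apply
step are simplified stand-ins, and only the three hook names come from this
change:

    package sketch // illustrative

    import "errors"

    type editRequest struct{ enablesExitNode bool } // stand-in for ipn.MaskedPrefs

    type backend struct{ exitNodeManagedByPolicy bool } // stand-in for LocalBackend

    func (b *backend) checkEditPrefsAccessLocked(r editRequest) error {
        if r.enablesExitNode && b.exitNodeManagedByPolicy {
            return errors.New("exit node is restricted by policy")
        }
        return nil
    }

    func (b *backend) adjustEditPrefsLocked(r *editRequest) {} // massage the edit (e.g. exit node toggle)

    func (b *backend) onEditPrefsLocked(r editRequest) {} // side effects: metrics, notifications, ...

    func (b *backend) editPrefs(r editRequest) error {
        if err := b.checkEditPrefsAccessLocked(r); err != nil {
            return err
        }
        b.adjustEditPrefsLocked(&r)
        // ... apply the (possibly adjusted) edit to the current prefs ...
        b.onEditPrefsLocked(r)
        return nil
    }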
Updates tailscale/corp#29969
Updates tailscale/corp#26249
Signed-off-by: Nick Khyl <nickk@tailscale.com>
We have several places where we call applySysPolicy, suggestExitNodeLocked, and setExitNodeID.
While there are cases where we want to resolve the exit node specifically, such as when network
conditions change or a new netmap is received, we typically need to perform all three steps.
For example, enforcing policy settings may enable auto exit nodes or set an ExitNodeIP,
which in turn requires picking a suggested exit node or resolving the IP to an ID, respectively.
In this PR, we introduce (*LocalBackend).resolveExitNodeInPrefsLocked and (*LocalBackend).reconcilePrefsLocked,
with the latter calling both applySysPolicy and resolveExitNodeInPrefsLocked.
Consolidating these steps into a single extensibility point would also make it easier to support
future hooks registered by ipnext extensions.
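The consolidation looks roughly like this; bodies are elided and the types are
stand-ins:

    type prefs struct{} // stand-in for the prefs being reconciled

    type backend struct{} // stand-in for LocalBackend

    func (b *backend) applySysPolicy(p *prefs)               {} // enforce policy settings
    func (b *backend) resolveExitNodeInPrefsLocked(p *prefs) {} // auto exit node + IP-to-ID resolution

    // reconcilePrefsLocked is the single step callers run instead of invoking
    // the pieces above in varying combinations.
    func (b *backend) reconcilePrefsLocked(p *prefs) {
        b.applySysPolicy(p)
        b.resolveExitNodeInPrefsLocked(p)
    }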
Updates tailscale/corp#29969
Signed-off-by: Nick Khyl <nickk@tailscale.com>
In this PR, we update setExitNodeID to retain the existing exit node if auto exit node is enabled,
the current exit node is allowed by policy, and no suggested exit node is available yet.
Updates tailscale/corp#29969
Signed-off-by: Nick Khyl <nickk@tailscale.com>
Now that (*LocalBackend).suggestExitNodeLocked is never called with a non-current netmap
(the netMap parameter is always nil, indicating that the current netmap should be used),
we can remove the unused parameter.
Additionally, instead of suggestExitNodeLocked passing the most recent full netmap to suggestExitNode,
we now pass the current nodeBackend so it can access peers with delta updates applied.
Finally, with that fixed, we no longer need to skip TestUpdateNetmapDeltaAutoExitNode.
Updates tailscale/corp#29969
Fixes #16455
Signed-off-by: Nick Khyl <nickk@tailscale.com>
In this PR, we add (*LocalBackend).RefreshExitNode, which determines which exit node
to use based on the current prefs and netmap and switches to it if needed. It supports
both scenarios: an exit node specified by IP (rather than ID) that needs to be resolved
once the netmap is ready, as well as auto exit nodes.
We then use it in (*LocalBackend).SetControlClientStatus when the netmap changes,
and wherever (*LocalBackend).pickNewAutoExitNode was previously used.
Updates tailscale/corp#29969
Signed-off-by: Nick Khyl <nickk@tailscale.com>
With this change, policy enforcement and exit node resolution can happen in separate steps,
since enforcement no longer depends on resolving the suggested exit node. This keeps policy
enforcement synchronous (e.g., when switching profiles), while allowing exit node resolution
to be asynchronous on netmap updates, link changes, etc.
Additionally, the new preference will be used to let GUIs and CLIs switch back to "auto" mode
after a manual exit node override, which is necessary for tailscale/corp#29969.
Updates tailscale/corp#29969
Updates #16459
Signed-off-by: Nick Khyl <nickk@tailscale.com>
TestSetControlClientStatusAutoExitNode was broken similarly to TestUpdateNetmapDeltaAutoExitNode:
suggestExitNode didn't previously check the online status of exit nodes, and, like the other test,
it only succeeded because the test itself was also broken.
However, it is easier to fix, as it sends out a full netmap update rather than a delta peer update,
so it doesn't depend on the same refactoring as TestUpdateNetmapDeltaAutoExitNode.
Updates #16455
Updates tailscale/corp#29969
Signed-off-by: Nick Khyl <nickk@tailscale.com>
(*profileManager).CurrentPrefs() is always valid. Additionally, there's no value in cloning
and passing the full ipn.Prefs when editing preferences. Instead, ipn.MaskedPrefs should
only have ExitNodeID set.
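For example, the edit can be expressed as a MaskedPrefs with just the exit node
field masked in; a sketch, assuming the usual ipn.MaskedPrefs shape where only
fields with their corresponding *Set bool are applied:

    package sketch // illustrative

    import (
        "tailscale.com/ipn"
        "tailscale.com/tailcfg"
    )

    // exitNodeEdit builds an edit that touches only ExitNodeID; no full prefs
    // clone is needed.
    func exitNodeEdit(id tailcfg.StableNodeID) *ipn.MaskedPrefs {
        return &ipn.MaskedPrefs{
            Prefs:         ipn.Prefs{ExitNodeID: id},
            ExitNodeIDSet: true,
        }
    }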
Updates tailscale/corp#29969
Signed-off-by: Nick Khyl <nickk@tailscale.com>