- Refactor cluster.json to support internal/external nodes ('controller' and 'storage')
- Bootstrap embedded partitions when 'storage' nodes are not present
- Update onos-gen-config script to generate cluster.json based on environment variables
- Update setup scenario to ignore missing $OCC# environment variables
Change-Id: Ia93b64e13d7a7c35ed712da4c681425e3ccf9fe9
* Upgrade Raft primitives to Atomix 3.0
* Replace cluster store and messaging implementations with Atomix cluster management/messaging
* Add test scripts for installing/starting Atomix cluster
* Replace core primitives with Atomix primitives (see the sketch below)
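For illustration, a minimal sketch of building a distributed primitive, assuming the
application-facing StorageService builder API stays the same across this change and
only the backing implementation moves to Atomix:

    import org.onosproject.store.serializers.KryoNamespaces;
    import org.onosproject.store.service.ConsistentMap;
    import org.onosproject.store.service.Serializer;
    import org.onosproject.store.service.StorageService;

    public class PrimitiveExample {
        // In a real component, StorageService is injected via @Reference.
        private StorageService storageService;

        ConsistentMap<String, String> buildMap() {
            // Same builder calls as before; the returned primitive is now
            // expected to be backed by Atomix 3.0 Raft primitives.
            return storageService.<String, String>consistentMapBuilder()
                    .withName("example-map")
                    .withSerializer(Serializer.using(KryoNamespaces.API))
                    .build();
        }
    }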
Change-Id: I7623653c81292a34f21b01f5f38ca11b5ef15cad
Clone to CPU is available only for packets processed via multicast
groups. This can be changed in the future, once an implementation of the
clone session APIs is available in PI and P4 targets.
Also:
- compile "fabric-full" profile and generate constants from it
- use the interpreter to map logical ports to data plane port IDs (see the sketch below)
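A rough sketch of the kind of logical-to-data-plane port translation the interpreter
performs; the helper name and the mapping values are illustrative only, not the
fabric interpreter's actual code:

    import com.google.common.collect.ImmutableMap;
    import java.util.Map;
    import java.util.Optional;

    // Hypothetical helper: maps ONOS logical port numbers to the port IDs
    // used by the P4 data plane (values are examples only).
    final class PortMappingSketch {
        private static final Map<Long, Integer> LOGICAL_TO_PIPELINE =
                ImmutableMap.of(1L, 0, 2L, 1);

        static Optional<Integer> mapLogicalPort(long logicalPort) {
            return Optional.ofNullable(LOGICAL_TO_PIPELINE.get(logicalPort));
        }
    }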
Change-Id: I7db30c08dcf69ed9c870748cce8a797bbd5d6f78
Assuming we execute two put operations with the same key but different values,
i.e. put {key, value1} and put {key, value2}, both puts should return true,
as sketched below.
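A minimal sketch of the expected semantics, using a local stand-in for the
distributed primitive (hypothetical, for illustration only):

    import java.util.HashMap;
    import java.util.Map;

    // Local stand-in for the distributed map under test; put() reports
    // whether the write was accepted.
    public class PutSemanticsSketch {
        private final Map<String, String> backing = new HashMap<>();

        boolean put(String key, String value) {
            backing.put(key, value);
            return true; // a second put with the same key must also succeed
        }

        public static void main(String[] args) {
            PutSemanticsSketch map = new PutSemanticsSketch();
            boolean first = map.put("key", "value1");
            boolean second = map.put("key", "value2");
            System.out.println(first && second); // expected: true
        }
    }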
Change-Id: Iad8d68fa68e7b4ce37cdd3634d36144aa1b21afe
(cherry picked from commit fb92a5a50b6ce2abe94ca807937207210fd094cb)
This device type is now advertised by the server
device driver. Also, the ONOS UI maps this new device
type to a glyph.
Change-Id: Ib4147676474b43202bbdff595a0fa0520b70fe91
Signed-off-by: Georgios Katsikas <katsikas.gp@gmail.com>
Most notably, we fix a bug in which some nodes were not able to find
pipeconf-specific behaviors for a given device. The problem is not
completely solved, but it is mitigated.
There's a race condition caused by the fact that the GDP updates the cfg
with the merged driver name before advertising the device to the core.
Some nodes might receive the cfg update after the device has been
advertised. We mitigate the problem by performing the pipeline deploy
(slow operation) after the cfg update, giving more time for nodes
to catch up. Perhaps we should listen for cfg update events before
advertising the device to the core, as sketched below?
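One possible shape for that idea, assuming the standard NetworkConfigListener API;
the advertise hook is hypothetical and only shows where the provider would be
notified:

    import org.onosproject.net.DeviceId;
    import org.onosproject.net.config.NetworkConfigEvent;
    import org.onosproject.net.config.NetworkConfigListener;
    import org.onosproject.net.config.basics.BasicDeviceConfig;

    // Sketch: only advertise the device to the core once the cfg update
    // carrying the merged driver name has been observed on this node.
    public class CfgUpdateGate implements NetworkConfigListener {

        @Override
        public boolean isRelevant(NetworkConfigEvent event) {
            return event.configClass() == BasicDeviceConfig.class
                    && event.type() == NetworkConfigEvent.Type.CONFIG_UPDATED;
        }

        @Override
        public void event(NetworkConfigEvent event) {
            DeviceId deviceId = (DeviceId) event.subject();
            advertiseToCore(deviceId); // hypothetical hook into the provider
        }

        private void advertiseToCore(DeviceId deviceId) {
            // e.g. hand off to the device provider service here
        }
    }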
Also:
- Fixed NPE when getting the P4Runtime client
- Detect if a base driver is already merged in pipeconf manager
- Longer timeouts in P4Runtime driver and protocol (for slow networks)
- Configurable timeout in P4Runtime driver and GDP
- Fixed NPE when adding/removing device agent listeners in the P4Runtime handshaker
- Fixed various exceptions due to race conditions in the GDP when disconnecting
devices (by serializing disconnect tasks per device)
- Fixed NPE when cancelling polling tasks in the GDP
- Refactored PipeconfService to distinguish between driver merge,
pipeconf map update, and cfg update (now performed in the GDP)
- Fixed PipeconfManagerTest, which was not testing driver behaviours
- Use Guava striped locks when possible (more memory-efficient than maps,
and with strict atomicity guarantees w.r.t. caches); see the sketch below
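For reference, the striped-lock pattern looks roughly like this (assumed usage of
Guava's Striped, not the exact ONOS code):

    import com.google.common.util.concurrent.Striped;
    import java.util.concurrent.locks.Lock;

    public class StripedLockSketch {
        // A bounded pool of locks shared across all devices, instead of
        // keeping one lock object per device in a map.
        private static final Striped<Lock> LOCKS = Striped.lock(32);

        public static void withDeviceLock(String deviceId, Runnable task) {
            Lock lock = LOCKS.get(deviceId); // same key -> same stripe
            lock.lock();
            try {
                task.run();
            } finally {
                lock.unlock();
            }
        }
    }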
Change-Id: I30f3887541ba0fd44439a86885e9821ac565b64c
The issue was caused by a race condition in the GDP between the first connection
task and the periodic one (which checks the reachability of devices in the cfg).
The issue is fixed by serializing such tasks for the same device.
Moreover, this patch brings better error reporting and handling of
completable futures.
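The per-device serialization mentioned above can be sketched as follows
(illustrative pattern, not the exact GDP code):

    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;

    // Tasks for the same device are chained on a per-device future tail,
    // so they never overlap; tasks for different devices still run freely.
    public class PerDeviceSerializer {
        private final Map<String, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

        public CompletableFuture<Void> submit(String deviceId, Runnable task) {
            return tails.compute(deviceId, (id, tail) ->
                    (tail == null ? CompletableFuture.<Void>completedFuture(null) : tail)
                            .thenRun(task)
                            // Report and swallow errors so one failed task does
                            // not block later tasks for the same device.
                            .exceptionally(ex -> {
                                ex.printStackTrace();
                                return null;
                            }));
            // Note: entries are never removed here; a real implementation
            // would prune completed tails.
        }
    }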
Change-Id: I8c3a685c368541d33395945159b45a5740a5a0c3
The P4Runtime client was hanging (deadlock) on a master arbitration
request. As such, all other requests (e.g. table write) were waiting
for the client's request lock to become available.
Apart from fixing those deadlocks, this patch brings a number of
improvements that together make it possible to run networks of 100+ P4Runtime
devices on a single ONOS instance (previously only ~20 devices).
Includes:
- Asynchronous mastership handling in DeviceHandshaker (as defined in
the P4Runtime and OpenFlow specs)
- Refactored arbitration handling in the P4RuntimeClient
to be consistent with the P4Runtime spec
- Report suspected deadlocks in P4RuntimeClientImpl
- Exploit write errors in P4RuntimeClient to quickly report
channel/mastership errors to upper layers
- Complete all futures with deadlines in P4Runtime driver (see the sketch after this list)
- Dump all tables in one request
- Re-purposed ChannelEvent into DeviceAgentEvent to also carry mastership
response events
- Fixed IntelliJ warnings
- Various code and log clean-ups
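The "complete all futures with deadlines" item can be sketched like this (assumed
pattern; the real driver code may differ):

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    // A pending request future is forcefully completed after a timeout so
    // callers can never block indefinitely on a hung device.
    public class DeadlineSketch {
        private static final ScheduledExecutorService TIMEOUT_EXECUTOR =
                Executors.newSingleThreadScheduledExecutor();

        public static <T> CompletableFuture<T> withDeadline(
                CompletableFuture<T> future, long timeout, TimeUnit unit) {
            TIMEOUT_EXECUTOR.schedule(() -> {
                // No-op if the future already completed normally.
                future.completeExceptionally(
                        new TimeoutException("deadline of " + timeout + " " + unit + " expired"));
            }, timeout, unit);
            return future;
        }
    }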
Change-Id: I9376793a9fe69d8eddf7e8ac2ef0ee4c14fbd198