diff --git a/doc/source/images/compute-node.png b/doc/source/images/compute-node.png
deleted file mode 100644
index 50efa04f..00000000
Binary files a/doc/source/images/compute-node.png and /dev/null differ
diff --git a/doc/source/images/compute-node.svg b/doc/source/images/compute-node.svg
deleted file mode 100644
index 01b6389f..00000000
--- a/doc/source/images/compute-node.svg
+++ /dev/null
@@ -1,722 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/filtering-broadcast.png b/doc/source/images/filtering-broadcast.png
deleted file mode 100644
index 4c32e802..00000000
Binary files a/doc/source/images/filtering-broadcast.png and /dev/null differ
diff --git a/doc/source/images/filtering-broadcast.svg b/doc/source/images/filtering-broadcast.svg
deleted file mode 100644
index 5a088a77..00000000
--- a/doc/source/images/filtering-broadcast.svg
+++ /dev/null
@@ -1,891 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/filtering-incoming.png b/doc/source/images/filtering-incoming.png
deleted file mode 100644
index af9cf7c4..00000000
Binary files a/doc/source/images/filtering-incoming.png and /dev/null differ
diff --git a/doc/source/images/filtering-incoming.svg b/doc/source/images/filtering-incoming.svg
deleted file mode 100644
index d9811349..00000000
--- a/doc/source/images/filtering-incoming.svg
+++ /dev/null
@@ -1,966 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/filtering-outgoing.png b/doc/source/images/filtering-outgoing.png
deleted file mode 100644
index 25887256..00000000
Binary files a/doc/source/images/filtering-outgoing.png and /dev/null differ
diff --git a/doc/source/images/filtering-outgoing.svg b/doc/source/images/filtering-outgoing.svg
deleted file mode 100644
index 5ece5c77..00000000
--- a/doc/source/images/filtering-outgoing.svg
+++ /dev/null
@@ -1,955 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/internal-gre-tunnel.png b/doc/source/images/internal-gre-tunnel.png
deleted file mode 100644
index 8ef2a2ab..00000000
Binary files a/doc/source/images/internal-gre-tunnel.png and /dev/null differ
diff --git a/doc/source/images/internal-gre-tunnel.svg b/doc/source/images/internal-gre-tunnel.svg
deleted file mode 100644
index 324a70bd..00000000
--- a/doc/source/images/internal-gre-tunnel.svg
+++ /dev/null
@@ -1,1941 +0,0 @@
-
-
\ No newline at end of file
diff --git a/doc/source/images/internal-live-migration.png b/doc/source/images/internal-live-migration.png
deleted file mode 100644
index 75bf9c36..00000000
Binary files a/doc/source/images/internal-live-migration.png and /dev/null differ
diff --git a/doc/source/images/internal-live-migration.svg b/doc/source/images/internal-live-migration.svg
deleted file mode 100644
index 94180aba..00000000
--- a/doc/source/images/internal-live-migration.svg
+++ /dev/null
@@ -1,382 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/internal-quantum-bootup.png b/doc/source/images/internal-quantum-bootup.png
deleted file mode 100644
index 92db629f..00000000
Binary files a/doc/source/images/internal-quantum-bootup.png and /dev/null differ
diff --git a/doc/source/images/internal-quantum-bootup.svg b/doc/source/images/internal-quantum-bootup.svg
deleted file mode 100644
index fe2e23bc..00000000
--- a/doc/source/images/internal-quantum-bootup.svg
+++ /dev/null
@@ -1,411 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/internal-quantum-gre-flow-table.png b/doc/source/images/internal-quantum-gre-flow-table.png
deleted file mode 100644
index 70007264..00000000
Binary files a/doc/source/images/internal-quantum-gre-flow-table.png and /dev/null differ
diff --git a/doc/source/images/internal-quantum-gre-flow-table.svg b/doc/source/images/internal-quantum-gre-flow-table.svg
deleted file mode 100644
index 53504c17..00000000
--- a/doc/source/images/internal-quantum-gre-flow-table.svg
+++ /dev/null
@@ -1,589 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/internal-quantum-instance-create.png b/doc/source/images/internal-quantum-instance-create.png
deleted file mode 100644
index c3b55e00..00000000
Binary files a/doc/source/images/internal-quantum-instance-create.png and /dev/null differ
diff --git a/doc/source/images/internal-quantum-instance-create.svg b/doc/source/images/internal-quantum-instance-create.svg
deleted file mode 100644
index 801e6d4e..00000000
--- a/doc/source/images/internal-quantum-instance-create.svg
+++ /dev/null
@@ -1,481 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/internal-quantum-network-creation.png b/doc/source/images/internal-quantum-network-creation.png
deleted file mode 100644
index 4ff5602b..00000000
Binary files a/doc/source/images/internal-quantum-network-creation.png and /dev/null differ
diff --git a/doc/source/images/internal-quantum-network-creation.svg b/doc/source/images/internal-quantum-network-creation.svg
deleted file mode 100644
index ef1b5031..00000000
--- a/doc/source/images/internal-quantum-network-creation.svg
+++ /dev/null
@@ -1,236 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/internal-quantum-overview.png b/doc/source/images/internal-quantum-overview.png
deleted file mode 100644
index 0df0058d..00000000
Binary files a/doc/source/images/internal-quantum-overview.png and /dev/null differ
diff --git a/doc/source/images/internal-quantum-overview.svg b/doc/source/images/internal-quantum-overview.svg
deleted file mode 100644
index 4ce416d9..00000000
--- a/doc/source/images/internal-quantum-overview.svg
+++ /dev/null
@@ -1,1813 +0,0 @@
-
-
\ No newline at end of file
diff --git a/doc/source/images/internal-tunnel-live-migration-after.png b/doc/source/images/internal-tunnel-live-migration-after.png
deleted file mode 100644
index 1b6219e0..00000000
Binary files a/doc/source/images/internal-tunnel-live-migration-after.png and /dev/null differ
diff --git a/doc/source/images/internal-tunnel-live-migration-after.svg b/doc/source/images/internal-tunnel-live-migration-after.svg
deleted file mode 100644
index 83496893..00000000
--- a/doc/source/images/internal-tunnel-live-migration-after.svg
+++ /dev/null
@@ -1,1497 +0,0 @@
-
-
\ No newline at end of file
diff --git a/doc/source/images/internal-tunnel-live-migration-before.png b/doc/source/images/internal-tunnel-live-migration-before.png
deleted file mode 100644
index 6d53bc43..00000000
Binary files a/doc/source/images/internal-tunnel-live-migration-before.png and /dev/null differ
diff --git a/doc/source/images/internal-tunnel-live-migration-before.svg b/doc/source/images/internal-tunnel-live-migration-before.svg
deleted file mode 100644
index 916d370f..00000000
--- a/doc/source/images/internal-tunnel-live-migration-before.svg
+++ /dev/null
@@ -1,1445 +0,0 @@
-
-
\ No newline at end of file
diff --git a/doc/source/images/internal-tunnel-live-migration-during.png b/doc/source/images/internal-tunnel-live-migration-during.png
deleted file mode 100644
index aa1e9836..00000000
Binary files a/doc/source/images/internal-tunnel-live-migration-during.png and /dev/null differ
diff --git a/doc/source/images/internal-tunnel-live-migration-during.svg b/doc/source/images/internal-tunnel-live-migration-during.svg
deleted file mode 100644
index 0362e66f..00000000
--- a/doc/source/images/internal-tunnel-live-migration-during.svg
+++ /dev/null
@@ -1,1578 +0,0 @@
-
-
\ No newline at end of file
diff --git a/doc/source/images/logical-view.png b/doc/source/images/logical-view.png
deleted file mode 100644
index f3a87bbf..00000000
Binary files a/doc/source/images/logical-view.png and /dev/null differ
diff --git a/doc/source/images/logical-view.svg b/doc/source/images/logical-view.svg
deleted file mode 100644
index 6a6a97a2..00000000
--- a/doc/source/images/logical-view.svg
+++ /dev/null
@@ -1,623 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/mac-learning.png b/doc/source/images/mac-learning.png
deleted file mode 100644
index 8ce3e75c..00000000
Binary files a/doc/source/images/mac-learning.png and /dev/null differ
diff --git a/doc/source/images/mac-learning.svg b/doc/source/images/mac-learning.svg
deleted file mode 100644
index 49f600ea..00000000
--- a/doc/source/images/mac-learning.svg
+++ /dev/null
@@ -1,759 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/minimul-setup.png b/doc/source/images/minimul-setup.png
deleted file mode 100644
index 67f52f78..00000000
Binary files a/doc/source/images/minimul-setup.png and /dev/null differ
diff --git a/doc/source/images/minimul-setup.svg b/doc/source/images/minimul-setup.svg
deleted file mode 100644
index 21319546..00000000
--- a/doc/source/images/minimul-setup.svg
+++ /dev/null
@@ -1,903 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/network-id.svg b/doc/source/images/network-id.svg
deleted file mode 100644
index ad41e97d..00000000
--- a/doc/source/images/network-id.svg
+++ /dev/null
@@ -1,1434 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/physical-view.png b/doc/source/images/physical-view.png
deleted file mode 100644
index 33354379..00000000
Binary files a/doc/source/images/physical-view.png and /dev/null differ
diff --git a/doc/source/images/physical-view.svg b/doc/source/images/physical-view.svg
deleted file mode 100644
index ad41e97d..00000000
--- a/doc/source/images/physical-view.svg
+++ /dev/null
@@ -1,1434 +0,0 @@
-
-
-
-
diff --git a/doc/source/images/trace-route.png b/doc/source/images/trace-route.png
deleted file mode 100644
index 694cf1c9..00000000
Binary files a/doc/source/images/trace-route.png and /dev/null differ
diff --git a/doc/source/images/trace-route.svg b/doc/source/images/trace-route.svg
deleted file mode 100644
index a6ab9696..00000000
--- a/doc/source/images/trace-route.svg
+++ /dev/null
@@ -1,1000 +0,0 @@
-
-
-
-
diff --git a/doc/source/index.rst b/doc/source/index.rst
index f323bb26..3de6f330 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -13,7 +13,6 @@ Contents:
:maxdepth: 2
getting_started.rst
- openstack.rst
developing.rst
configuration.rst
diff --git a/doc/source/internals_l2_isolation.rst b/doc/source/internals_l2_isolation.rst
deleted file mode 100644
index 03e2ad0e..00000000
--- a/doc/source/internals_l2_isolation.rst
+++ /dev/null
@@ -1,193 +0,0 @@
-.. _internals_l2_isolation:
-
-****************
-Ryu L2 isolation
-****************
-This section describes how Ryu cooperates with OpenStack Quantum and
-how its L2 isolation works.
-
-Overview
-========
-Ryu provides a REST API through which the Quantum server passes the
-necessary information. In addition to the usual Quantum management data,
-the Quantum server manages the association of networks (uuid) to actual
-key values.
-(Here a key value is an integer such as a VLAN ID or a GRE key.
-Quantum only has to know the range of keys, which depends on the isolation
-technology: for example, 12 bits in the VLAN case, 24 bits in the GRE case.)
-The Quantum Ryu plugin doesn't know which technology Ryu uses for L2
-isolation.
-
- .. image:: /images/internal-quantum-overview.png
-
-Quantum doesn't necessarily know all the information Ryu needs, such as
-the MAC address attached to an interface. Ryu can gather such information
-by accessing OVSDB directly. When tunnel ports need to be created on OVS
-on a compute node, Ryu directly accesses OVSDB and creates/deletes the
-ports.
-
-
-Cooperate with OpenStack Quantum
-================================
-Ryu reacts to Quantum events: compute-node boot-up, network
-creation/deletion, and VM instance creation/deletion.
-When a VM instance is created, a corresponding Quantum port is created.
-
-compute-node boot up
---------------------
-When a compute node boots up, minimal initialization work is done by the
-Ryu Quantum agent, which passes the necessary information to Ryu.
-Ryu then sets up OVS so that OVS connects to Ryu via OpenFlow.
-OVS initialization happens in two steps: by the agent and by Ryu.
-This keeps the Ryu agent logic minimal and independent of what Ryu
-actually needs to configure. Even if Ryu is enhanced with a new feature
-that requires additional OVS configuration (for example multi-controller
-for HA), the Ryu agent doesn't need to be modified, thanks to the
-two-step initialization.
-
- .. image:: /images/internal-quantum-bootup.png
-
-network creation
-----------------
-When a network is created, the Quantum Ryu plugin assigns a key value to
-the created network and tells Ryu about the association.
-
- .. image:: /images/internal-quantum-network-creation.png
-
-VM instance creation
---------------------
-When a VM instance is created, a Quantum port is created. The Quantum Ryu
-plugin tells Ryu the association of (network uuid, port uuid), and
-then the OVS port is created. Ryu detects the port creation via OpenFlow,
-retrieves the information about the created port (port uuid, attached
-MAC address) via the OVSDB protocol, and then sets up the network
-configuration on OVS.
-
- .. image:: /images/internal-quantum-instance-create.png
-
-quantum_adapter RyuApp
-----------------------
-This application watches port creation/deletion via the OpenFlow protocol.
-When it detects the creation of a port, it retrieves the related
-information (port uuid, MAC address) via the OVSDB protocol,
-determines whether the port corresponds to a Quantum VM port, and then
-stores that information in memory, which generates a VMPort creation
-event. The isolation Ryu app (simple_vlan or gre_tunnel) is then
-notified.
-
-live-migration
---------------
-Live migration is a popular virtualization feature, and OpenStack
-supports it as well. As of this writing, Quantum has no hooks for it, so
-no notification/callback is triggered when live migration starts, is
-ongoing, ends, or aborts on error.
-Traditional live migration uses GARP to tell switches that the MAC
-address in use has moved.
-
- .. image:: /images/internal-live-migration.png
-
-VLAN
-====
-OVS supports port VLANs by setting a tag value in OVSDB.
-Ryu utilizes this for L2 isolation.
-
-simple_vlan RyuApp
-------------------
-When a port is created, this application sets the tag value to the key
-assigned to the given network uuid, and installs a flow entry with the
-output:normal action.
-
-live-migration
---------------
-As the flows include the output:normal action, packets are processed by
-OVS's built-in MAC learning.
-
-#. When the destination VM port is created, the same rule is inserted on
-   the destination OVS.
-   However, the destination port is not used until the first GARP packet
-   is sent.
-#. When the VM is resumed on the destination, a GARP packet is sent.
-   The MAC learning tables on each switch are then updated,
-   so the port on the source becomes unused.
-#. When the VM on the source is destroyed, the port on the source is also
-   destroyed.
-
-
-GRE tunneling
-=============
-OVS supports tunneling and Ryu utilizes it for L2 isolation as follows.
-
- .. image:: /images/internal-gre-tunnel.png
-
-tunnel_port_updator RyuApp
---------------------------
-This application watches VM port creation/deletion, and creates/deletes
-tunnel ports on OVS when necessary.
-That is, it creates a tunnel port between compute nodes that have VMs of
-the same tenant, and deletes tunnel ports when compute nodes no longer
-have VMs of the same tenant.
-
-gre_tunnel RyuApp
------------------
-This application watches VM/tunnel port creation/deletion, and
-installs/removes flow entries based on port creation/deletion.
-
-Flow Entries
-------------
-Ryu installs the following flow entries.
-
- .. image:: /images/internal-quantum-gre-flow-table.png
-
-live-migration
---------------
-As the flows are aware of the MAC address of each port, Ryu updates the
-flow tables for live migration on each compute node.
-When a port with the same MAC address is added on another compute node,
-Ryu installs flows that duplicate packets, so that packets destined to
-that MAC address are duplicated and sent to both ports.
-GARP from the hypervisor isn't used.
-
- .. image:: /images/internal-tunnel-live-migration-before.png
- .. image:: /images/internal-tunnel-live-migration-during.png
- .. image:: /images/internal-tunnel-live-migration-after.png
-
-Mac address based L2 isolation
-==============================
-Ryu also supports MAC-address-based L2 isolation.
-In this case, no key is used.
-
-MAC learning
-------------
-When a VM sends packets, Ryu determines the network uuid from the OVS
-port and then associates the source MAC address with that network uuid.
-
- .. image:: /images/mac-learning.png
-
-
-packet filtering (L2 unicast case)
-----------------------------------
-* When a VM sends an L2 unicast packet, Ryu checks whether the destination
-  MAC address belongs to the same network uuid as the source MAC address,
-  i.e. the network uuid that the OVS port is associated with.
-* If not, the packet is dropped.
-* If so, the packet is sent to the ports that belong to the same
-  network uuid and to the external port.
-
- .. image:: /images/filtering-outgoing.png
- .. image:: /images/filtering-incoming.png
-
-
-packet filtering (L2 broadcast case)
-------------------------------------
-* When a VM sends an L2 broadcast/multicast packet, Ryu looks up the
-  network uuid of the source MAC address.
-* The packet is sent to all external ports and to all OVS ports that
-  belong to the same network uuid as the source MAC address.
-* When receiving a broadcast/multicast packet from an external port,
-  Ryu checks whether the source MAC address belongs to a known network uuid.
-
-  * If so, the packet is sent to the external ports other than the
-    incoming one and to all OVS ports that belong to that network uuid.
-  * If not, the packet is dropped.
-
- .. image:: /images/filtering-broadcast.png
-
-live-migration
---------------
-As of this writing, this simple isolation doesn't support live migration.
diff --git a/doc/source/openstack.rst b/doc/source/openstack.rst
deleted file mode 100644
index e0b9f571..00000000
--- a/doc/source/openstack.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-*********************
-OpenStack Integration
-*********************
-
-Ryu provides tenant isolation feature in OpenStack.
-
-.. toctree::
- :maxdepth: 1
-
- using_with_openstack.rst
- step_by_step.rst
- internals_l2_isolation.rst
diff --git a/doc/source/step_by_step.rst b/doc/source/step_by_step.rst
deleted file mode 100644
index 54f99ffb..00000000
--- a/doc/source/step_by_step.rst
+++ /dev/null
@@ -1,374 +0,0 @@
-.. _step_by_step_example:
-
-***************************************************
-Step-by-step example for testing ryu with OpenStack
-***************************************************
-
-Overview
-========
-Here are the step-by-step instructions to test whether the Ryu
-plugin/segregation works with OpenStack.
-In this example, we
-
-#. create one user account used as both an admin and a user
-#. create two projects and create a network tenant for each project
-#. run VM instances for each project
-#. open a VGA console via virt-manager
-#. try to ping each VM
-
-Note: nova/quantum/ryu installation isn't explained in this section.
-If you don't have any experience with OpenStack nova, it is strongly
-recommended to try plain nova and quantum with the OVS plugin first.
-
-Conventions
-===========
-The following variables are used to denote values that depend on your
-configuration.
-
-* $username: nova user account name which is used as both admin and user.
-  You may want to create two accounts to separate the admin and the
-  user. In this example, only a single account is used for
-  simplicity.
-
- e.g. yamahata
-
-* $tenant0: nova project name and tenant name.
- This name is used as both nova project name and nova network
- tenant name.
-  Here we reuse the nova project name as the network tenant name for
-  simplicity. If you'd like a more complex setup, please refer
-  to the nova documentation.
-
- e.g. yamahata-project-0
-
-* $iprange0: the IP range used for $tenant0
- e.g. 172.17.220.0/25
-
-* $tenant1: another project name
- e.g. yamahata-project-1
-
-* $iprange1: another IP range, used for $tenant1
- e.g. 172.17.221.0/25
-
-
-step-by-step testing
-====================
-In this example, euca2ools is used because it's handy.
-The more OpenStack-native way is possible, though.
-
-#. set up the nova database
-
- Run the following on a nova node::
-
- $ sudo nova-manage db sync
-
-#. set up the quantum database
-
-   Use the mysql command to connect to the MySQL server::
-
-    $ mysql -u -p
-
- Then create the quantum db and allow the agents to access it::
-
- mysql> CREATE DATABASE ovs_quantum;
- mysql> GRANT USAGE ON *.* to @'yourremotehost' IDENTIFIED BY 'newpassword';
- mysql> FLUSH PRIVILEGES;
-
-   Here the database name (ovs_quantum), the user name, and
-   its password (newpassword) are the ones defined in the Ryu plugin
-   configuration file, ryu.ini.
-
-   If you are using multiple compute nodes, the GRANT statement needs to
-   be repeated for each of them. Or the wildcard, %, can be used like::
-
- mysql> GRANT USAGE ON *.* to @'%' IDENTIFIED BY 'newpassword';
-
-#. Make sure all nova, quantum, ryu and other OpenStack components are
-   installed and running
-
- Especially
-
-   * on nova compute/network nodes
-
-     * Ryu must be installed
-     * the Ryu quantum agent (ryu_quantum_agent.py) is put somewhere and
-       must be running
-     * the OVS bridge is configured
-
-   * on the machine where quantum-server is running
-
-     * Ryu must be installed
-
-   * the DB server is accessible from all related servers
-
-#. create a user on a nova node
-
- Run the following on a nova node::
-
- $ sudo nova-manage --flagfile=/etc/nova/nova.conf user admin $username
-
-
-#. Create the project, get the zipfile for the project, extract it, and
-   create an ssh key for $tenant0
-
-   Run the following::
-
- $ sudo nova-manage --flagfile /etc/nova/nova.conf project create $tenant0 --user=$username
-    $ sudo nova-manage --flagfile=/etc/nova/nova.conf project zipfile $tenant0 $username ./$tenant0.zip
- $ sudo unzip ./$tenant0.zip -d $tenant0
- $ source ./$tenant0/novarc
- $ euca-add-keypair mykey-$tenant0 > mykey-$tenant0.priv
-
-#. do the same as the above step for $tenant1
-
-#. create networks for each project
-
-   Run the following::
-
- $ sudo nova-manage --flagfile=/etc/nova/nova.conf network create --label=$tenant0 --fixed_range_v4=$iprange0 --project_id=$tenant0
- $ sudo nova-manage --flagfile=/etc/nova/nova.conf network create --label=$tenant1 --fixed_range_v4=$iprange1 --project_id=$tenant1
-
-#. register image file
-
-   Get the VM image from somewhere (or create it yourself) and register it.
-   The easiest way is to get an image someone has already created. You can
-   find links below.
-
- * `Getting Images that Work with OpenStack `_.
-
- * `ttylinux by Scott Moser `_.
-
- In this example we use the ttylinux image just because its size is small::
-
- $ wget http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-i686-12.1_2.6.35-22_1.tar.gz
- $ cloud-publish-tarball ttylinux-uec-i686-12.1_2.6.35-22_1.tar.gz
- $ euca-register /ttylinux-uec-amd64-12.1_2.6.35-22_1.img.manifest.xml
-
-   Now you get the image ids, ari-xxx, aki-xxx and ami-xxx, where xxx is
-   replaced with some id number.
-   Depending on which distribution you use, you may need to use another
-   command like uec-publish-tarball.
-   If you customize images, you may have to use commands like
-   euca-bundle-image, euca-upload-image and euca-register.
-
-   Or, if you want to go the more OpenStack way, the glance command is
-   your friend for creating/registering images.
-
-#. run instances
-
-   Boot instances for each project.
-   In order to test network segregation, 2 or more VM instances need to
-   be created:
-
-::
-
- $ source ./$tenant0/novarc
- $ euca-run-instances ami- -k mykey-$tenant0 -t m1.tiny
- # repeat euca-run-instances for some times.
- $ source ./$tenant1/novarc
- $ euca-run-instances ami- -k mykey-$tenant1 -t m1.tiny
-
-
-#. check if VM instances are created
-
-   Get the list of VM instances you've created and their assigned IP addresses::
-
- $ euca-describe-instances
-
-#. log in to the VM instances and try ping/traceroute
-
-   In the plain nova case, you can log in to the VM instances via ssh, like
-   "ssh -i mykey-$tenant0.priv root@$ipaddress".
-   However, the VM instances here are segregated from the management
-   network, so the story differs. The easiest way to log in to a VM is to
-   use virt-manager (or virsh) on each compute node.
-   Identify on which compute node the VM is running with
-   euca-describe-instances, and run virt-manager on that compute node.
-   Show the VGA console in the virt-manager GUI; then you can log in to
-   the VM instances.
-
-   Then try "ping " or "traceroute "
-   on each console.
-
-#. packet capture (optional)
-
- You can run wireshark or similar tools in order to observe what packets
- are sent.
-
-
-When something goes wrong
-=========================
-Unfortunately, something can sometimes go wrong.
-The database tables used by OpenStack nova/quantum seem very fragile, and
-the DB can easily end up in a broken state. If you hit this, the easiest
-way to recover is:
-
-#. stop all the related daemons
-#. drop related DB and re-create them.
-#. clean up OVS related stuff
-
-   OVS uses its own database, which is persistent, so a reboot doesn't
-   fix it. The leaked resources must be released explicitly by hand.
-   The following commands would help::
-
- # ip link delete
- # tunctl -d
- # ovs-vsctl del-port
- # ovs-vsctl del-port
-
-#. restart the daemons
-#. set up from scratch.
-
-Although you could fix it by issuing SQL statements manually, you have to
-know what you're doing with the DB tables.
-
-Appendix
-========
-configuration file examples
----------------------------
-This section includes sample configuration files I use, for convenience.
-Some values need to be changed depending on your setup, for example
-IP addresses and port numbers.
-
-* /etc/nova/nova.conf for api, compute, network, volume, object-store and scheduler node
-
-Here is the nova.conf used on the node where all the nova servers are running::
-
- --verbose
- # For debugging
-
- --logdir=/var/log/nova
- --state_path=/var/lib/nova
- --lock_path=/var/lock/nova
-    # I set the three above to my preference.
-    # You don't have to set them if the defaults work for you
-
-    --use_deprecated_auth=true
- # This depends on which authentication method you use.
-
- --sql_connection=mysql://nova:nova@localhost/nova
- # Change this depending on how MySQL(or other db?) is setup
-
- --dhcpbridge_flagfile=/etc/nova/nova.conf
- --dhcpbridge=/usr/local/bin/nova-dhcpbridge
- # This path depends on where you install nova.
-
- --fixed_range=172.17.220.0/16
-    # You have to change this parameter depending on which IPs you use
-
- --network_size=128
-    # This depends on which IPs you use for one tenant
-
- --network_manager=nova.network.quantum.manager.QuantumManager
-    --quantum_connection_host=127.0.0.1
- # Change this according to your set up
-
- --connection_type=libvirt
- --libvirt_type=kvm
- --firewall_driver=quantum.plugins.ryu.nova.firewall.NopFirewallDriver
- --libvirt_ovs_integration_bridge=br-int
- --libvirt_vif_type=ethernet
- --libvirt_vif_driver=quantum.plugins.ryu.nova.vif.LibvirtOpenVswitchOFPRyuDriver
- --libvirt_ovs_ryu_api_host=:
-    # default 127.0.0.1:8080
-
- --linuxnet_interface_driver=quantum.plugins.ryu.nova.linux_net.LinuxOVSRyuInterfaceDriver
- --linuxnet_ovs_ryu_api_host=:
-    # default 127.0.0.1:8080
-    # usually the same as libvirt_ovs_ryu_api_host
-
- --quantum_use_dhcp
-
-
-* /etc/nova/nova.conf on compute nodes
-
-I copied the above to the compute node and modified it, so it includes
-some values that are only needed on the network node. Since they do no
-harm, I didn't scrub them::
-
- --verbose
-
- --logdir=/var/log/nova
- --state_path=/var/lib/nova
- --lock_path=/var/lock/nova
-
- --use_deprecated_auth
-
- --sql_connection=mysql://nova:nova@/nova
-
- --dhcpbridge_flagfile=/etc/nova/nova.conf
- --dhcpbridge=/usr/bin/nova-dhcpbridge
-
- --fixed_range=172.17.220.0/16
- --network_size=128
-
- --network_manager=nova.network.quantum.manager.QuantumManager
- --quantum_connection_host=
- --connection_type=libvirt
- --libvirt_type=kvm
- --libvirt_ovs_integration_bridge=br-int
- --libvirt_vif_type=ethernet
- --libvirt_vif_driver=quantum.plugins.ryu.nova.vif.LibvirtOpenVswitchOFPRyuDriver
- --libvirt_ovs_ryu_api_host=:
- --linuxnet_interface_driver=quantum.plugins.ryu.nova.linux_net.LinuxOVSRyuInterfaceDriver
- --linuxnet_ovs_ryu_api_host=:
- --firewall_driver=quantum.plugins.ryu.nova.firewall.NopFirewallDriver
- --quantum_use_dhcp
-
- --rabbit_host=
- --glance_api_servers=:
- --ec2_host=
- --osapi_host=
- --s3_host=
- --metadata_host=
-
-
-* /etc/quantum/plugins.ini
-
-This file needs to be installed on the machine where quantum-server is
-running. It defines which quantum plugin is used::
-
- [PLUGIN]
- # Quantum plugin provider module
- provider = quantum.plugins.ryu.ryu_quantum_plugin.RyuQuantumPlugin
-
-
-* /etc/quantum/quantum.conf
-
-This file needs to be installed on the machine where quantum-server is
-running. It is the configuration file for the quantum server. I use this
-file as is.
-
-* /etc/quantum/plugins/ryu/ryu.ini
-
-This file needs to be installed on the nova-compute nodes, the
-nova-network node and the quantum-server node.
-It defines several settings the Ryu quantum plugin/agent uses::
-
- [DATABASE]
- # This line MUST be changed to actually run the plugin.
- # Example: sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
- #sql_connection = mysql://:@:/
- sql_connection = mysql://quantum:quantum@172.0.0.1:3306/ovs_quantum
-
- [OVS]
- integration-bridge = br-int
-
- # openflow-controller = :
- # openflow-rest-api = :
- openflow-controller = :
- # default 127.0.0.1:6633
- # This corresponds to : in ryu.conf
-
- openflow-rest-api = :
- # default 127.0.0.1:8080
- # This corresponds to : in ryu.conf
-
-* /etc/ryu/ryu.conf
-
-This file needs to be installed on the machine where ryu-manager is
-running. If you use the default configuration, you don't have to modify
-it; just leave it blank::
-
- # Sample configuration file
- [DEFAULT]
- #wsapi_host=
- #wsapi_port=
- #ofp_listen_host=
- #ofp_listen_port=
diff --git a/doc/source/using_with_openstack.rst b/doc/source/using_with_openstack.rst
index 60f1a4bf..badb0446 100644
--- a/doc/source/using_with_openstack.rst
+++ b/doc/source/using_with_openstack.rst
@@ -3,262 +3,16 @@
************************************************************************
Using Ryu Network Operating System with OpenStack as OpenFlow controller
************************************************************************
-This section describes how to set up OpenStack (nova, quantum) and
-ryu-manager.
-It is assumed that KVM with libvirt is used and that each host machine
-that runs nova-compute/nova-network has two physical NICs.
-It would also be possible to deploy with single-NIC machines, as
-described in the last section.
-NOTE: How to use nova isn't described in this document.
+Ryu cooperates with OpenStack using the Quantum Ryu plugin. The plugin is
+available in the official Quantum releases.
-Overview
-========
+For more information, please visit http://github.com/osrg/ryu/wiki/OpenStack .
+There we describe how to install and configure OpenStack with Ryu, and we
+provide a pre-configured VM image so that you can easily try OpenStack
+with Ryu.
-Ryu is designed/implemented with production use in mind, so it cooperates
-very well with `OpenStack `_ .
-With nova and the quantum OVS plugin,
-Ryu provides L2 segregation of multiple tenants without any switch
-features/settings such as VLAN, so this segregation is very easy to
-use/experiment with/deploy, as the figure below shows.
+----
- .. image:: /images/logical-view.png
-
-
-
-Physical machine setup
-----------------------
-The following figure depicts how the physical hosts are connected and how
-each daemon is deployed.
-
- .. image:: /images/physical-view.png
-
-Although nova-api, nova-scheduler, nova-network and the related OpenStack
-daemons are each installed on their own physical machine in the picture
-above, they can also be installed on a physical machine that runs
-nova-compute. Each host machine has two NICs: one is connected to the
-management LAN and the other to the deployment LAN.
-
-
-How to install/setup
-====================
-If you are not familiar with installing/setting up nova/quantum/Open vSwitch
-from source, please refer to the OpenStack documentation and come back here.
-[
-`OpenStack docs `_ ,
-`Nova `_ ,
-`Quantum `_ ,
-`Open vSwitch and Quantum Part 1 `_ ,
-`Open vSwitch and Quantum Part 2 `_ ,
-`OVS Quantum Plugin Documentation `_
-]
-
-* Install ryu and run ryu-manager
- * install ryu from the source code on the hosts on which you run
- * nova-compute,
- * quantum-server and
- * ryu-manager.
-
- This is because quantum-server and the OVS quantum agent, which runs on
- each nova-compute node, need the ryu client library to communicate with
- ryu-manager.
-
- Type the following in the ryu source directory::
-
- % python ./setup.py install
-
- * edit /etc/ryu/ryu.conf on the host on which you run ryu-manager
- if necessary
-
- No configuration is needed on the hosts that run quantum and the ovs
- quantum agent.
-
- * run the Ryu network operating system::
-
- % ryu-manager
-
-
-* get nova source and quantum source from github
- * They are slightly modified from the openstack master tree. They are
- available on github for convenience.
-
- * https://github.com/osrg/nova/tree/ryu
- * https://github.com/osrg/quantum/tree/ryu
-
- clone them by typing the following in an appropriate directory::
-
- % git clone git://github.com/osrg/nova.git
- % git clone git://github.com/osrg/quantum.git
-
- If you prefer https, use these::
-
- % git clone https://github.com/osrg/nova.git
- % git clone https://github.com/osrg/quantum.git
-
-
-* Install nova and quantum as usual
- (and other OpenStack-related components if necessary, e.g. glance).
-
- Each daemon can be installed on a single machine or on different machines.
- Please refer to the OpenStack documentation for details.
- You may want to set up multiple nova-compute nodes for more interesting
- use cases.
-
-* Set up the nova daemons (edit nova.conf).
- Specifically, configure nova-network and nova-compute.
-
- * configure nova-network
- * --fixed_ranges=<CIDR>
- * --network_size=<network size>
- * --network_manager=nova.network.quantum.manager.QuantumManager
- * --quantum_connection_host=<quantum server host>
- * --firewall_driver=quantum.plugins.ryu.nova.firewall.NopFirewallDriver
- * --quantum_use_dhcp
-
- The NOP firewall driver is newly introduced to demonstrate Ryu's
- capability.
- If you want, another existing firewall driver can be specified,
- but such a setting has no effect in practice,
- because ryu directly controls packets to the VM instances via OVS,
- bypassing netfilter/iptables.
-
- * --linuxnet_interface_driver=quantum.plugins.ryu.nova.linux_net.LinuxOVSRyuInterfaceDriver
- * --linuxnet_ovs_ryu_api_host=<ryu host>:<port>
- * set up OVS on each nova-compute node
-
- If Ubuntu is used, you can install it from the packages
- openvswitch-datapath-dkms, openvswitch-common and openvswitch-switch.
- If you already use the Linux bridge, you may need to edit /etc/modules so
- that the Open vSwitch kernel modules, openvswitch_mod and brcompat_mod,
- are loaded before the bridge module, and then reboot to unload the bridge
- module.
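-
- As a sketch only (using the openvswitch_mod and brcompat_mod module names
- from the packages above; exact names may vary between Open vSwitch
- versions), /etc/modules would list the Open vSwitch modules before
- bridge::
-
- openvswitch_mod
- brcompat_mod
- bridge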
-
- And then create ovs bridge::
-
- # ovs-vsctl add-br <bridge name>
-
- And if you connect a NIC to the OVS bridge::
-
- # ovs-vsctl add-port <bridge name> <NIC>
-
- * configure each nova-compute
- * --libvirt_type=kvm
- * --libvirt_ovs_integration_bridge=<bridge name>
- * --libvirt_vif_type=ethernet
- * --libvirt_vif_driver=quantum.plugins.ryu.nova.vif.LibvirtOpenVswitchOFPRyuDriver
- * --libvirt_ovs_ryu_api_host=<ryu host>:<port>
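-
- Putting the flags above together, a nova.conf sketch might look like the
- following. The concrete values (CIDR, network size, host names, port and
- bridge name) are illustrative placeholders, not recommended defaults::
-
- --fixed_ranges=10.0.0.0/8
- --network_size=256
- --network_manager=nova.network.quantum.manager.QuantumManager
- --quantum_connection_host=quantum-host
- --firewall_driver=quantum.plugins.ryu.nova.firewall.NopFirewallDriver
- --quantum_use_dhcp
- --linuxnet_interface_driver=quantum.plugins.ryu.nova.linux_net.LinuxOVSRyuInterfaceDriver
- --linuxnet_ovs_ryu_api_host=ryu-host:8080
- --libvirt_type=kvm
- --libvirt_ovs_integration_bridge=br-int
- --libvirt_vif_type=ethernet
- --libvirt_vif_driver=quantum.plugins.ryu.nova.vif.LibvirtOpenVswitchOFPRyuDriver
- --libvirt_ovs_ryu_api_host=ryu-host:8080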
-
-* install the quantum server and configure quantum to use the Ryu plugin
- * Edit [PLUGIN] section of /etc/quantum/plugins.ini
- * provider = quantum.plugins.ryu.ryu_quantum_plugin.RyuQuantumPlugin
-
- * Edit [DATABASE] and [OVS] section of /etc/quantum/plugins/ryu/ryu.ini
-
- * [DATABASE] section
-
- * sql_connection = <SQL connection URL>
-
- * [OVS] section
-
- * integration-bridge = <bridge name>
- * openflow-controller = <ryu host>:<OpenFlow port>
- * openflow-rest-api = <ryu host>:<REST API port>
-
- * Run quantum server
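-
- For illustration, the two configuration files for this step might look
- like the following sketch; the bridge name, host names, ports, database
- name and SQL URL are placeholder values, not defaults::
-
- # /etc/quantum/plugins.ini
- [PLUGIN]
- provider = quantum.plugins.ryu.ryu_quantum_plugin.RyuQuantumPlugin
-
- # /etc/quantum/plugins/ryu/ryu.ini
- [DATABASE]
- sql_connection = mysql://quantum:password@db-host/ovs_quantum
-
- [OVS]
- integration-bridge = br-int
- openflow-controller = ryu-host:6633
- openflow-rest-api = ryu-host:8080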
-
-* install the quantum OVS agent on each nova-compute node
- * Edit /etc/quantum/plugins/ryu/ryu.ini
- * copy ryu_quantum_agent.py onto each nova-compute/network node.
-
- The agent isn't installed by setup.py, so you have to copy it manually.
- In the quantum source tree, ryu_quantum_agent.py is located at
- quantum/plugins/ryu/agent/ryu_quantum_agent.py.
-
- * Run ryu agent::
-
- # ryu_quantum_agent.py -v /etc/quantum/plugins/ryu/ryu.ini
-
-* Then, with the usual openstack nova operations, create users, projects
- and networks, and run instances.
-* Enjoy!
-
-
-Testing
-=======
-Yay, now you have the ryu network Operating System set up.
-You will want to verify that the tenants are really L2-segregated.
-
-* create multiple projects and run instances.
-* ping/traceroute between them.
-* tcpdump inside the instances.
-
-Routing between the gateways (gw-xxx) of the tenants is disabled
-by nova.network.linux_net.LinuxOVSOFInterfaceDriver, which installs an
-iptables rule on the nova-network host::
-
- # iptables -t filter -A nova-network-FORWARD --in-interface gw-+ --out-interface gw-+ -j DROP
-
-Thus pinging/tracerouting between VMs in distinct tenants doesn't work.
-If you delete the above rule with::
-
- # iptables -t filter -D nova-network-FORWARD --in-interface gw-+ --out-interface gw-+ -j DROP
-
-you will see that ping/traceroute works. Please notice that the packets go
-through gw-xxx and gw-yyy, not directly.
-
- .. image:: /images/trace-route.png
-
-
-Caveats
-=======
-* Run the following daemons in this order:
- #. Run the Ryu network Operating System
- #. Run quantum with the Ryu plugin
- #. Run the quantum Ryu agent
- #. Run your guest instances
-
- For now, ryu-manager doesn't have a persistent store, so if it is
- restarted, all the necessary information must be sent again by the quantum
- server and agent.
-
-* nova-manage network delete doesn't work
-
- At this moment, quantum doesn't fully implement network deletion yet.
- If you issue the command, it fails, and you need to fix the nova/quantum
- databases by hand using SQL.
-
-
-Appendix
-========
-The above describes a deployment with two physical NICs per host.
-Some people may want to use these settings with single-NIC machines, or even
-on a single machine.
-It should be possible as shown in the following figures, but we haven't
-tested these setups. If you succeed, please report it.
-
-single NIC setup
-----------------
-If your host machines have only a single NIC, it should be possible to use
-the Ryu network Operating System with a Linux bridge. However, we haven't
-tested such a setup.
-
- .. image:: /images/compute-node.png
-
-
-All-in-One Setup
-----------------
-You can also set everything up on a single physical host as in the
-following picture.
-
- .. image:: /images/minimul-setup.png
-
-You can set up the above environment quickly using DevStack.
-
- #. Install Ubuntu 11.10 (Oneiric)
-
- #. Download Ryu enabled DevStack from github
- ::
-
- % git clone git://github.com/osrg/devstack.git
-
- #. Start the install
- ::
-
- % cd devstack; ./stack.sh
-
- It will take a few minutes.
+* OpenStack: http://www.openstack.org/
+* Quantum: https://github.com/openstack/quantum/