This section introduces topology setup using the simplest case of building a single link between two nodes.

== 1.1 Overview ==
In general, a topology describes the restrictions on traffic flow between one or more nodes (with the simplest case being a single node by itself). We build a topology by first isolating the nodes from each other, and then introducing relays that can move traffic between these nodes in a controlled manner. The way the traffic is relayed produces the topology. Our base topology, in which all nodes can reach each other through a switch, is logically a fully connected graph:
{{{
(A) A - B
     \ /
      C
}}}

A common method of isolating the nodes is to configure the switch to place each node on a separate VLAN. Picking node C as a traffic relay (a router-on-a-stick, in the case of VLANs) produces the following topology:
{{{
(B)
A-C-B
}}}
We call A and B ''end nodes'' and C a ''network node'' straddling A's and B's VLANs. From this logic it follows that the VLAN is the ''link'' (despite it actually being a logical setup on the shared switch, rather than a wire). Given that the partitions have identifiers, such as IP block addresses, nodes on the same link must share the identifier, and relays must know the identifiers of all partitions that they connect. For the case above, C has knowledge of, and interfaces with, both A's and B's VLANs, so we have the following links:
{{{
A-C (VLAN A)
C-B (VLAN B)
}}}
The next section explores several methods to produce similar effects.

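As a concrete illustration, relay C could be realized as a router-on-a-stick on a Linux node using iproute2. This is a minimal sketch, not a definitive recipe: it assumes C's trunk port is eth0, and the VLAN IDs (10, 20) and IP blocks (10.0.10.0/24, 10.0.20.0/24) standing in for A's and B's partitions are hypothetical:
{{{
# create one virtual interface per VLAN on the trunk port
ip link add link eth0 name eth0.10 type vlan id 10   # link A-C
ip link add link eth0 name eth0.20 type vlan id 20   # link C-B
ip addr add 10.0.10.1/24 dev eth0.10
ip addr add 10.0.20.1/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up

# let C relay traffic between the two links
sysctl -w net.ipv4.ip_forward=1
}}}
A and B would then point their default routes at 10.0.10.1 and 10.0.20.1, respectively.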
In terms of 1., we aim to keep topology setup and the experiment as two separate problems, as opposed to one monolithic one. For example, we may not want to use IP routing if we don't plan on using TCP/IP, or are planning to modify layer 3 ^1^. 2. is important in that, depending on how it's done, the number of links you can have on a node (its degree, in graph terms) will be restricted to the number of interfaces you have. When you only have one interface, you may want to use virtual interfaces to increase the number of links to/from your node. In turn, you may also need to modify the partitioning scheme of the shared switch.

A standard way to deploy virtual interfaces is in combination with VLANs and trunking. VLANs may be combined with other configuration schemes, are relatively simple to configure, and a good portion of networked devices understand them. Many of the examples here that require virtual interfaces will make use of this standard technique. So, to make things easier, we will quickly describe how to add virtual interfaces and VLAN awareness to a node before moving on to Section 1.1.
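For instance, on a Linux node whose physical interface eth0 faces a trunked switch port, a VLAN-tagged virtual interface can be added with iproute2. This is a sketch under assumed names; the VLAN ID (100) and address are placeholders:
{{{
# load the 802.1Q tagging module and create a virtual interface for VLAN 100
modprobe 8021q
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 10.0.100.2/24 dev eth0.100
ip link set eth0.100 up
}}}
Traffic sent through eth0.100 leaves eth0 tagged with VLAN 100; repeating this with other VLAN IDs gives the node one virtual interface, and hence one potential link, per VLAN.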
==== Initialization ====
OVS has three main components that must be initialized:
* openvswitch_mod.ko, the OVS kernel module
* ovsdb, the database containing configurations
* ovs-vswitchd, the OVS switch daemon
ovs-vswitchd configures itself using the data provided by ovsdb; `ovs-vsctl` is used to modify the contents of the database in order to configure the OVS switch.

1. Load the Open vSwitch kernel module:
{{{
cd datapath/linux/
insmod openvswitch_mod.ko
}}}
Note, OVS and Linux bridging may not be used at the same time. This step will fail if the bridge module (bridge.ko) is loaded. You may need to reboot the node in order to unload bridge.ko.[[BR]]
If this is the first time OVS is being run, make an openvswitch directory in /usr/local/etc/ and run `ovsdb-tool` to create the database file:
{{{
mkdir -p /usr/local/etc/openvswitch
ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
}}}
2. Start ovsdb-server:
{{{
ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
                   --remote=db:Open_vSwitch,manager_options \
                   --pidfile --detach
}}}
3. Initialize the database:
{{{
utilities/ovs-vsctl --no-wait init
}}}
The `--no-wait` flag allows the database to be initialized before ovs-vswitchd is invoked.
4. Start ovs-vswitchd:
{{{
vswitchd/ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
}}}
The 'unix:...db.sock' argument specifies that the process attach to the socket opened by ovsdb-server.
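Once all of the steps above have run, the setup can be sanity-checked with `ovs-vsctl` (this sketch assumes the same source-tree paths used in the steps above):
{{{
# print the current (still empty) switch configuration from the database
utilities/ovs-vsctl show

# the pidfiles created by --pidfile should be present for both daemons
ls /usr/local/var/run/openvswitch/
}}}
If `ovs-vsctl show` returns without error, the database is reachable and the switch is ready to be configured.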

==== Configuring OVS ====