
- rfproxy fully implemented in POX.

- rfstats is a new POX application to collect statistics and flows.
- rfweb support removed from NOX.
- Improvements in rfweb.
- Unified rftest scripts for NOX and POX.
- Improved script behavior in rftest.
- Updated README.
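
As a rough illustration of the pattern rfstats builds on (a generic sketch, not the code added by this commit; the function names and the poll_period argument are made up for the example), a POX application can poll flow statistics by sending ofp_flow_stats_request on a recurring timer and handling the FlowStatsReceived event:

    # Hypothetical sketch of periodic flow-stats polling in POX; not the actual rfstats.
    from pox.core import core
    from pox.lib.recoco import Timer
    import pox.openflow.libopenflow_01 as of

    def _poll_flow_stats():
        # Ask every connected datapath for its flow entries.
        for connection in core.openflow._connections.values():
            connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))

    def _handle_flow_stats(event):
        # event.stats is the list of flow entries reported by the datapath.
        dpid = event.connection.dpid
        packets = sum(flow.packet_count for flow in event.stats)
        core.getLogger().info("dpid %s: %d flows, %d packets",
                              dpid, len(event.stats), packets)

    def launch(poll_period=5):
        core.openflow.addListenerByName("FlowStatsReceived", _handle_flow_stats)
        Timer(int(poll_period), _poll_flow_stats, recurring=True)

Such a module would be started like any other POX component on the pox.py command line.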
1 parent c2c0922, commit 1817b65783412de0c4d9aa1388871732b4cbf176, @alnvdl committed May 24, 2012
Showing with 965 additions and 2,088 deletions.
  1. +7 −0 .gitignore
  2. +35 −42 README
  3. +2 −2 nox/configure.ac.in
  4. +0 −3 nox/src/nox/coreapps/pyrt/context.i
  5. +0 −9 nox/src/nox/coreapps/pyrt/event.i
  6. +0 −13 nox/src/nox/coreapps/pyrt/pycontext.cc
  7. +0 −3 nox/src/nox/coreapps/pyrt/pycontext.hh
  8. +0 −1 nox/src/nox/coreapps/pyrt/pyevent.hh
  9. +6 −203 nox/src/nox/coreapps/pyrt/pyglue.cc
  10. +0 −8 nox/src/nox/coreapps/pyrt/pyglue.hh
  11. +0 −13 nox/src/nox/coreapps/pyrt/pyrt.cc
  12. +3 −11 nox/src/nox/lib/core.py
  13. +0 −12 nox/src/nox/lib/util.py
  14. +15 −384 nox/src/nox/netapps/switchstats/switchstats.py
  15. +1 −198 nox/src/nox/netapps/topology/topology.cc
  16. +0 −2 nox/src/nox/netapps/topology/topology.hh
  17. +0 −384 nox/src/nox/netapps/topology/topologyOld.cc
  18. +0 −147 nox/src/nox/netapps/topology/topologyOld.hh
  19. +0 −2 pox/ext/README
  20. +83 −69 pox/ext/rfproxy.py
  21. +0 −22 pox/logging.cfg
  22. +10 −10 pox/pox.py
  23. +1 −0 pox/pox/lib/util.py
  24. +31 −10 pox/pox/openflow/flow_table.py
  25. +87 −30 pox/pox/openflow/libopenflow_01.py
  26. +123 −0 pox/pox/openflow/nicira_ext.py
  27. +107 −0 pox/pox/openflow/nx_switch_impl.py
  28. +1 −1 pox/pox/openflow/of_01.py
  29. +87 −10 pox/pox/openflow/switch_impl.py
  30. +4 −2 pox/pox/openflow/topology.py
  31. 0 pox/{tests → }/setup.cfg
  32. +1 −1 pox/tests/unit/openflow/libopenflow_01_test.py
  33. +81 −39 rftest/rftest1
  34. +0 −106 rftest/rftest1_pox
  35. +74 −39 rftest/rftest2
  36. +0 −164 rftest/rftest2_pox
  37. +1 −0 rfweb/css/style.css
  38. +180 −137 rfweb/js/rf/network.js
  39. +8 −0 rfweb/js/rf/utils.js
  40. +17 −11 rfweb/rfweb.py
7 .gitignore
@@ -0,0 +1,7 @@
+#*#
+*~
+.#*
+.*.swp
+/build/*
+*.pyo
+*.pyc
77 README
@@ -11,7 +11,8 @@ RouteFlow for providing virtualized IP routing services on one or more OpenFlow
switches.
RouteFlow relies on the following technologies:
-- NOX and OpenFlow v1.0 as the communication protocol for controlling switches.
+- NOX/POX and OpenFlow v1.0 as the communication protocol for controlling
+ switches.
- Open vSwitch to provide the connectivity within the virtual environment where
Linux virtual machines may run the Quagga routing engine.
- MongoDB as a central database and IPC.
@@ -35,10 +36,12 @@ the rfproxy to instruct it about when to configure flows and also to
configure the Open vSwitch to maintain the connectivity in the virtual
environment formed by the set of registered VMs.
-- rfproxy is a NOX (OpenFlow controller) application responsible for the
-interactions with the OpenFlow switches (identified by datapaths) via the
-OpenFlow protocol. It listens to instructions from the RFServer and also
-notifies whenever a switch joins or leavens the network.
+- rfproxy is an application (for NOX and POX) responsible for the interactions
+with the OpenFlow switches (identified by datapaths) via the OpenFlow protocol.
+It listens to instructions from the RFServer and also notifies whenever a
+switch joins or leaves the network.
+ We recommend running POX when you are experimenting and testing your network.
+You can still use NOX if you need or prefer it (for production, for instance).
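
In POX terms, switch join and leave are reported by the ConnectionUp and ConnectionDown events; a minimal sketch of listening for them (not the actual rfproxy code, names are illustrative) looks like this:

    # Minimal sketch of datapath join/leave notification in POX; not rfproxy itself.
    from pox.core import core

    def _handle_connection_up(event):
        # A datapath, identified by event.dpid, has connected to the controller.
        core.getLogger().info("datapath %s joined", event.dpid)

    def _handle_connection_down(event):
        core.getLogger().info("datapath %s left", event.dpid)

    def launch():
        core.openflow.addListenerByName("ConnectionUp", _handle_connection_up)
        core.openflow.addListenerByName("ConnectionDown", _handle_connection_down)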
There is also a library of common functions (rflib). It defines the IPC, a
central table for RouteFlow state data and utilities like custom types for IP
@@ -64,7 +67,7 @@ M:1 \ RFProtocol
+-------------------+
| rfproxy |
|-------------------|
-| NOX |
+| NOX/POX |
+-------------------+
\
1:N \ OpenFlow Protocol
@@ -75,9 +78,7 @@ M:1 \ RFProtocol
=== Building ===
-These instructions are tested on Ubuntu 11.04. The version of the NOX controller
-we are using does not compile under newer versions of Ubuntu (11.10, 12.04).
-
+These instructions are tested on Ubuntu 11.04.
** Open vSwitch **
1) Install the dependencies:
@@ -168,6 +169,11 @@ $ sudo pip install pymongo
** NOX install instructions **
+These instructions are only necessary if you want to run RouteFlow using NOX.
+The version of the NOX controller we are using does not compile under newer
+versions of Ubuntu (11.10, 12.04). You can use POX, which doesn't require
+compiling.
+
1) Install the dependencies:
$ sudo apt-get install autoconf automake g++ libtool swig make git-core \
libboost-dev libboost-test-dev libboost-filesystem-dev libssl-dev \
@@ -229,25 +235,24 @@ The folder rftest contains all that is needed to create and run two test cases.
First, create the LXC containers that will run as virtual machines:
$ sudo ./create
-Test 1
- $ sudo ./rftest1
+In the tests below, you can choose to run with either NOX or POX by changing
+the command line arguments.
+rftest1
+ $ sudo ./rftest1 --nox
+ To stop running, press CTRL+C.
+
You can then log in to the LXC container b1 and try to ping b2:
$ sudo lxc-console -n b1
And inside b1:
# ping 172.31.2.2
- To stop running:
- $ sudo ./rftest1 clear
-
For more details on this test, see:
http://sites.google.com/site/routeflow/documents/first-routeflow
-Test 2
- $ sudo ./rftest2
-
- To stop running:
- $ sudo ./rftest2 clear
+rftest2
+ $ sudo ./rftest2 --pox
+ To stop running, press CTRL+C.
This test should be run with a Mininet simulated network:
http://yuba.stanford.edu/foswiki/bin/view/OpenFlow/Mininet
@@ -266,29 +271,28 @@ Test 2
Wait for the network to converge (it should take a few seconds), and try to
ping:
mininet> pingall
-
- You'll need to setup the network using virtual machines, and the
- configuration will depend on the chosen technology. If you have questions
- about setting up RouteFlow on a particular technology, we might be able to
- help. See the section "Support".
+
+ By default, this test will use the virtual machines (LXC containers) created
+ by the "create" script mentioned above. You can use other virtualization
+ technologies. If you have experience with or questions about setting up
+ RouteFlow on a particular technology, contact us! See the section "Support".
For more details on this test, see:
http://sites.google.com/site/routeflow/documents/tutorial2-four-routers-with-ospf
-POX
- Experimental support for POX is included. There are versions of the tests
- above written to run using POX (rftest*_pox). Note however that rfweb and
- SNMP support won't work.
-
=== Web interface ===
The module rfweb provides a web application to inspect the network, showing
topology, status and statistics. The application is written in Python using the
WSGI specification: http://wsgi.readthedocs.org/en/latest/
+The web interface only works when running under POX.
+
It's possible to run the application in several servers, and a simple server is
provided in rfweb_server.py. This server is very simple, and you probably don't
-want to use it for anything more serious than playing around and testing.
+want to use it for anything more serious than testing and playing around:
+
+$ python rfweb_server.py
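
For reference, the WSGI interface mentioned above reduces to a single callable that takes the request environment and a start_response function; the sketch below is a generic example served with the Python standard library, not rfweb's own code:

    # Generic WSGI sketch; rfweb exposes its own WSGI callable in the same way.
    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        # Return a fixed response regardless of environ['PATH_INFO'].
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['hello from a WSGI app\n']

    if __name__ == '__main__':
        make_server('0.0.0.0', 8080, application).serve_forever()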
We've also tested the application with gunicorn (http://gunicorn.org/). You can
run rfweb on it using the following command:
@@ -300,25 +304,14 @@ Then to access the main page of the web interface (adapt the address to your
setup), go to:
http://localhost:8080/index.html
-The web interface currently only works with rftest2 running under NOX. Other
-tests should work, but this module still requires improvements.
-
=== Support ===
RouteFlow has a discussion list. You can send your questions on:
https://groups.google.com/group/routeflow-discuss/topics
=== Known Bugs ===
-- rfproxy: random (and somewhat rare) segfaults without any clear reason.
-
-- rftest*: When closing the tests, rfproxy segfaults to no effect.
-
-- rfweb: topology monitoring for rftest1 doesn't work
-
-- rfweb: topology monitoring shows switch8 as active when all switches are down
-
-- rfweb: flow colors are not shown correctly
+- rftest*: When closing the tests, segfaults may happen, but they are harmless.
- See: http://bugs.openflowhub.org/browse/ROUTEFLOW
4 nox/configure.ac.in
@@ -68,8 +68,8 @@ ACI_PACKAGE([netapps],[misc network apps],
routing user_event_log tests topology discovery
bindings_storage switchstats flow_fetcher data
switch_management networkstate hoststate tablog
- route lavi
- rfproxy #add netapps component here
+ route lavi rfproxy
+ #add netapps component here
],
[TURN_ON_NETAPPS])
ACI_PACKAGE([webapps],[misc ui apps],
3 nox/src/nox/coreapps/pyrt/context.i
@@ -122,9 +122,6 @@ public:
ethernetaddr addr, uint32_t mask, uint32_t config);
void send_aggregate_stats_request(uint64_t datapath_ids, const
struct ofp_match& match, uint8_t table_id);
- void send_flow_stats_request(uint64_t datapath_ids, const
- struct ofp_match& match, uint8_t table_id);
-
private:
PyContext();
9 nox/src/nox/coreapps/pyrt/event.i
@@ -174,15 +174,6 @@ private:
Aggregate_stats_in_event();
};
-struct Flow_stats_in_event
- : public Event
-{
- static const std::string static_get_name();
-
-private:
- Flow_stats_in_event();
-};
-
struct Desc_stats_in_event
: public Event
{
13 nox/src/nox/coreapps/pyrt/pycontext.cc
@@ -385,19 +385,6 @@ PyContext::send_aggregate_stats_request(uint64_t datapath_id, const struct ofp_m
uint8_t*)&asr, sizeof(struct ofp_aggregate_stats_request));
}
-//Criacao do flow stats request
-void
-PyContext::send_flow_stats_request(uint64_t datapath_id, const ofp_match& match, uint8_t table_id)
-{
- ofp_flow_stats_request fsr;
- memset(&fsr, 0, sizeof fsr);
- fsr.table_id = table_id;
- fsr.out_port = ntohs(OFPP_NONE);
- fsr.match = match;
-
- send_stats_request(datapath_id, OFPST_FLOW, (const
- uint8_t*)&fsr, sizeof(struct ofp_flow_stats_request));
-}
void
PyContext::send_stats_request(uint64_t datapath_id, ofp_stats_types type, const uint8_t* data, size_t data_size )
3 nox/src/nox/coreapps/pyrt/pycontext.hh
@@ -150,9 +150,6 @@ public:
ethernetaddr addr, uint32_t mask, uint32_t config);
void send_aggregate_stats_request(uint64_t datapath_ids, const
struct ofp_match& match, uint8_t table_id);
- //protótipo da função de envio de flow stats request
- void send_flow_stats_request(uint64_t datapath_ids,
- const ofp_match& match, uint8_t table_id);
/* C++ context */
const container::Context* ctxt;
1 nox/src/nox/coreapps/pyrt/pyevent.hh
@@ -39,7 +39,6 @@
#include "pyglue.hh"
-#include "flow-stats-in.hh"
#include "event.hh"
#include "assert.hh"
209 nox/src/nox/coreapps/pyrt/pyglue.cc
@@ -29,7 +29,6 @@
#include "port.hh"
#include "table-stats-in.hh"
#include "port-stats-in.hh"
-#include "queue-stats-in.hh"
#include "pycontext.hh"
#include "pyrt.hh"
#include "vlog.hh"
@@ -186,7 +185,8 @@ from_python(PyObject *pymatch)
if (nw_src) {
match.nw_src = htons(from_python<uint32_t>(nw_src));
match.wildcards &= htonl(~OFPFW_NW_SRC_MASK);
- PyObject* nw_src_n_wild = PyDict_GetItemString(pymatch, "nw_src_n_wild");
+ PyObject* nw_src_n_wild = PyDict_GetItemString(pymatch,
+ "nw_src_n_wild");
if (nw_src_n_wild) {
unsigned int n_wild = from_python<unsigned int>(nw_src_n_wild);
if (n_wild > 0) {
@@ -202,7 +202,8 @@ from_python(PyObject *pymatch)
if (nw_dst) {
match.nw_dst = htons(from_python<uint32_t>(nw_dst));
match.wildcards &= htonl(~OFPFW_NW_DST_MASK);
- PyObject* nw_dst_n_wild = PyDict_GetItemString(pymatch, "nw_dst_n_wild");
+ PyObject* nw_dst_n_wild = PyDict_GetItemString(pymatch,
+ "nw_dst_n_wild");
if (nw_dst_n_wild) {
unsigned int n_wild = from_python<unsigned int>(nw_dst_n_wild);
if (n_wild > 0) {
@@ -366,8 +367,8 @@ to_python(const Table_stats& ts)
ret = Py_BuildValue((char*)"{s:l, s:s#, s:l, s:l, s:S, s:S}",
"table_id", ts.table_id,
"name", ts.name.c_str(), ts.name.size(),
- "max_entries", ts.max_entries,
- "active_count", ts.active_count,
+ "max_entries", ts.max_entries,
+ "active_count", ts.active_count,
"lookup_count", pyo_lookup_count,
"matched_count", pyo_matched_count );
error:
@@ -502,193 +503,12 @@ to_python(const ofp_flow_stats& fs)
return dict;
}
-/* This macro is used to help build the action list for the to_python for
- * Flow_stats */
-
-#define ACTION(__t) if (sizeof(__t) != len) break; \
- bad_length = false; \
- const __t * a = reinterpret_cast<const __t *>(curr_act);
-
template <>
PyObject*
to_python(const Flow_stats& fs)
{
- /* create the same dict as ofp_flow_stats */
PyObject* dict = to_python(static_cast<const ofp_flow_stats&>(fs));
-
- /* form the action list */
- PyObject* action_list = PyList_New(0);
- for (int i = 0; i < fs.v_actions.size(); i++)
- {
- const ofp_action_header *curr_act = &fs.v_actions[i];
-
- if (NULL == curr_act) {
- VLOG_ERR(lg, "Found null action in flow stats action list.");
- continue;
- }
-
- PyObject *action = PyDict_New();
-
- /* type and len common to all actions */
- uint16_t type = ntohs(curr_act->type);
- uint16_t len = ntohs(curr_act->len);
-
- pyglue_setdict_string(action, "type", to_python(type));
- pyglue_setdict_string(action, "len", to_python(len));
-
- bool bad_length = true; // Set to false by ACTION macro
-
- /* form an action dictionary based on the action type */
- switch (type) {
- case OFPAT_OUTPUT:
- {
- ACTION(ofp_action_output);
- uint16_t port = ntohs(a->port);
-
- pyglue_setdict_string(action, "port", to_python(port));
-
- /* max_len is only meaningful if outputting to controller */
- if (OFPP_CONTROLLER == port) {
- pyglue_setdict_string(action, "max_len",
- to_python(ntohs(a->max_len)));
- }
- break;
- }
- case OFPAT_STRIP_VLAN:
- {
- /* no struct beyond the basic header */
- ACTION(ofp_action_header);
- break;
- }
- case OFPAT_SET_VLAN_VID:
- {
- ACTION(ofp_action_vlan_vid);
- pyglue_setdict_string(action, "vlan_vid",
- to_python(ntohs(a->vlan_vid)));
- break;
- }
- case OFPAT_SET_VLAN_PCP:
- {
- ACTION(ofp_action_vlan_pcp);
- pyglue_setdict_string(action, "vlan_pcp",
- to_python(a->vlan_pcp));
- break;
- }
- case OFPAT_SET_DL_SRC:
- case OFPAT_SET_DL_DST:
- {
- ACTION(ofp_action_dl_addr);
- if (type == OFPAT_SET_DL_SRC) {
- pyglue_setdict_string(action, "dl_src", to_python(ethernetaddr(a->dl_addr)));
- } else {
- pyglue_setdict_string(action, "dl_dst", to_python(ethernetaddr(a->dl_addr)));
- }
- break;
- }
- case OFPAT_SET_NW_SRC:
- case OFPAT_SET_NW_DST:
- {
- ACTION(ofp_action_nw_addr);
- if (type == OFPAT_SET_NW_SRC) {
- pyglue_setdict_string(action, "nw_src",
- to_python(ntohl(a->nw_addr)));
- } else {
- pyglue_setdict_string(action, "nw_dst",
- to_python(ntohl(a->nw_addr)));
- }
- break;
- }
- case OFPAT_SET_NW_TOS:
- {
- ACTION(ofp_action_nw_tos);
- pyglue_setdict_string(action, "nw_tos", to_python(a->nw_tos));
- break;
- }
- case OFPAT_SET_TP_SRC:
- {
- ACTION(ofp_action_tp_port);
- pyglue_setdict_string(action, "tp_src", to_python(ntohs(a->tp_port)));
- break;
- }
- case OFPAT_SET_TP_DST:
- {
- ACTION(ofp_action_tp_port);
- pyglue_setdict_string(action, "tp_dst", to_python(ntohs(a->tp_port)));
- break;
- }
- case OFPAT_ENQUEUE:
- {
- ACTION(ofp_action_enqueue);
- pyglue_setdict_string(action, "port", to_python(ntohs(a->port)));
- pyglue_setdict_string(action, "queue_id", to_python(ntohl(a->queue_id)));
- break;
- }
- case OFPAT_VENDOR:
- {
- ACTION(ofp_action_vendor_header);
- pyglue_setdict_string(action, "vendor", to_python(ntohl(a->vendor)));
- break;
- }
- default:
- {
- VLOG_INFO(lg, "Action with unknown type in Flow_stats.");
- break;
- }
- }
-
- if (bad_length) {
- // ... just to silence it. We will improve it in the future.
- //VLOG_ERR(lg, "Action with incorrect length in Flow_stats.");
- Py_XDECREF(action);
- } else {
- /* add the action to the action list */
- if (PyList_Append(action_list, action) < 0) {
- Py_XDECREF(action);
- VLOG_ERR(lg, "Could not add action dict to action list.");
- }
- }
- }
-
- /* add the action list to the flow stats dict */
- pyglue_setdict_string(dict, "actions", action_list);
-
- return dict;
-
- //PyObject* dict = to_python(static_cast<const ofp_flow_stats&>(fs));
/* XXX actions */
- //return dict;
-
-/* PyObject* dict = to_python(static_cast<const ofp_flow_stats&>(fs));
-
- PyObject *list = PyList_New(0);
- for (int i=0; i<fs.v_actions.size(); ++i) {
- PyObject *ts = to_python(fs.v_actions[i]);
- if (PyList_Append(list, ts) < 0) {
- Py_XDECREF(list);
- Py_XDECREF(ts);
- list = NULL;
- }
- Py_XDECREF(ts);
- }
-
- pyglue_setdict_string(dict, "actions", list);
- XXX actions */
- /*return dict;*/
-}
-
-#undef ACTION
-
-template <>
-PyObject*
-to_python(const ofp_action_header& a)
-{
- PyObject* dict = PyDict_New();
- if (!dict) {
- return 0;
- }
- pyglue_setdict_string(dict, "type", to_python(a.type));
- pyglue_setdict_string(dict, "len", to_python(a.len));
-
return dict;
}
@@ -767,23 +587,6 @@ to_python(const std::vector<Table_stats>& p)
template <>
PyObject*
-to_python(const std::vector<Flow_stats>& p)
-{
- PyObject *list = PyList_New(0);
- for (int i=0; i<p.size(); ++i) {
- PyObject *ts = to_python(p[i]);
- if (PyList_Append(list, ts) < 0) {
- Py_XDECREF(list);
- Py_XDECREF(ts);
- return NULL;
- }
- Py_XDECREF(ts);
- }
- return list;
-}
-
-template <>
-PyObject*
to_python(const std::vector<Port_stats>& p)
{
PyObject *list = PyList_New(0);
8 nox/src/nox/coreapps/pyrt/pyglue.hh
@@ -241,10 +241,6 @@ to_python(const Flow_stats&);
template <>
PyObject*
-to_python(const ofp_action_header&);
-
-template <>
-PyObject*
to_python(const ofp_match& m);
template <>
@@ -257,10 +253,6 @@ to_python(const std::vector<Table_stats>& p);
template <>
PyObject*
-to_python(const std::vector<Flow_stats>& p);
-
-template <>
-PyObject*
to_python(const std::vector<Port_stats>& p);
template <>
13 nox/src/nox/coreapps/pyrt/pyrt.cc
@@ -38,7 +38,6 @@
#include "desc-stats-in.hh"
#include "table-stats-in.hh"
#include "port-stats-in.hh"
-#include "flow-stats-in.hh"
#include "flow-mod-event.hh"
#include "flow-removed.hh"
#include "packet-in.hh"
@@ -212,16 +211,6 @@ static void convert_table_stats_in(const Event& e, PyObject* proxy) {
((Event*)SWIG_Python_GetSwigThis(proxy)->ptr)->operator=(e);
}
-static void convert_flow_stats_in(const Event& e, PyObject* proxy) {
- const Flow_stats_in_event& fsi
- = dynamic_cast<const Flow_stats_in_event&>(e);
-
- pyglue_setattr_string(proxy, "datapath_id", to_python(fsi.datapath_id));
- pyglue_setattr_string(proxy, "flows" , to_python<vector<Flow_stats> >(fsi.flows));
-
- ((Event*)SWIG_Python_GetSwigThis(proxy)->ptr)->operator=(e);
-}
-
static void convert_aggregate_stats_in(const Event& e, PyObject* proxy) {
const Aggregate_stats_in_event& asi
= dynamic_cast<const Aggregate_stats_in_event&>(e);
@@ -465,8 +454,6 @@ PyRt::PyRt(const Context* c,
&convert_bootstrap_complete);
register_event_converter(Table_stats_in_event::static_get_name(),
&convert_table_stats_in);
- register_event_converter(Flow_stats_in_event::static_get_name(),
- &convert_flow_stats_in);
register_event_converter(Port_stats_in_event::static_get_name(),
&convert_port_stats_in);
register_event_converter(Aggregate_stats_in_event::static_get_name(),
14 nox/src/nox/lib/core.py
@@ -198,7 +198,7 @@ def switch_update(self, dpid):
def send_openflow_packet(self, dp_id, packet, actions,
- inport=openflow.OFPP_NONE):
+ inport=openflow.OFPP_CONTROLLER):
"""
sends an openflow packet to a datapath
@@ -221,7 +221,7 @@ def send_openflow_packet(self, dp_id, packet, actions,
raise Exception('Bad argument')
def send_openflow_buffer(self, dp_id, buffer_id, actions,
- inport=openflow.OFPP_NONE):
+ inport=openflow.OFPP_CONTROLLER):
"""
Tells a datapath to send out a buffer
@@ -272,7 +272,7 @@ def send_flow_command(self, dp_id, command, attrs,
# Former PyAPI methods
def send_openflow(self, dp_id, buffer_id, packet, actions,
- inport=openflow.OFPP_NONE):
+ inport=openflow.OFPP_CONTROLLER):
"""
Sends an openflow packet to a datapath.
@@ -529,14 +529,6 @@ def register_for_desc_stats_in(self, handler):
self.register_handler(Desc_stats_in_event.static_get_name(),
gen_ds_in_cb(handler))
- ###############################################################################
- # Minha funcao para fazer o registro do flow_stats_in
- # (Diogo)
- ###############################################################################
- def register_for_flow_stats_in(self, handler):
- self.register_handler(Flow_stats_in_event.static_get_name(),
- gen_fs_in_cb(handler))
-
def register_for_datapath_leave(self, handler):
"""
register a handler to be called on a every datapath_leave
12 nox/src/nox/lib/util.py
@@ -199,18 +199,6 @@ def f(event):
f.cb = handler
return f
-def gen_fs_in_cb(handler):
- def f(event):
- #stats = {}
- #stats['datapath'] = event.datapath_id
- stats = event.flows
- ret = f.cb(event.datapath_id, stats)
- if ret == None:
- return CONTINUE
- return ret
- f.cb = handler
- return f
-
def gen_port_status_cb(handler):
def f(event):
if event.reason == openflow.OFPPR_ADD:
399 nox/src/nox/netapps/switchstats/switchstats.py
@@ -15,10 +15,6 @@
# You should have received a copy of the GNU General Public License
# along with NOX. If not, see <http://www.gnu.org/licenses/>.
-#import redis
-import pymongo
-import time
-from datetime import date
import logging
from nox.lib.core import *
@@ -37,22 +33,12 @@
# Default values for the periodicity of polling for each class of
# statistic
-DEFAULT_POLL_TABLE_PERIOD = 3
-DEFAULT_POLL_PORT_PERIOD = 3
-DEFAULT_POLL_AGGREGATE_PERIOD = 3
-DEFAULT_POLL_FLOW_PERIOD = 3
-DEFAULT_POLL_FILE_PERIOD = 5
+DEFAULT_POLL_TABLE_PERIOD = 5
+DEFAULT_POLL_PORT_PERIOD = 5
+DEFAULT_POLL_AGGREGATE_PERIOD = 5
-#pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
-#r = redis.Redis(connection_pool=pool)
-#r.flushall()
lg = logging.getLogger('switchstats')
-dp_list = []
-from pymongo import Connection
-connection = Connection('localhost', 27017)
-db = connection.test_database
-mongo_stats = {}
## \ingroup noxcomponents
# Collects and maintains switch and port stats for the network.
@@ -71,7 +57,6 @@ class switchstats(Component):
def add_port_listener(self, dpid, port, listener):
self.port_listeners[dpid][port].append(listener)
-
def remove_port_listener(self, dpid, port, listener):
try:
self.port_listeners[dpid][port].remove(listener)
@@ -96,47 +81,20 @@ def __init__(self, ctxt):
self.dp_table_stats = {}
self.dp_desc_stats = {}
self.dp_port_stats = {}
- self.dp_aggr_stats ={}
- self.dp_flow_stats = {}
def port_timer(self, dp):
if dp in self.dp_stats:
- #self.ctxt.send_port_stats_request(dp, openflow.OFPP_NONE)
+ self.ctxt.send_port_stats_request(dp)
self.post_callback(self.dp_poll_period[dp]['port'] + 1, lambda : self.port_timer(dp))
def table_timer(self, dp):
if dp in self.dp_stats:
self.ctxt.send_table_stats_request(dp)
self.post_callback(self.dp_poll_period[dp]['table'], lambda : self.table_timer(dp))
-
- def aggr_timer(self, dp):
- if dp in self.dp_stats:
- match = openflow.ofp_match()
- match.wildcards = 0xffffffff
- self.ctxt.send_aggregate_stats_request(dp, match, 0xff)
- self.post_callback(self.dp_poll_period[dp]['aggr'], lambda : self.aggr_timer(dp))
-
- def flow_timer(self, dp):
- if dp in self.dp_stats:
- match = openflow.ofp_match()
- match.wildcards = 0xffffffff
- self.ctxt.send_flow_stats_request(dp, match, 0xff)
- #lg.debug("Mandei o pedido de flowstats para o dpid " + mac_to_str(dp))
- self.post_callback(self.dp_poll_period[dp]['flow'], lambda : self.flow_timer(dp))
-
- def file_timer(self):
- self.print_stats()
- self.post_callback(DEFAULT_POLL_FILE_PERIOD, lambda : self.file_timer())
def dp_join(self, dp, stats):
- #lg.warn('############### dp_join ###############\n')
- global dp_num
- if dp in dp_list:
- dp_list.remove(dp)
- #lg.warn(dp)
- dp_list.append(dp)
- dp_list.sort()
+
dpid_obj = datapathid.from_host(dp)
stats['dpid'] = dp
self.dp_stats[dp] = stats
@@ -154,31 +112,18 @@ def dp_join(self, dp, stats):
self.dp_poll_period[dp]['table'] = DEFAULT_POLL_TABLE_PERIOD
self.dp_poll_period[dp]['port'] = DEFAULT_POLL_PORT_PERIOD
self.dp_poll_period[dp]['aggr'] = DEFAULT_POLL_AGGREGATE_PERIOD
- self.dp_poll_period[dp]['flow'] = DEFAULT_POLL_FLOW_PERIOD
# Switch descriptions do not change while connected, so just send once
- #self.ctxt.send_desc_stats_request(dp)
+ self.ctxt.send_desc_stats_request(dp)
# stagger timers by one second
self.post_callback(self.dp_poll_period[dp]['table'],
lambda : self.table_timer(dp))
- self.post_callback(self.dp_poll_period[dp]['port'] + 0.5,
+ self.post_callback(self.dp_poll_period[dp]['port'] + 1,
lambda : self.port_timer(dp))
- self.post_callback(self.dp_poll_period[dp]['aggr'] + 1,
- lambda : self.aggr_timer(dp))
- self.post_callback(self.dp_poll_period[dp]['flow'] + 1.5,
- lambda : self.flow_timer(dp))
- self.post_callback(DEFAULT_POLL_FILE_PERIOD,
- lambda : self.file_timer())
-
- mongo_stats[dp] = {}
- mongo_stats[dp]["table"] = db.mongo_stats[dp]["table"]
- mongo_stats[dp]["aggr"] = db.mongo_stats[dp]["aggr"]
- mongo_stats[dp]["desc"] = db.mongo_stats[dp]["desc"]
- mongo_stats[dp]["flow"] = db.mongo_stats[dp]["flow"]
-
- return CONTINUE
+ return CONTINUE
+
def dp_leave(self, dp):
dpid_obj = datapathid.from_host(dp)
@@ -196,10 +141,6 @@ def dp_leave(self, dp):
del self.dp_desc_stats[dp]
if self.dp_port_stats.has_key(dp):
del self.dp_port_stats[dp]
- if self.dp_aggr_stats.has_key(dp):
- del self.dp_aggr_stats[dp]
- if self.dp_flow_stats.has_key(dp):
- del self.dp_flow_stats[dp]
if dp in self.port_listeners:
del self.port_listeners[dp]
@@ -213,204 +154,15 @@ def map_name_to_portno(self, dpid, name):
return None
def table_stats_in_handler(self, dpid, tables):
- #lg.warn('############### table_stats_in_handler ###############\n')
self.dp_table_stats[dpid] = tables
- for i in range(len(tables)):
- #print "table[",i,"]"
-
- doc = {"_id": self.dp_table_stats[dpid][i]['table_id'],
- "name": self.dp_table_stats[dpid][i]['name'],
- "active_count": self.dp_table_stats[dpid][i]['active_count'],
- "lookup_count": self.dp_table_stats[dpid][i]['lookup_count']}
-
- mongo_stats[dpid]["table"].save(doc)
-
-# print "*** Estatisticas com o MongoDB ***"
-# for x in mongo_stats[dpid]["table"].find({"_id":self.dp_table_stats[dpid][i]['table_id']}):
-# print "mongo_table[",dpid,"].table_id = ", x["_id"]
-# print "mongo_table[",dpid,"].name = ", x["name"]
-# print "mongo_table[",dpid,"].active_count = ", x["active_count"]
-# print "mongo_table[",dpid,"].lookup_count = ", x["lookup_count"]
-# print "*****"
-
def desc_stats_in_handler(self, dpid, desc):
- #lg.warn('############### desc_stats_in_handler ###############\n')
self.dp_desc_stats[dpid] = desc
ip = self.ctxt.get_switch_ip(dpid)
self.dp_desc_stats[dpid]["ip"] = str(create_ipaddr(c_htonl(ip)))
- mongo_stats[dpid]["desc"].drop()
-
- doc = {"mfr_desc": self.dp_desc_stats[dpid]['mfr_desc'],
- "hw_desc": self.dp_desc_stats[dpid]['hw_desc'],
- "sw_desc": self.dp_desc_stats[dpid]['sw_desc'],
- "serial_num": self.dp_desc_stats[dpid]['serial_num'],
- "dp_desc": self.dp_desc_stats[dpid]['dp_desc']}
-
- mongo_stats[dpid]["desc"].save(doc)
-
-# print "***** mongodb desc *****"
-
-# for x in mongo_stats[dpid]["desc"].find():
-# print "mongo_desc[",dpid,"].table_id = ", x["mfr_desc"]
-# print "mongo_desc[",dpid,"].name = ", x["hw_desc"]
-# print "mongo_desc[",dpid,"].active_count = ", x["sw_desc"]
-# print "mongo_desc[",dpid,"].lookup_count = ", x["serial_num"]
-# print "mongo_desc[",dpid,"].lookup_count = ", x["dp_desc"]
-# print "*****"
-
-
- def aggr_stats_in_handler(self, dpid, aggr):
- #lg.warn('############### aggr_stats_in_handler ###############\n')
- self.dp_aggr_stats[dpid] = aggr
-
- #r.set('dp_aggr_stats:' + str(dpid) + ':packet_count', self.dp_aggr_stats[dpid]['packet_count'])
- #r.set('dp_aggr_stats:' + str(dpid) + ':byte_count', self.dp_aggr_stats[dpid]['byte_count'])
- #r.set('dp_aggr_stats:' + str(dpid) + ':flow_count', self.dp_aggr_stats[dpid]['flow_count'])
-
- mongo_stats[dpid]["aggr"].drop()
-
- doc = {"packet_count": self.dp_aggr_stats[dpid]['packet_count'],
- "byte_count": self.dp_aggr_stats[dpid]['byte_count'],
- "flow_count": self.dp_aggr_stats[dpid]['flow_count']}
-
- mongo_stats[dpid]["aggr"].save(doc)
-
-
-# print "***** mongodb Aggr *****"
-# for x in mongo_stats[dpid]["aggr"].find():
-# print "mongo_aggr[",dpid,"].packet_count = ", x["packet_count"]
-# print "mongo_aggr[",dpid,"].byte_count = ", x["byte_count"]
-# print "mongo_aggr[",dpid,"].flow_count = ", x["flow_count"]
-# print "*****"
-
- #print "dp_aggr_stats[",dpid,"].packet_count = ", r.get('dp_aggr_stats:' + str(dpid) + ':packet_count')
- #print "dp_aggr_stats[",dpid,"].byte_count = ", r.get('dp_aggr_stats:' + str(dpid) + ':byte_count')
- #print "dp_aggr_stats[",dpid,"].flow_count = ", r.get('dp_aggr_stats:' + str(dpid) + ':flow_count')
-
- def flow_stats_in_handler(self, dpid, flows):
- #lg.warn('############### flow_stats_in_handler ###############\n')
- self.dp_flow_stats[dpid] = flows
- mongo_stats[dpid]["flow"].drop()
- for i in range(len(flows)):
- #print "flow[",i,"]"
-
- #r.hset('dp_flow_stats:' + str(dpid) + ':packet_count', i, self.dp_flow_stats[dpid][i]['packet_count'])
- #r.hset('dp_flow_stats:' + str(dpid) + ':byte_count', i, self.dp_flow_stats[dpid][i]['byte_count'])
-
- match = ""
- k = self.dp_flow_stats[dpid][i]['match']
- if k.has_key('in_port'):
- match = "in_port: " + str(k['in_port'])
- if k.has_key('dl_src'):
- match = "dl_src: " + str(ethernetaddr(k['dl_src']))
-
- elif k.has_key('dl_dst'):
- if k['dl_type'] == 2048:
- match = "ip"
- elif k['dl_type'] == 2054:
- match = "ethernet"
-
- match = match + ", dl_dst: " + str(ethernetaddr(k['dl_dst'])) + "; nw_dst: " + str(ipaddr(k['nw_dst']))
- if k.has_key('nw_dst_n_wild'):
- if k['nw_dst_n_wild'] > 0:
- match = match + "/" + str(32 - k['nw_dst_n_wild'])
-
- elif k.has_key('nw_dst'):
- if k['nw_proto'] == 17:
- match = "udp"
- match = match + "; nw_dst: " + str(ipaddr(k['nw_dst']))
-
- elif k.has_key('nw_dst_mask'):
- match = "nw_dst_mask: " + str(ipaddr(k['nw_dst_mask']))
-
- elif k.has_key('nw_proto'):
- match = ""
- if k['nw_proto'] == 89:
- match = match + "ospf"
- match = match + "; tp_src: " + str(k['tp_src']) + "; tp_dst: " + str(k['tp_dst'])
- if k['nw_proto'] == 6:
- match = match + "tcp"
- match = match + "; tp_dst: " + str(k['tp_dst'])
- if k['nw_proto'] == 1:
- match = match + "icmp"
- elif k.has_key('dl_type'):
- if k['dl_type'] == 2054:
- match = "arp"
- #match = str(k) + match
-
- cont = 0
- actions = []
- actionStr = ""
- for j in self.dp_flow_stats[dpid][i]['actions']:
- del j['len']
- if j['type'] == 0:
- j['type'] = "OUTPUT"
- actions.insert(cont, str(j['type'] + ': ' + 'port ' + str(j['port'])))
- elif j['type'] == 1:
- j['type'] = "SET_VLAN_VID"
- elif j['type'] == 2:
- j['type'] = "SET_VLAN_PCP"
- elif j['type'] == 3:
- j['type'] = "STRIP_VLAN"
- elif j['type'] == 4:
- j['type'] = "SET_DL_SRC"
- actions.insert(cont, str(j['type'] + ': ' + str(ethernetaddr(j['dl_src']))))
- elif j['type'] == 5:
- j['type'] = "SET_DL_DST"
- actions.insert(cont, str(j['type'] + ': ' + str(ethernetaddr(j['dl_dst']))))
- elif j['type'] == 6:
- j['type'] = "SET_NW_SRC"
- elif j['type'] == 7:
- j['type'] = "OFPAT_SET_NW_DST"
- elif j['type'] == 8:
- j['type'] = "OFPAT_SET_NW_TOS"
- elif j['type'] == 9:
- j['type'] = "OFPAT_SET_TP_SRC"
- elif j['type'] == 10:
- j['type'] = "OFPAT_SET_TP_DST"
- else:
- pass
- # print "ACTION TYPE: ",j['type']
-
- cont = cont + 1
-
- for j in range(len(self.dp_flow_stats[dpid][i]['actions'])):
- actionStr = actionStr + actions[j]
- if j != len(self.dp_flow_stats[dpid][i]['actions']):
- actionStr = actionStr + "; "
-
- #r.hset('dp_flow_stats:' + str(dpid) + ':packet_count', i, self.dp_flow_stats[dpid][i]['packet_count'])
- #r.hset('dp_flow_stats:' + str(dpid) + ':byte_count', i, self.dp_flow_stats[dpid][i]['byte_count'])
- #r.hset('dp_flow_stats:' + str(dpid) + ':match', i, match)
- #r.hset('dp_flow_stats:' + str(dpid) + ':actions', i, actionStr)
-
- doc = {"_id": i,
- "packet_count": self.dp_flow_stats[dpid][i]["packet_count"],
- "byte_count": self.dp_flow_stats[dpid][i]["byte_count"],
- "match": match,
- "actions": actionStr}
-
- #mongo_stats[dpid]["flow"].drop()
- mongo_stats[dpid]["flow"].save(doc)
-
- #print "REDIS FLOW_STATS"
- #print "dp_flow_stats[",dpid,"].packet_count = ", r.hget('dp_flow_stats:' + str(dpid) + ':packet_count', i)
- #print "dp_flow_stats[",dpid,"].byte_count = ", r.hget('dp_flow_stats:' + str(dpid) + ':byte_count', i)
- #print "dp_flow_stats[",dpid,"].match = ", r.hget('dp_flow_stats:' + str(dpid) + ':match', i)
- #print "dp_flow_stats[",dpid,"].actions = ", r.hget('dp_flow_stats:' + str(dpid) + ':actions', i)
-
-# print "MONGO FLOW_STATS"
-# for x in mongo_stats[dpid]["flow"].find({"_id": i}):
-# print "mongo_flow[",dpid,"].packet_count = ", x["packet_count"]
-# print "mongo_flow[",dpid,"].byte_count = ", x["byte_count"]
-# print "mongo_flow[",dpid,"].match = ", x["match"]
-# print "mongo_flow[",dpid,"].actions = ", x["actions"]
-# print "*****"
-
-
- """def port_stats_in_handler(self, dpid, ports):
+
+ def port_stats_in_handler(self, dpid, ports):
if dpid not in self.dp_port_stats:
new_ports = {}
for port in ports:
@@ -430,10 +182,9 @@ def flow_stats_in_handler(self, dpid, flows):
# XXX Fire listeners for port stats
self.fire_port_listeners(dpid, port['port_no'], port)
self.dp_port_stats[dpid] = new_ports
- """
+
def port_status_handler(self, dpid, reason, port):
- #lg.warn('############### port_status_in_handler ###############\n')
intdp = int(dpid)
if intdp not in self.dp_stats:
log.err('port status from unknown datapath', system='switchstats')
@@ -481,135 +232,15 @@ def install(self):
self.register_for_datapath_leave(self.dp_leave)
self.register_for_table_stats_in(self.table_stats_in_handler)
- self.register_for_flow_stats_in(self.flow_stats_in_handler)
+
self.register_for_desc_stats_in(self.desc_stats_in_handler)
- self.register_for_aggregate_stats_in(self.aggr_stats_in_handler)
- #self.register_for_port_stats_in(self.port_stats_in_handler)
+
+ self.register_for_port_stats_in(self.port_stats_in_handler)
self.register_for_port_status(self.port_status_handler)
def getInterface(self):
return str(switchstats)
- def print_stats(self):
- pass
- #if dpid == 5:
- f = open("../../../rfweb/data/switchstats.json", "w")
- f.write('{ "nodes": [\n')
- for i in range(len(dp_list)):
- f.write('\t{\n')
- value = '\t\t"id": "' + str(dp_list[i]) + '",\n'
- f.write(value)
-
- value = '\t\t"name": "switch' + str(dp_list[i]) + '",\n'
- f.write(value)
-
- value = '\t\t"data": {\n'
- f.write(value)
-
- dp_id = "dp" + str(i+1)
- value = '\t\t\t"$dp_id": "' + str(dp_id) + '",\n'
- f.write(value)
-
- # IMPRIMINDO FLOW_STATS
- f.write('\t\t\t"$flows": [\n')
- if self.dp_flow_stats:
-
-
-
- #for j in range(len(self.dp_flow_stats[dp_list[i]])):
- # f.write('\t\t\t{\n')
-#
-# value = '\t\t\t\t"flow": "' + str(j) + '",\n'
- # f.write(value)
- # value = '\t\t\t\t"ofp_match": "' + r.hget('dp_flow_stats:' + str(dp_list[i]) + ':match', j) + '",\n'
- # f.write(value)
-
- # value = '\t\t\t\t"ofp_actions": "' + r.hget('dp_flow_stats:' + str(dp_list[i]) + ':actions', j) + '",\n'
-
-# f.write(value)
- # value = '\t\t\t\t"packet_count": "' + r.hget('dp_flow_stats:' + str(dp_list[i]) + ':packet_count', j) + '",\n'
- # f.write(value)
- # value = '\t\t\t\t"byte_count": "' + r.hget('dp_flow_stats:' + str(dp_list[i]) + ':byte_count', j) + '"\n'
- # f.write(value)
-# if j < (len(self.dp_flow_stats[dp_list[i]])-1):
-# f.write('\t\t\t},\n')
-# else:
-# f.write('\t\t\t}\n')
-
-
- l = mongo_stats[dp_list[i]]["flow"].find().explain()["nscanned"]
- j = 0
- for x in mongo_stats[dp_list[i]]["flow"].find():
- f.write('\t\t\t{\n')
-
- value = '\t\t\t\t"flow": "' + str(j) + '",\n'
- f.write(value)
- value = '\t\t\t\t"ofp_match": "' + x["match"] + '",\n'
- f.write(value)
- value = '\t\t\t\t"ofp_actions": "' + x["actions"] + '",\n'
- f.write(value)
- value = '\t\t\t\t"packet_count": "' + str(x["packet_count"]) + '",\n'
- f.write(value)
- value = '\t\t\t\t"byte_count": "' + str(x["byte_count"]) + '"\n'
- f.write(value)
- if j < (l-1):
- f.write('\t\t\t},\n')
- else:
- f.write('\t\t\t}\n')
- j = j + 1
-
- f.write('\t\t\t],\n')
-
- # IMPRIMINDO DESC_STATS
- x = mongo_stats[dp_list[i]]["desc"].find_one()
- value = '\t\t\t"$ofp_desc_stats":["' + str(x["mfr_desc"]) + '", '
- value = value + '"' + str(x["hw_desc"]) + '", '
- value = value + '"' + str(x["sw_desc"]) + '", '
- value = value + '"' + str(x["serial_num"]) + '", '
- value = value + '"' + str(x["dp_desc"]) + '"],\n'
-
- f.write(value)
-
- # IMPRIMINDO AGGREGATE STATS
- x = mongo_stats[dp_list[i]]["aggr"].find_one()
- if (x is not None):
- value = '\t\t\t"$ofp_aggr_stats":["' + str(x["packet_count"]) + '", '
- value = value + '"' + str(x["byte_count"]) + '", '
- value = value + '"' + str(x["flow_count"]) + '"],\n'
-
- #value = '\t\t\t"$ofp_aggr_stats":["' + r.get('dp_aggr_stats:' + str(dp_list[i]) + ':packet_count') + '", '
- #value = value + '"' + r.get('dp_aggr_stats:' + str(dp_list[i]) + ':byte_count') + '", '
- #value = value + '"' + r.get('dp_aggr_stats:' + str(dp_list[i]) + ':flow_count') + '"],\n'
-
- f.write(value)
-
- # IMPRIMINDO TABLE_STATS
- value = '\t\t\t"$ofp_table_stats":['
- l = mongo_stats[dp_list[i]]["table"].find().explain()["nscanned"]
- j = 0
- for x in mongo_stats[dp_list[i]]["table"].find():
- value = value + '['
- value = value + '"' + str(x["_id"]) + '", '
- value = value + '"' + str(x["name"]) + '", '
- value = value + '"' + str(x["active_count"]) + '", '
- value = value + '"' + str(x["lookup_count"])
-
- if j < l-1:
- value = value + '"],'
- else:
- value = value + '"]]\n'
- j = j + 1
-
- f.write(value)
-
- f.write('\t\t}\n')
- if i < (len(dp_list)-1):
- f.write('\t},\n')
- else:
- f.write('\t}\n')
-
- f.write(']}')
- f.close()
def getFactory():
class Factory:
199 nox/src/nox/netapps/topology/topology.cc
@@ -16,10 +16,6 @@
* along with NOX. If not, see <http://www.gnu.org/licenses/>.
*/
#include "topology.hh"
-#include <fstream>
-#include <iostream>
-#include <algorithm>
-#include <time.h>
#include <boost/bind.hpp>
#include <inttypes.h>
@@ -30,9 +26,6 @@
#include "port-status.hh"
#include "vlog.hh"
-//#include "client/dbclient.h"
-//#include "mongo.h"
-
using namespace std;
using namespace vigil;
using namespace vigil::applications;
@@ -43,10 +36,6 @@ namespace applications {
static Vlog_module lg("topology");
-//std::list<int> latitudeList;
-hash_map<datapathid, int> latitudeList;
-hash_map<datapathid, int> longitudeList;
-
Topology::Topology(const Context* c,
const json_object*)
: Component(c)
@@ -69,7 +58,6 @@ Topology::getInstance(const container::Context* ctxt, Topology*& t)
t = dynamic_cast<Topology*>
(ctxt->get_by_interface(container::Interface_description
(typeid(Topology).name())));
- srand ( time(NULL) );
}
void
@@ -159,14 +147,6 @@ Topology::handle_datapath_join(const Event& e)
nlm_iter->second.active = true;
nlm_iter->second.ports = dj.ports;
-
- int latitude = rand() % 10 - 26;
- int longitude = rand() % 10 - 50;
- //int latitude = 10;
- if(latitudeList.find(dj.datapath_id) == latitudeList.end()) {
- latitudeList[dj.datapath_id] = latitude;
- longitudeList[dj.datapath_id] = longitude;
- }
return CONTINUE;
}
@@ -189,7 +169,6 @@ Topology::handle_datapath_leave(const Event& e)
VLOG_ERR(lg, "Received datapath_leave for non-existing dp %"PRIx64".",
dl.datapath_id.as_host());
}
- //latitudeList.erase(dl.datapath_id);
return CONTINUE;
}
@@ -267,7 +246,7 @@ Topology::delete_port(const datapathid& dp, const Port& port)
Disposition
Topology::handle_link_event(const Event& e)
{
-
+
const Link_event& le = assert_cast<const Link_event&>(e);
if (le.action == Link_event::ADD) {
add_link(le);
@@ -305,8 +284,6 @@ Topology::add_link(const Link_event& le)
LinkPorts lp = { le.sport, le.dport };
dlm_iter->second.push_back(lp);
add_internal(le.dpdst, le.dport);
-
- printNetworkLinkMap(le);
}
@@ -342,16 +319,12 @@ Topology::remove_link(const Link_event& le)
topology.erase(nlm_iter);
}
}
-
- printNetworkLinkMap(le);
return;
}
}
lg.err("Remove link event for non-existing link %"PRIx64":%hu --> %"PRIx64":%hu",
le.dpsrc.as_host(), le.sport, le.dpdst.as_host(), le.dport);
-
- //printNetworkLinkMap(le);
}
@@ -406,176 +379,6 @@ Topology::remove_internal(const datapathid& dp, uint16_t port)
}
}
-
-/*
- Uma maneira aparentemente bem mais simples eh passar link_event (le) como parametro para remove_internal e add_internal
- e repassar para printNetworkLinkMap. printNetworkLinkMap imprime le.dpsrc, le.dpdst, le.sport e le.dport.
- Esse metodo nao foi testado.
-*/
-void
-Topology::printNetworkLinkMap(const Link_event& le) {
- NetworkLinkMap::iterator nlm_iter = topology.find(le.dpsrc);
-
-// std::cout << "######################\n";
-// std::cout << "Enlace adicionado\n";
-// std::cout << "######################\n";
-// std::cout << le.dpsrc.string() << ":" << le.sport << " -> "<< le.dpdst.string() << ":" << le.dport << "\n";
-
- nlm_iter = topology.begin();
- LinkSet ls;
-// std::cout << "######################\n";
-// std::cout << "Topologia\n";
-// std::cout << "######################\n";
- while(nlm_iter != topology.end()) {
-
- DatapathLinkMap::iterator dplm_iter = nlm_iter->second.outlinks.begin();
- while(dplm_iter != nlm_iter->second.outlinks.end()) {
-
- ls = dplm_iter->second;
- LinkSet::iterator ls_iter = ls.begin();
- while(ls_iter != ls.end()) {
-// std::cout << nlm_iter->first.string() << ":" << ls_iter->src << " -> " << dplm_iter->first.string() <<":" << ls_iter->dst <<"\n";
- ls_iter++;
- }
- dplm_iter++;
- }
- nlm_iter++;
- }
-
-
- hash_map<datapathid, int>::iterator it;
-// for(it=latitudeList.begin(); it!=latitudeList.end(); ++it) {
-// std::cout << "datapath: " << it->first.string() << "; latitude :" << it->second << "\n";
-// }
-
- ofstream file("../../../rfweb/data/topology.json");
- if (file.is_open())
- {
- file << "{ \"nodes\": [\n";
- file << "\t{\n";
- file << "\t\t\"id\": \"rf-server\",\n";
- file << "\t\t\"name\": \"RouteFlow Server\",\n";
- file << "\t\t\"adjacencies\": [\n";
- file << "\t\t\t{\n";
- file << "\t\t\t\t\"nodeTo\": \"controller\",\n";
- file << "\t\t\t\t\"nodeFrom\": \"rf-server\",\n";
- file << "\t\t\t\t\"data\": {\n";
- file << "\t\t\t\t\t\"$color\": \"#145D80\"\n";
- file << "\t\t\t\t}\n";
- file << "\t\t\t}\n";
- file << "\t\t],\n";
- file << "\t\t\"data\": {\n";
- file << "\t\t\t\"$type\": \"rf-server\",\n";
- file << "\t\t\t\"timer\": " << time(NULL) <<",\n";
- file << "\t\t\t\"$dim\": 20,\n";
- file << "\t\t\t\"$x\": -50,\n";
- file << "\t\t\t\"$y\": -170,\n";
- file << "\t\t\t\"latitude\": -21.950884,\n";
- file << "\t\t\t\"longitude\": -43.775578\n";
- file << "\t\t}\n";
- file << "\t},\n";
- file << "\t{\n";
- file << "\t\t\"id\": \"controller\",\n";
- file << "\t\t\"name\": \"Controller\",\n";
- file << "\t\t\"adjacencies\": [],\n";
- file << "\t\t\"Label\": {\n";
- file << "\t\t\t\"$color\": \"#\"\n";
- file << "\t\t},\n";
- file << "\t\t\"data\": {\n";
- file << "\t\t\t\"$type\": \"controller\",\n";
- file << "\t\t\t\"timer\": " << time(NULL) <<",\n";
- file << "\t\t\t\"$dim\": 25,\n";
- file << "\t\t\t\"$x\": 100,\n";
- file << "\t\t\t\"$y\": -170,\n";
- file << "\t\t\t\"latitude\": -22.002466,\n";
- file << "\t\t\t\"longitude\": -46.062749\n";
- file << "\t\t}\n";
- file << "\t}";
-
- NetworkLinkMap::iterator nlm_iter = topology.begin();
- for(int i = 0; i<topology.size(); i++) {
- int aux = nlm_iter->first.string().find_first_not_of("0");
- if(aux<9) {
- nlm_iter++;
- continue;
- }
- file << ",\n";
- file << "\t{\n";
- stringstream convert (nlm_iter->first.string().erase(0, aux));
- int d;
- convert >> std::hex >> d;
- file << "\t\t\"id\": \"" << dec << d << "\",\n";
- file << "\t\t\"name\": \"switch" << dec << d << "\",\n";
- file << "\t\t\"adjacencies\": [\n";
- file << "\t\t\t{\n";
- file << "\t\t\t\t\"nodeTo\": \"controller\",\n";
- file << "\t\t\t\t\"nodeFrom\": \"" << dec << d << "\",\n";
- file << "\t\t\t\t\"data\": {\n";
- file << "\t\t\t\t\t\"$color\": \"#145D80\"\n";
- file << "\t\t\t\t}\n";
- file << "\t\t\t}";
- if (nlm_iter == topology.end()) {
- file << "************ ERROR ****************\n";
- return;
-
- } else {
- DatapathLinkMap::iterator dplm_iter = nlm_iter->second.outlinks.begin();
- if(dplm_iter != nlm_iter->second.outlinks.end())
- file << ",\n";
- else
- file << "\n";
-
- while(dplm_iter != nlm_iter->second.outlinks.end()) {
- file << "\t\t\t{\n";
- int aux2 = dplm_iter->first.string().find_first_not_of("0");
-
- stringstream convert1 (nlm_iter->first.string().erase(0, aux));
- int d1;
- convert1 >> std::hex >> d1;
-
- stringstream convert2 (dplm_iter->first.string().erase(0, aux2));
- int d2;
- convert2 >> std::hex >> d2;
-
- file << "\t\t\t\t\"nodeTo\": \"" << dec << d2 << "\",\n";
- file << "\t\t\t\t\"nodeFrom\": \"" << dec << d1 << "\",\n";
- file << "\t\t\t\t\"data\": {\n";
- file << "\t\t\t\t\t\"$color\": \"#010101\"\n";
- file << "\t\t\t\t}\n";
- dplm_iter++;
- file << "\t\t\t}";
- if(dplm_iter != nlm_iter->second.outlinks.end())
- file << ",\n";
- else
- file << "\n";
- }
- }
-
- file << "\t\t],\n";
- file << "\t\t\"data\": {\n";
- file << "\t\t\t\"$type\": \"of-switch\",\n";
- file << "\t\t\t\"timer\": " << time(NULL) <<",\n";
- file << "\t\t\t\"$dim\": 20,\n";
- //int latitude = rand() % 10 - 23;
- //int longitude = rand() % 10 - 48;
- //file << "\t\t\t\"latitude\": -23.538609,\n";
- //file << "\t\t\t\"longitude\": -46.682607\n";
- file << "\t\t\t\"latitude\": " << latitudeList[nlm_iter->first] << ",\n";
- file << "\t\t\t\"longitude\": " << longitudeList[nlm_iter->first] << "\n";
- file << "\t\t}\n";
- file << "\t}";
- nlm_iter++;
- }
- file << "]}";
-
- file.close();
-
- }
- else std::cout << "Unable to open topology file";
-
}
-
-} //end namespace vigil
-
REGISTER_COMPONENT(container::Simple_component_factory<Topology>, Topology);
2 nox/src/nox/netapps/topology/topology.hh
@@ -139,8 +139,6 @@ private:
/** \brief Remove internal port
*/
void remove_internal(const datapathid&, uint16_t);
-
- void printNetworkLinkMap(const Link_event&);
};
} // namespace applications
384 nox/src/nox/netapps/topology/topologyOld.cc
@@ -1,384 +0,0 @@
-/* Copyright 2008 (C) Nicira, Inc.
- *
- * This file is part of NOX.
- *
- * NOX is free software: you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 3 of the License, or
- * (at your option) any later version.
- *
- * NOX is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with NOX. If not, see <http://www.gnu.org/licenses/>.
- */
-#include "topology.hh"
-
-#include <boost/bind.hpp>
-#include <inttypes.h>
-
-#include "assert.hh"
-#include "datapath-join.hh"
-#include "datapath-leave.hh"
-#include "port-status.hh"
-#include "vlog.hh"
-
-using namespace std;
-using namespace vigil;
-using namespace vigil::applications;
-using namespace vigil::container;
-
-namespace vigil {
-namespace applications {
-
-static Vlog_module lg("topology");
-
-Topology::Topology(const Context* c,
- const json_object*)
- : Component(c)
-{
- empty_dp.active = false;
-
- // For bebugging
- // Link_event le;
- // le.action = Link_event::ADD;
- // le.dpsrc = datapathid::from_host(0);
- // le.dpdst = datapathid::from_host(1);
- // le.sport = 0;
- // le.dport = 0;
- // add_link(le);
-}
-
-void
-Topology::getInstance(const container::Context* ctxt, Topology*& t)
-{
- t = dynamic_cast<Topology*>
- (ctxt->get_by_interface(container::Interface_description
- (typeid(Topology).name())));
-}
-
-void
-Topology::configure(const Configuration*)
-{
- register_handler<Link_event>
- (boost::bind(&Topology::handle_link_event, this, _1));
- register_handler<Datapath_join_event>
- (boost::bind(&Topology::handle_datapath_join, this, _1));
- register_handler<Datapath_leave_event>
- (boost::bind(&Topology::handle_datapath_leave, this, _1));
- register_handler<Port_status_event>
- (boost::bind(&Topology::handle_port_status, this, _1));
-}
-
-void
-Topology::install()
-{}
-
-const Topology::DpInfo&
-Topology::get_dpinfo(const datapathid& dp) const
-{
- NetworkLinkMap::const_iterator nlm_iter = topology.find(dp);
-
- if (nlm_iter == topology.end()) {
- return empty_dp;
- }
-
- return nlm_iter->second;
-}
-
-const Topology::DatapathLinkMap&
-Topology::get_outlinks(const datapathid& dpsrc) const
-{
- NetworkLinkMap::const_iterator nlm_iter = topology.find(dpsrc);
-
- if (nlm_iter == topology.end()) {
- return empty_dp.outlinks;
- }
-
- return nlm_iter->second.outlinks;
-}
-
-
-const Topology::LinkSet&
-Topology::get_outlinks(const datapathid& dpsrc, const datapathid& dpdst) const
-{
- NetworkLinkMap::const_iterator nlm_iter = topology.find(dpsrc);
-
- if (nlm_iter == topology.end()) {
- return empty_link_set;
- }
-
- DatapathLinkMap::const_iterator dlm_iter = nlm_iter->second.outlinks.find(dpdst);
- if (dlm_iter == nlm_iter->second.outlinks.end()) {
- return empty_link_set;
- }
-
- return dlm_iter->second;
-}
-
-
-bool
-Topology::is_internal(const datapathid& dp, uint16_t port) const
-{
- NetworkLinkMap::const_iterator nlm_iter = topology.find(dp);
-
- if (nlm_iter == topology.end()) {
- return false;
- }
-
- PortMap::const_iterator pm_iter = nlm_iter->second.internal.find(port);
- return (pm_iter != nlm_iter->second.internal.end());
-}
-
-
-Disposition
-Topology::handle_datapath_join(const Event& e)
-{
- const Datapath_join_event& dj = assert_cast<const Datapath_join_event&>(e);
- NetworkLinkMap::iterator nlm_iter = topology.find(dj.datapath_id);
-
- if (nlm_iter == topology.end()) {
- nlm_iter = topology.insert(std::make_pair(dj.datapath_id,
- DpInfo())).first;
- }
-
- nlm_iter->second.active = true;
- nlm_iter->second.ports = dj.ports;
- return CONTINUE;
-}
-
-Disposition
-Topology::handle_datapath_leave(const Event& e)
-{
- const Datapath_leave_event& dl = assert_cast<const Datapath_leave_event&>(e);
- NetworkLinkMap::iterator nlm_iter = topology.find(dl.datapath_id);
-
- if (nlm_iter != topology.end()) {
- if (!(nlm_iter->second.internal.empty()
- && nlm_iter->second.outlinks.empty()))
- {
- nlm_iter->second.active = false;
- nlm_iter->second.ports.clear();
- } else {
- topology.erase(nlm_iter);
- }
- } else {
- VLOG_ERR(lg, "Received datapath_leave for non-existing dp %"PRIx64".",
- dl.datapath_id.as_host());
- }
- return CONTINUE;
-}
-
-Disposition
-Topology::handle_port_status(const Event& e)
-{
- const Port_status_event& ps = assert_cast<const Port_status_event&>(e);
-
- if (ps.reason == OFPPR_DELETE) {
- delete_port(ps.datapath_id, ps.port);
- } else {
- add_port(ps.datapath_id, ps.port, ps.reason != OFPPR_ADD);
- }
-
- return CONTINUE;
-}
-
-void
-Topology::add_port(const datapathid& dp, const Port& port, bool mod)
-{
- NetworkLinkMap::iterator nlm_iter = topology.find(dp);
- if (nlm_iter == topology.end()) {
- VLOG_WARN(lg, "Add/mod port %"PRIu16" to unknown datapath %"PRIx64" - adding default entry.",
- port.port_no, dp.as_host());
- nlm_iter = topology.insert(std::make_pair(dp,
- DpInfo())).first;
- nlm_iter->second.active = false;
- nlm_iter->second.ports.push_back(port);
- return;
- }
-
- for (std::vector<Port>::iterator p_iter = nlm_iter->second.ports.begin();
- p_iter != nlm_iter->second.ports.end(); ++p_iter)
- {
- if (p_iter->port_no == port.port_no) {
- if (!mod) {
- VLOG_DBG(lg, "Add known port %"PRIu16" on datapath %"PRIx64" - modifying port.",
- port.port_no, dp.as_host());
- }
- *p_iter = port;
- return;
- }
- }
-
- if (mod) {
- VLOG_DBG(lg, "Mod unknown port %"PRIu16" on datapath %"PRIx64" - adding port.",
- port.port_no, dp.as_host());
- }
- nlm_iter->second.ports.push_back(port);
-}
-
-void
-Topology::delete_port(const datapathid& dp, const Port& port)
-{
- NetworkLinkMap::iterator nlm_iter = topology.find(dp);
- if (nlm_iter == topology.end()) {
- VLOG_ERR(lg, "Delete port from unknown datapath %"PRIx64".",
- dp.as_host());
- return;
- }
-
- for (std::vector<Port>::iterator p_iter = nlm_iter->second.ports.begin();
- p_iter != nlm_iter->second.ports.end(); ++p_iter)
- {
- if (p_iter->port_no == port.port_no) {
- nlm_iter->second.ports.erase(p_iter);
- return;
- }
- }
-
- VLOG_ERR(lg, "Delete unknown port %"PRIu16" from datapath %"PRIx64".",
- port.port_no, dp.as_host());
-}
-
-Disposition
-Topology::handle_link_event(const Event& e)
-{
-
- const Link_event& le = assert_cast<const Link_event&>(e);
- if (le.action == Link_event::ADD) {
- add_link(le);
- } else if (le.action == Link_event::REMOVE) {
- remove_link(le);
- } else {
- lg.err("unknown link action %u", le.action);
- }
-
- return CONTINUE;
-}
-
-
-void
-Topology::add_link(const Link_event& le)
-{
- NetworkLinkMap::iterator nlm_iter = topology.find(le.dpsrc);
- DatapathLinkMap::iterator dlm_iter;
- if (nlm_iter == topology.end()) {
- VLOG_WARN(lg, "Add link to unknown datapath %"PRIx64" - adding default entry.",
- le.dpsrc.as_host());
- nlm_iter = topology.insert(std::make_pair(le.dpsrc,
- DpInfo())).first;
- nlm_iter->second.active = false;
- dlm_iter = nlm_iter->second.outlinks.insert(std::make_pair(le.dpdst,
- LinkSet())).first;
- } else {
- dlm_iter = nlm_iter->second.outlinks.find(le.dpdst);
- if (dlm_iter == nlm_iter->second.outlinks.end()) {
- dlm_iter = nlm_iter->second.outlinks.insert(std::make_pair(le.dpdst,
- LinkSet())).first;
- }
- }
-
- LinkPorts lp = { le.sport, le.dport };
- dlm_iter->second.push_back(lp);
- add_internal(le.dpdst, le.dport);
-}
-
-
-void
-Topology::remove_link(const Link_event& le)
-{
- NetworkLinkMap::iterator nlm_iter = topology.find(le.dpsrc);
- if (nlm_iter == topology.end()) {
- lg.err("Remove link event for non-existing link %"PRIx64":%hu --> %"PRIx64":%hu (src dp)",
- le.dpsrc.as_host(), le.sport, le.dpdst.as_host(), le.dport);
- return;
- }
-
- DatapathLinkMap::iterator dlm_iter = nlm_iter->second.outlinks.find(le.dpdst);
- if (dlm_iter == nlm_iter->second.outlinks.end()) {
- lg.err("Remove link event for non-existing link %"PRIx64":%hu --> %"PRIx64":%hu (dst dp)",
- le.dpsrc.as_host(), le.sport, le.dpdst.as_host(), le.dport);
- return;
- }
-
- for (LinkSet::iterator ls_iter = dlm_iter->second.begin();
- ls_iter != dlm_iter->second.end(); ++ls_iter)
- {
- if (ls_iter->src == le.sport && ls_iter->dst == le.dport) {
- dlm_iter->second.erase(ls_iter);
- remove_internal(le.dpdst, le.dport);
- if (dlm_iter->second.empty()) {
- nlm_iter->second.outlinks.erase(dlm_iter);
- if (!nlm_iter->second.active && nlm_iter->second.ports.empty()
- && nlm_iter->second.internal.empty()
- && nlm_iter->second.outlinks.empty())
- {
- topology.erase(nlm_iter);
- }
- }
- return;
- }
- }
-
- lg.err("Remove link event for non-existing link %"PRIx64":%hu --> %"PRIx64":%hu",
- le.dpsrc.as_host(), le.sport, le.dpdst.as_host(), le.dport);
-}
-
-
-void
-Topology::add_internal(const datapathid& dp, uint16_t port)
-{
- NetworkLinkMap::iterator nlm_iter = topology.find(dp);
- if (nlm_iter == topology.end()) {
- VLOG_WARN(lg, "Add internal to unknown datapath %"PRIx64" - adding default entry.",
- dp.as_host());
- DpInfo& di = topology[dp] = DpInfo();
- di.active = false;
- di.internal.insert(std::make_pair(port, std::make_pair(port, 1)));
- return;
- }
-
- PortMap::iterator pm_iter = nlm_iter->second.internal.find(port);
- if (pm_iter == nlm_iter->second.internal.end()) {
- nlm_iter->second.internal.insert(
- std::make_pair(port, std::make_pair(port, 1)));
- } else {
- ++(pm_iter->second.second);
- }
-}
-
-
-void
-Topology::remove_internal(const datapathid& dp, uint16_t port)
-{
- NetworkLinkMap::iterator nlm_iter = topology.find(dp);
- if (nlm_iter == topology.end()) {
- lg.err("Remove internal for non-existing dp %"PRIx64":%hu",
- dp.as_host(), port);
- return;
- }
-
- PortMap::iterator pm_iter = nlm_iter->second.internal.find(port);
- if (pm_iter == nlm_iter->second.internal.end()) {
- lg.err("Remove internal for non-existing ap %"PRIx64":%hu.",
- dp.as_host(), port);
- } else {
- if (--(pm_iter->second.second) == 0) {
- nlm_iter->second.internal.erase(pm_iter);
- if (!nlm_iter->second.active && nlm_iter->second.ports.empty()
- && nlm_iter->second.internal.empty()
- && nlm_iter->second.outlinks.empty())
- {
- topology.erase(nlm_iter);
- }
- }
- }
-}
-
-}
-}
-
-REGISTER_COMPONENT(container::Simple_component_factory<Topology>, Topology);
147 nox/src/nox/netapps/topology/topologyOld.hh
@@ -1,147 +0,0 @@
-/* Copyright 2008 (C) Nicira, Inc.
- *
- * This file is part of NOX.
- *
- * NOX is free software: you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 3 of the License, or
- * (at your option) any later version.
- *
- * NOX is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with NOX. If not, see <http://www.gnu.org/licenses/>.
- */
-#ifndef TOPOLOGY_HH
-#define TOPOLOGY_HH 1
-
-#include <list>
-
-#include "component.hh"
-#include "hash_map.hh"
-#include "discovery/link-event.hh"
-#include "netinet++/datapathid.hh"
-#include "port.hh"
-
-namespace vigil {
-namespace applications {
-
-/** \ingroup noxcomponents
- *
- * \brief The current network topology
- */
-class Topology
- : public container::Component {
-
-public:
- /** \brief Structure to hold source and destination port
- */
- struct LinkPorts {
- uint16_t src;
- uint16_t dst;
- };
-
- typedef std::vector<Port> PortVector;
- typedef hash_map<uint16_t, std::pair<uint16_t, uint32_t> > PortMap;
- typedef std::list<LinkPorts> LinkSet;
- typedef hash_map<datapathid, LinkSet> DatapathLinkMap;
-
- /** \brief Structure to hold information about datapath
- */
- struct DpInfo {
- /** \brief List of ports for datapath
- */
- PortVector ports;
- /** \brief Map of internal ports (indexed by port)
- */
- PortMap internal;
- /** \brief Map of outgoing links
- * (indexed by datapath id of switch on the other end)
- */
- DatapathLinkMap outlinks;
- /** \brief Indicate if datapath is active
- */
- bool active;
- };
-
- /** \brief Constructor
- */
- Topology(const container::Context*, const json_object*);
-
- /** \brief Get instance of component
- */
- static void getInstance(const container::Context*, Topology*&);
-
- /** \brief Configure components
- */
- void configure(const container::Configuration*);
-
- /** \brief Install components
- */
- void install();
-
- /** \brief Get information about datapath
- */
- const DpInfo& get_dpinfo(const datapathid& dp) const;
- /** \brief Get outgoing links of datapath
- */
- const DatapathLinkMap& get_outlinks(const datapathid& dpsrc) const;
- /** \brief Get links between two datapaths
- */
- const LinkSet& get_outlinks(const datapathid& dpsrc, const datapathid& dpdst) const;
- /** \brief Check if link is internal (i.e., between switches)
- */
- bool is_internal(const datapathid& dp, uint16_t port) const;
-
-private:
- /** \brief Map of information index by datapath id
- */
- typedef hash_map<datapathid, DpInfo> NetworkLinkMap;
- NetworkLinkMap topology;
- DpInfo empty_dp;
- LinkSet empty_link_set;
-
- //Topology() { }
-
- /** \brief Handle datapath join
- */
- Disposition handle_datapath_join(const Event&);
- /** \brief Handle datapath leave
- */
- Disposition handle_datapath_leave(const Event&);
- /** \brief Handle port status changes
- */
- Disposition handle_port_status(const Event&);
- /** \brief Handle link changes
- */
- Disposition handle_link_event(const Event&);
-
- /** \brief Add new port
- */
- void add_port(const datapathid&, const Port&, bool);
- /** \brief Delete port
- */
- void delete_port(const datapathid&, const Port&);
-
- /** \brief Add new link
- */
- void add_link(const Link_event&);
- /** \brief Delete link
- */
- void remove_link(const Link_event&);
-
- /** \brief Add new internal port
- */
- void add_internal(const datapathid&, uint16_t);
- /** \brief Remove internal port
- */
- void remove_internal(const datapathid&, uint16_t);
-};
-
-} // namespace applications
-} // namespace vigil
-
-#endif
View
2 pox/ext/README
@@ -1,2 +0,0 @@
-This directory is not under version control, so it makes a good place to
-put your own projects.
View
152 pox/ext/rfproxy.py
@@ -12,8 +12,8 @@
conn = mongo.Connection()
rftable = conn[MONGO_DB_NAME][RF_TABLE_NAME]
-
-
+
+
class DatapathJoin(MongoIPC.MongoIPCMessage):
def __init__(self, **kwargs):
MongoIPC.MongoIPCMessage.__init__(self, DATAPATH_JOIN, **kwargs)
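The hunk above shows the message pattern rfproxy uses for its IPC: each message type subclasses MongoIPC.MongoIPCMessage and hands its type constant to the base constructor, with the payload passed as keyword arguments. A hypothetical further message type would follow the same shape (PortRegister and PORT_REGISTER are illustrative names only, not part of this commit; MongoIPC is the module already imported by rfproxy.py):

    class PortRegister(MongoIPC.MongoIPCMessage):
        # Illustrative only: same constructor pattern as DatapathJoin above.
        def __init__(self, **kwargs):
            MongoIPC.MongoIPCMessage.__init__(self, PORT_REGISTER, **kwargs)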
@@ -50,6 +50,15 @@ def str(self):
string += " " + MongoIPC.MongoIPCMessage.str(self).replace("\n", "\n ")
return string
+def send_ofmsg(dp_id, ofmsg, success, fail):
+ topology = core.components['topology']
+ switch = topology.getEntityByID(dp_id)
+ if switch is not None and switch.connected:
+ switch.send(ofmsg)
+ log.info(success)
+ else:
+ log.debug(fail)
+
class RFProcessor:
def process(self, from_, to, channel, msg):
topology = core.components['topology']
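The send_ofmsg helper added in the hunk above centralizes the topology lookup and the connection check before any OpenFlow message is pushed to a switch; the old code called switch.send() directly, which could fail when the datapath was unknown or had already disconnected. A minimal call-site sketch (dp_id and ofmsg are assumed to come from the IPC message being handled, exactly as in the hunks that follow):

    # Guard every send with the connection check instead of calling switch.send() directly.
    send_ofmsg(dp_id, ofmsg,
               "Flow modification (config) sent to datapath %i" % dp_id,
               "Switch is disconnected, cannot send flow modification (config)")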
@@ -59,34 +68,44 @@ def process(self, from_, to, channel, msg):
operation_id = int(msg["operation_id"])
ofmsg = create_config_msg(operation_id)
- switch = topology.getEntityByID(dp_id)
- switch.send(ofmsg)
+ send_ofmsg(dp_id, ofmsg,
+ "Flow modification (config) sent to datapath %i" % dp_id,
+ "Switch is disconnected, cannot send flow modification (config)")
elif type_ == FLOW_MOD:
- print "Installing flow mod to datapath %s" % msg["dp_id"]
dp_id = int(msg["dp_id"])
-
+
netmask = str(msg["netmask"])
# TODO: fix this. It was just to make it work...
netmask = netmask.count("255")*8
-
+
address = str(msg["address"]) + "/" + str(netmask)
-
+
src_hwaddress = str(msg["src_hwaddress"])
dst_hwaddress = str(msg["dst_hwaddress"])
dst_port = int(msg["dst_port"])
-
- ofmsg = create_flow_install_msg(address, netmask, src_hwaddress, dst_hwaddress, dst_port)
-
- switch = topology.getEntityByID(dp_id)
- switch.send(ofmsg)
-
+
+ if (not msg["is_removal"]):
+ ofmsg = create_flow_install_msg(address, netmask, src_hwaddress, dst_hwaddress, dst_port)
+ send_ofmsg(dp_id, ofmsg,
+ "Flow modification (install) sent to datapath %i" % dp_id,
+ "Switch is disconnected, cannot send flow modification (install)")
+ else:
+ ofmsg = create_flow_remove_msg(address, netmask, src_hwaddress)
+ send_ofmsg(dp_id, ofmsg,
+ "Flow modification (removal) sent to datapath %i" % dp_id,
+ "Switch is disconnected, cannot send flow modification (removal)")
+ ofmsg = create_temporary_flow_msg(address, netmask, src_hwaddress)
+ send_ofmsg(dp_id, ofmsg,
+ "Flow modification (temporary) sent to datapath %i" % dp_id,
+ "Switch is disconnected, cannot send flow modification (temporary)")
+
class RFFactory(IPC.IPCMessageFactory):
def build_for_type(self, type_):
if type_ == DATAPATH_CONFIG:
return DatapathConfig()
if type_ == FLOW_MOD:
return FlowMod()
-
+
def ofm_match_dl(ofm, match, value):
ofm.match.wildcards &= ~match;
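The netmask handling in this hunk still relies on the shortcut flagged with a TODO (netmask.count("255")*8), which only yields the right prefix length when every octet of the mask is 255 or 0. A more general conversion is sketched below; it is not part of this commit, just an illustration of what the TODO could be replaced with:

    import socket
    import struct

    def netmask_to_prefix(netmask):
        """Convert a dotted-quad netmask (e.g. "255.255.255.0") to a prefix length."""
        bits = struct.unpack("!I", socket.inet_aton(netmask))[0]
        return bin(bits).count("1")

    # netmask_to_prefix("255.255.255.0") == 24
    # netmask_to_prefix("255.255.240.0") == 20 (the count("255")*8 shortcut would return 16)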
@@ -161,9 +180,8 @@ def create_flow_install_msg(ip, mask, srcMac, dstMac, dstPort):
ofm_match_dl(ofm, OFPFW_DL_TYPE, 0x0800)
if (MATCH_L2):
ofm_match_dl(ofm, OFPFW_DL_DST, srcMac)
-
+
ofm.match.set_nw_dst(ip)
-
ofm.priority = OFP_DEFAULT_PRIORITY + mask
ofm.command = OFPFC_ADD
if (mask == 32):
@@ -187,7 +205,7 @@ def create_flow_remove_msg(ip, mask, srcMac):
if (MATCH_L2):
ofm_match_dl(ofm, OFPFW_DL_DST, srcMac)
- ofm_match_nw(ofm, ((31 + mask) << OFPFW_NW_DST_SHIFT), 0, 0, 0, ip)
+ ofm.match.set_nw_dst(ip)
ofm.priority = OFP_DEFAULT_PRIORITY + mask
ofm.command = OFPFC_DELETE_STRICT
return ofm
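This hunk and the next replace the hand-built OFPFW_NW_DST wildcard arithmetic with the same match helper already used when installing flows, so install, removal and temporary entries all describe the destination network the same way. A minimal sketch of the resulting pattern, assuming (as the call sites here do) that set_nw_dst accepts an "address/prefix" string and derives the wildcard bits itself:

    # Illustrative only: mirrors how rfproxy builds its matches after this change.
    from pox.openflow.libopenflow_01 import ofp_flow_mod

    ofm = ofp_flow_mod()
    ofm.match.set_nw_dst("172.16.0.0/24")  # one call sets the address and the prefix wildcards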
@@ -200,8 +218,7 @@ def create_temporary_flow_msg(ip, mask, srcMac):
if (MATCH_L2):
ofm_match_dl(ofm, OFPFW_DL_DST, srcMac)
- ofm_match_nw(ofm, ((31 + mask) << OFPFW_NW_DST_SHIFT), 0, 0, 0, ip)
-
+ ofm.match.set_nw_dst(ip)
ofm.priority = OFP_DEFAULT_PRIORITY + mask
ofm.command = OFPFC_ADD
@@ -221,65 +238,62 @@ def dpid_to_switchname(dpid):
return switchname
def handle_connectionup(event):
- print "DP is up, installing config flows...", dpid_to_switchname(event.dpid)