This page is currently being updated to match what's being developed. Don't trust this stuff just yet.
ipdevpoll is the planned replacement for getDeviceData, becoming the third generation SNMP polling framework for NAV. This page will lay out the various design ideas and specifications for the new program.
The core application has the following basic tasks:
A job is a group of plug-ins run in a specified order and scheduled at a specified time interval. The scheduling is handled by Twisted.
NAV provides two jobs in the default configuration, Inventory and Logging. These provide a sane order for the plug-ins to run in. The user may specify their own jobs and schedules in jobs.conf. The configuration files used by ipdevpoll follow the syntax of the Python ConfigParser module.
Example jobs.conf
[inventory]
interval: 60s
plugins:
    typeoid
    dnsname
    interfaces
    vlan
    prefix
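To illustrate, the job definition above can be read back with the standard library ConfigParser roughly like this. This is a minimal sketch; the file name, section name and option names follow the example, but the code itself is not part of ipdevpoll:

from ConfigParser import ConfigParser

config = ConfigParser()
config.read('jobs.conf')

interval = config.get('inventory', 'interval')   # '60s'
# The indented continuation lines of the plugins option come back as a single
# newline separated string, so split() yields the ordered plugin list.
plugins = config.get('inventory', 'plugins').split()
# ['typeoid', 'dnsname', 'interfaces', 'vlan', 'prefix']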
One advantage of this approach is easy deployment of several ipdevpolld processes, either on the same host or distributed over several machines.
Data is persisted through the use of shadow classes of django models. Plugins never write django objects directly; the job the plugin runs in handles this at the end of each job run.
Shadow classes are created in nav.ipdevpoll.shadows.
Example shadowclass specification
class Interface(Shadow):
    __shadowclass__ = manage.Interface
    __lookups__ = [('netbox', 'ifname'), ('netbox', 'ifindex')]
The Shadow class has two important attributes. __shadowclass__ specifies which django model to mimic; this makes any instance of the shadow class answer to the same attributes as the django model. When a property is changed on a shadow instance, the instance sets its state to touched. This is needed to determine whether or not to update an object at the end of a run. To look up existing objects in the database, the storage system first tries the primary key field on the object. If the primary key is None, the __lookups__ attribute is used instead. The __lookups__ attribute is a list of strings and tuples: tuples specify combined lookups, while strings specify single field lookups. The lookups are tried in the order they are defined in the list. In the example above, the storage system will first try to get an existing object using only the interface.id field. If that is unavailable, it tries the first entry in the lookup list, a combined search for objects matching both the netbox and ifname properties of the shadow instance. If unsuccessful, it goes on to the next lookup entry.
There are two major gotchas here. First, the lookups must be unique; if multiple rows are returned, an exception is raised. Second, primary key lookups behave differently depending on the field type: if no rows are found when looking up by a non-AutoField primary key, a new object is created, but if the field is an AutoField and no rows are returned, an exception is raised.
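The lookup order described above can be sketched roughly as follows. This is an illustration of the algorithm only, with a hypothetical helper name; it is not the actual storage code and glosses over the AutoField handling just mentioned:

def find_existing(shadow):
    """Locate the existing database object a shadow instance corresponds to."""
    model = shadow.__shadowclass__
    pk = model._meta.pk

    # 1. Try the primary key field first.
    pk_value = getattr(shadow, pk.name, None)
    if pk_value is not None:
        return model.objects.get(pk=pk_value)

    # 2. Fall back to the __lookups__ entries, in the order they are listed.
    for lookup in shadow.__lookups__:
        fields = lookup if isinstance(lookup, tuple) else (lookup,)
        kwargs = dict((field, getattr(shadow, field)) for field in fields)
        try:
            # get() raises MultipleObjectsReturned if the lookup is not unique
            return model.objects.get(**kwargs)
        except model.DoesNotExist:
            continue

    # Nothing matched; the storage system will create a new object.
    return None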
An object is deleted from the database if its shadow object has the delete property set to True.
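For example, a plugin that discovers that an interface has disappeared from a device could flag it for deletion roughly like this (hypothetical usage; the surrounding collection and container handling is left out):

from nav.ipdevpoll import shadows

def mark_interface_gone(netbox, ifindex):
    """Hypothetical helper: flag an interface row for deletion."""
    interface = shadows.Interface()
    interface.netbox = netbox
    interface.ifindex = ifindex
    # The storage system deletes rather than updates objects flagged this way.
    interface.delete = True
    return interface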
Once the job is done, the storage routine is called. The storage system then walks through all the shadow objects contained in job_handler.container and makes sure all foreign keys needed to store an instance are saved before the object itself.
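The ordering requirement can be thought of as simple dependency resolution. A hypothetical sketch of the idea follows; the Shadow import location and the save() method are assumptions, not the actual storage routine:

from nav.ipdevpoll.shadows import Shadow

def save_with_dependencies(shadow, saved=None):
    """Save a shadow object only after the shadow objects it points to."""
    if saved is None:
        saved = set()
    if id(shadow) in saved:
        return
    saved.add(id(shadow))

    # Any attribute that is itself a shadow object represents a foreign key
    # and must exist in the database before this object can reference it.
    for value in vars(shadow).values():
        if isinstance(value, Shadow):
            save_with_dependencies(value, saved)

    shadow.save()  # convert to a django object and store it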
There are two main groups of plugins that need to be considered: inventory plugins and logging plugins. These two groups should by default be configured to run as separate jobs, as their scheduling intervals vary greatly.
Installed plugins should be specified as a simple list of module names to import in string form (much like django's INSTALLED_APPS setting). During import, each plugin is responsible for registering itself by calling gdd.plugins.register(<class>); this adds the plugin to a global list of plugins, which is topologically sorted once the whole plugin list has been processed.
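A minimal sketch of what such a registry could look like (illustrative only; the text above refers to it as gdd.plugins):

# Hypothetical contents of the plugin registry module.
_plugins = []

def register(plugin_class):
    """Called by each plugin module at import time to announce its plugin."""
    if plugin_class not in _plugins:
        _plugins.append(plugin_class)

# A plugin module would end with something like:
#     from gdd import plugins
#     plugins.register(MyPlugin)
# After all configured modules have been imported, the _plugins list is
# topologically sorted to give the final run order.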
Each plugin will be called in order and must itself determine whether the netbox in question is of interest (i.e. does it support the MIBs that the plugin knows about, does the netbox vendor match those supported by the plugin…). Once this is decided, the plugin can either pass, or start collecting more SNMP data and populate the data persistence store and/or modify the data already there (i.e. vendor specific interpretations). Once all plugins have run, gDD must ensure that the updated data is stored to the database.
To keep the plugin directory somewhat organized, the following layout has been proposed:
/plugins
    /inventory
        - entity_mib.py
        - ...
    /cisco
        - entity_mib.py
        - ...
    /hp
        - ..
    /log
        - arp.py
        - ...
Plugins should be divided according to their respective responsibilities. Each plugin should be its own Python module with a defined handler class sub-classed from the nav.ipdevpoll.Plugin class. The handler should only deal with storing the information to the database and the logic to determine whether vendor-specific collection methods should be used. Each plugin should supply its own test suite in a folder called test within the plugin module. Every plugin should prefer the standard MIB-2 MIBs for collecting information, falling back to vendor specific MIBs. To maximize the amount of easily testable code, all parsing logic should be extracted into separate methods and not be entangled with the collection process (the methods that return deferred objects); a sketch of this separation follows the example plugin below. Remember to call the processing methods asynchronously as well.
/nav/ipdevpoll/plugins
    /iftable
        - iftable.py
        - test_iftable.py
        - collector.py
        /vendor
            - cisco.py
            - test_cisco.py
            - alcatel.py
            - test_alcatel.py
            - ..
Example plugin:
from twisted.internet import defer

from nav.ipdevpoll import Plugin
from nav.ipdevpoll.plugins.vlan import collector


class Vlan(Plugin):
    def __init__(self, *args, **kwargs):
        """Initialize the plugin"""
        Plugin.__init__(self, *args, **kwargs)
        self.deferred = defer.Deferred()

    def handle(self):
        """This is the entry point for the plugin."""
        self.logger.debug("Collecting VLAN information")
        collector.collect(self.job_handler.agent)
        return self.deferred

    def process_result(self, result):
        """Standard processing method for all plugins.

        The collection methods should provide data in the same format to
        this method.
        """
        if not result:
            self.logger.debug("No VLAN information found using MIB-2 MIBs. "
                              "Trying vendor specific collection")
            # Try vendor specific methods
        else:
            self.logger.debug("Found %s VLANs. Processing." % len(result))
            for ifIndex, vlan in result:
                # Store the information to database or similar
                pass
        # Plugin run finished. Exit.
        self.deferred.callback(True)
        return result
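Following the advice above about keeping parsing logic out of the asynchronous collection path, a plugin could be split roughly like this. This is a sketch only; it assumes collector.collect() returns a Deferred and that the raw result maps ifIndex values to VLAN numbers:

from nav.ipdevpoll import Plugin
from nav.ipdevpoll.plugins.vlan import collector


class Vlan(Plugin):
    def handle(self):
        # Asynchronous part: kick off collection and chain the parser.
        deferred = collector.collect(self.job_handler.agent)
        deferred.addCallback(self.parse_result)
        return deferred

    @staticmethod
    def parse_result(result):
        # Pure function: can be unit tested with a plain dict, no Twisted needed.
        return [(ifindex, int(vlan)) for ifindex, vlan in result.items()]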
Already implemented plugins can be found on the plugins page.
The following database changes have been suggested.
A long standing issue has been merging the artificially separated swport and gwport tables. In the SNMP MIBs there is no such divide; both swports and gwports are referred to as interfaces, and IP-MIB can be used to find which interfaces operate on layer 3 (IP). All interface-generic information should be stored in a single interface table, while specific layer 3 properties should be stored in separate related tables. There are also many attributes from IF-MIB's ifTable that are not collected and stored by the current gDD solution, but which would be very interesting to store.
Most NAV code that deals with interfaces has duplicate code to take care of swport and gwport. Most subsystems only read from these tables, so during the transition to a new db model one can create swport and gwport views. gDD and networkDiscovery (the topology deriver) are the only known subsystems that change data in the swport/gwport tables. networkDiscovery will need a rewrite to support a new model (or update rules need to be added to the new views).
The current NAV data model mandates that all gwports/swports be related to a module. For netboxes that do not contain physical modules (or whose modules cannot be found by gDD), the current gDD will create an artificial module related to the same device record as the containing netbox and add swports/gwports to this module.
The IF-MIB has no concept of an interface/module relationship. Information about such a relationship will most likely come from proprietary MIBs (or possibly a properly populated ENTITY-MIB), and should be treated as a bonus if found. I.e. NAV's model should give the interface table a mandatory foreign key to netbox, and only an optional foreign key into the module table.
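Expressed as a django model, the suggested interface table could look roughly like this. Field names and types are illustrative only, not a finished schema:

from django.db import models

class Interface(models.Model):
    # Every interface must belong to a netbox ...
    netbox = models.ForeignKey('Netbox')
    # ... while the module relationship is optional, since IF-MIB alone
    # cannot tell which module (if any) an interface belongs to.
    module = models.ForeignKey('Module', null=True)

    # A few of the generic IF-MIB ifTable attributes:
    ifindex = models.IntegerField()
    ifname = models.CharField(max_length=32)
    ifdescr = models.CharField(max_length=255)
    ifalias = models.CharField(max_length=255)

Layer 3 specific properties would then live in separate tables related to this one, as described above.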
The first NAV release to include ipdevpoll was version 3.6.