
ipdevpoll design specification

Introduction

ipdevpoll is the planned replacement for getDeviceData, becoming the third generation SNMP polling framework for NAV. This page will lay out the various design ideas and specifications for the new program.

Basics

  • Asynchronous SNMP polling, built with Twisted and TwistedSNMP
  • Plugin based architecture
  • Data persistence based on Twisted aDBI? (FIXME)

Core application

The gDD core application has the following basic tasks:

  • Schedule execution of jobs
  • Execute plugins contained in job in correct order
  • Handle rescheduling when a plugin signals that previous state has been invalidated
  • Provide data persistence to plugins

Scheduling of jobs instead of individual plugins has been chosen to simplify the application, due to the high level of dependency between some of the plugins. Each job defined in the gDD configuration defines a set of plugins that are to be run at a given interval. The scheduling of these intervals is handled using Twisted's built-in scheduling mechanisms.

The previous version of gDD determined plugin execution order based on numeric values assigned to each plugin. This implies that every plugin must know the values of all other plugins to be sure that it gets executed before and/or after the relevant plugins. To avoid this error-prone method of ordering, a simple dependency system between plugins has been suggested: each plugin needs to know only which plugins must run before itself. This data can be used to perform a topological sort based on the dependency arcs between plugins.
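The ordering described above can be sketched with Kahn's algorithm. The `depends_on` attribute and the example plugin classes are hypothetical; the document does not specify how dependencies are declared.

```python
# Dependency-based plugin ordering via topological sort (Kahn's
# algorithm), assuming each plugin class lists the plugins that must
# run before it in a hypothetical `depends_on` attribute.
from collections import deque

def toposort(plugins):
    """Order plugins so every plugin runs after its dependencies."""
    indegree = {p: 0 for p in plugins}
    dependents = {p: [] for p in plugins}
    for plugin in plugins:
        for dep in getattr(plugin, 'depends_on', ()):
            dependents[dep].append(plugin)
            indegree[plugin] += 1
    queue = deque(p for p in plugins if indegree[p] == 0)
    order = []
    while queue:
        plugin = queue.popleft()
        order.append(plugin)
        for dependent in dependents[plugin]:
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                queue.append(dependent)
    if len(order) != len(plugins):
        raise ValueError("circular plugin dependency")
    return order

# Hypothetical plugins: vendor-specific interpretation depends on the
# generic entity collector, which depends on the type probe.
class TypeOid: depends_on = ()
class EntityMib: depends_on = (TypeOid,)
class CiscoEntity: depends_on = (EntityMib,)

print([p.__name__ for p in toposort([CiscoEntity, EntityMib, TypeOid])])
# → ['TypeOid', 'EntityMib', 'CiscoEntity']
```

A cycle in the dependency graph is a configuration error, so the sort raises rather than silently dropping plugins.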

Data

Data storage will to a certain degree be modeled around the existing gDD data plugins. However, instead of creating a plugin architecture for this, we will simply hardcode the data models that we need. In the proposed implementation the plugins create and populate data objects with as much information as they have. Upon creation these objects are added to a class-level array of instantiated objects. Once all plugins have completed, the data models are told to process the data:

  1. Combine data from all instances that map to the same database object.
  2. Retrieve the current data from the database.
  3. Update if necessary.
  4. Flush the global instance storage for the model.
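The steps above can be sketched as a small base class. The class name `Shadow`, the keying scheme, and the merge logic are all assumptions for illustration; steps 2 and 3 (comparing against current database rows) are only stubbed out here.

```python
# A minimal sketch of the proposed hardcoded data models: a base class
# keeps a class-level list of every instance plugins create, then
# merges and flushes them in one pass.
class Shadow(object):
    """Container for collected attributes of one database object."""
    instances = None  # each subclass must define its own list

    def __init__(self, key, **attrs):
        self.key = key            # maps this instance to a db object
        self.attrs = attrs
        type(self).instances.append(self)

    @classmethod
    def process(cls):
        # 1. Combine data from all instances mapping to the same object
        merged = {}
        for obj in cls.instances:
            merged.setdefault(obj.key, {}).update(obj.attrs)
        # 2./3. The real code would load the current rows here and
        # issue UPDATEs only where collected values differ (omitted).
        # 4. Flush the global instance storage for this model
        cls.instances = []
        return merged

class Netbox(Shadow):
    instances = []

# Two plugins each contribute partial data about the same netbox:
Netbox('10.0.0.1', sysname='sw1.example.org')
Netbox('10.0.0.1', type='cisco-3560')
print(Netbox.process())
```

Because each subclass owns its own `instances` list, flushing one model does not disturb data collected for another.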

Plugins

There are two main groups of plugins that need to be considered: inventory plugins and logging plugins. These two groups should by default be configured to run as separate jobs, as their scheduling intervals vary greatly.

Installed plugins should be specified as a simple list of module names to import in string form (much like Django's INSTALLED_APPS setting). During import each plugin is responsible for registering itself by calling gdd.plugins.register(<class>). This adds the plugin to a global list of plugins, which is topologically sorted once the plugin list has been processed.

Each plugin will be called in order and must itself determine whether the netbox in question is of interest (i.e. does it support the MIBs that the plugin knows about, does the netbox vendor match those supported by the plugin, etc.). Once this is decided, the plugin can either pass or start collecting more SNMP data to populate the data persistence store and/or modify the data already there (e.g. vendor-specific interpretations). Once all plugins have run, gDD must ensure that the updated data is stored to the database.
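A skeleton of that per-netbox plugin protocol might look like the following. The method names `can_handle` and `handle`, and the dict-based storage, are hypothetical, since this document does not pin down the actual interface; a real plugin would also collect asynchronously via Twisted rather than return synchronously.

```python
# A hypothetical plugin protocol: each plugin first decides whether a
# netbox is of interest, then collects data into a shared store.
class Plugin(object):
    def can_handle(self, netbox):
        """Return True if this netbox is of interest to the plugin."""
        raise NotImplementedError

    def handle(self, netbox, storage):
        """Collect SNMP data and populate/modify the data store."""
        raise NotImplementedError

class TypeOid(Plugin):
    def can_handle(self, netbox):
        # Interested in any netbox that answers SNMP at all
        return netbox.get('snmp_up', False)

    def handle(self, netbox, storage):
        # A real plugin would issue an asynchronous SNMP get for
        # sysObjectID here; this sketch records a placeholder value.
        storage['type_oid'] = 'placeholder-oid'  # hypothetical

def run_plugins(plugins, netbox):
    """Run each plugin in (already sorted) order against one netbox."""
    storage = {}
    for plugin in plugins:
        if plugin.can_handle(netbox):
            plugin.handle(netbox, storage)
    return storage

print(run_plugins([TypeOid()], {'snmp_up': True}))
```

Plugins that pass simply return False from can_handle and never touch the store, so later plugins see only data contributed by interested ones.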

To keep the plugin directory somewhat organized the following organization has been proposed:

/plugins
  /inventory
    - entity_mib.py
    - ...
    /cisco
      - entity_mib.py
      - ...
    /hp
      - ...
  /log
    - arp.py
    - ...

Suggested log plugins

  • ARP
  • CAM (currently in getBoksMacs)
  • RRD stats (currently provided by Cricket)

Suggested inventory plugins

  • Type OID
  • DNS
  • Interface (IF-MIB)
  • GW (IF-MIB)
  • Static routes
  • Module/serial (ENTITY-MIB)
  • Cisco support
  • HP support

Suggested status monitoring plugins

  • ModuleMon

Database changes

The following database changes have been suggested.

Merge swport/gwport tables

A long standing issue has been merging the artificially separated swport and gwport tables. In the SNMP MIBs there is no such divide, both swports and gwports are referred to as interfaces, and IP-MIB can be used to find which interfaces operate on layer 3 (IP). All interface-generic information should be stored in a single interface table, while specific layer 3 properties should be stored in separate related tables. There are also many attributes from IF-MIB's ifTable that are not collected and stored by the current gDD solution, but which would be very interesting to store.

Most NAV code that deals with interfaces has duplicate code to take care of swport and gwport. Most subsystems only read from these tables, so during the transition to a new db model one can create swport and gwport views. gDD and networkDiscovery (the topology deriver) are the only known systems that change data in the swport/gwport tables. networkDiscovery will need a rewrite to support a new model (or update rules will need to be added to the new views).

No module requirement for interfaces

The current NAV data model mandates that all gwports/swports be related to a module. For netboxes that do not contain physical modules (or whose modules cannot be found by gDD), the current gDD will create an artificial module related to the same device record as the containing netbox and add the swports/gwports to this module.

The IF-MIB has no concept of an interface/module relationship. Information about such a relationship will most likely come from proprietary MIBs (or possibly a properly populated ENTITY-MIB), and should be treated as a bonus if found. That is, NAV's model should add to the interface table a mandatory foreign key to netbox, and only an optional foreign key into the module table.
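A schema sketch of the mandatory-netbox/optional-module arrangement described above, with table and column names assumed for illustration only; NAV's actual DDL may differ:

```sql
-- Sketch: every interface belongs to a netbox, a module only if known.
CREATE TABLE interface (
    interfaceid SERIAL PRIMARY KEY,
    netboxid    INTEGER NOT NULL REFERENCES netbox (netboxid),
    moduleid    INTEGER NULL REFERENCES module (moduleid),
    ifindex     INTEGER NOT NULL,
    ifdescr     VARCHAR,
    UNIQUE (netboxid, ifindex)
);
```

The nullable moduleid column lets gDD store interfaces immediately from IF-MIB data and attach module relationships later, if and when a proprietary MIB or ENTITY-MIB reveals them.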

Code

The Mercurial repositories can be found at FIXME

References

MIBs

devel/blueprints/ipdevpoll.1246541368.txt.gz · Last modified: 2009/07/02 13:29 by klette