====== Back-end processes in NAV ======
  
<note warning>This page has not been updated for several years, and may be quite outdated for NAV 4</note>

NAV has a number of back-end processes. This page attempts to give an overview of them.

{{:backend-processes-3.10.png?800|The NAV processes}}
  
===== nav list / nav status =====
   * [[backendprocesses#collecting_statistics|cricket]] (includes makecricketConfig, Cricket collector and cleanrrds)
   * [[#eventengine|eventengine]]
   * [[#ipdevpoll|ipdevpoll]]
   * [[#logengine|logengine]]
   * [[#mactrace|mactrace]]
   * [[#maintengine|maintengine]]
   * [[#pping|pping]]
   * [[#psuwatch|psuwatch]]
   * [[#servicemon|servicemon]]
   * [[#smsd|smsd]]
   * [[#snmptrapd|snmptrapd]]
   * [[#thresholdmon|thresholdMon]]
   * [[#topology|topology]]
  
  
====== Building the network model ======
  
===== ipdevpoll =====
  
==== Key information ====
  
^ Process name         | ipdevpoll |
^ Polls network        | Yes |
^ Brief description    | Collects SNMP data from equipment in the netbox table and stores data regarding the equipment in a number of tables. Does not build topology. |
^ Depends upon         | Seed data must be filled into the netbox table using the [[seedessentials|Seed Database tool]] |
^ Updates tables       | netbox, netboxsnmpoid, netboxinfo, device, module, gwportprefix, prefix, vlan, interface, swportallowedvlan |
^ Run mode             | Daemon process |
^ Default scheduling   | Polling is organized into jobs; the jobs and their schedules are defined in ''ipdevpoll.conf''. |
^ Config file          | ipdevpoll.conf |
^ Log files            | ipdevpoll.log |
^ Programming language | Python |
^ Further doc          | |
  
  
==== Details ====
  
  * jobs and plugins \\ All of ipdevpoll's work is done by plugins. Plugins are organized into jobs, and jobs are scheduled for each active IP device individually (see the configuration sketch below).
  * inventory job \\ Polls for inventory information every 6 hours (by default). Inventory information includes interfaces, serial numbers, modules, VLANs and prefixes.
  * profiler job \\ Runs every 5 minutes, profiling devices if deemed necessary. NAV has an internal list of SNMP OIDs that are tested for compatibility with each device. This is used to create a sort of profile that says what the device supports - the profile is typically used to produce a Cricket configuration that will collect statistics from proprietary OIDs.
  * logging job \\ Runs every 30 minutes and collects log-like information from devices. At the time being, only the arp plugin runs, collecting ARP caches from routers. ARP data is logged to a table, and aids in topology detection and client machine tracking.
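
How the plugins are grouped into jobs, and how often each job runs, is configured in ''ipdevpoll.conf''. The following is only a rough sketch of what such a job definition can look like; the job name, interval and plugin names are illustrative assumptions, so consult the ''ipdevpoll.conf'' shipped with your NAV version for the real definitions.

<code ini>
; Illustrative sketch of an ipdevpoll job definition -- the names and values
; here are examples, not the defaults shipped with NAV.
[job_inventory]
; how often the job is scheduled for each IP device
interval = 6h
; the plugins this job runs, in order
plugins = system interfaces modules vlan prefix
</code>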
  
  
===== mactrace =====

^ Polls network        | Yes |
^ Brief description    | Collects the mac addresses seen behind switch ports for all switches (cat GSW, SW, EDGE). The process also checks for spanning tree blocked ports. |
^ Depends upon         | [[#ipdevpoll|ipdevpoll]] must have created the swport tables for the switches. |
^ Updates tables       | cam (mac addresses), netboxinfo (CDP neighbors), swp_netbox (the candidate list for the physical topology builder), swportblocked (switch ports that are blocked by spanning tree for a given vlan). |
^ Run mode             | cron |
^ Log file             | getBoksMacs.log |
^ Programming language | Java |
^ Further doc          | [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 2.1]] (Norwegian), [[http://nav.uninett.no/static/reports/tigaNAV.pdf|tigaNAV report ch 5.4.5 and ch 5.5.3]] |
  
  
  
One notable improvement is the addition of the interface field in the swport table. It is used for matching the CDP remote interface, and makes this matching much more reliable. Also, both the cam and the swp_netbox tables now use netboxid and ifindex to uniquely identify a swport port instead of the old netboxid, module, port triple. This has significantly simplified swport port matching; especially since the old module field of swport was a shortened version of what is today the interface field, reliability has increased as well.

===== topology =====
==== Key information ====
  
^ Process name         | navtopology |
^ Alias                | Physical and VLAN Topology Builder |
^ Polls network        | No |
^ Brief description    | Builds NAV's model of the physical network topology as well as the VLAN sub-topologies |
^ Depends upon         | mactrace fills data in ''swp_netbox'', representing the list of physical neighbor candidates. This is the data that the physical topology builder uses. |
^ Updates tables       | Sets the to_netboxid and to_swportid fields in the swport and gwport tables. |
^ Run mode             | cron |
^ Default scheduling   | every hour (35 * * * *) |
^ Config file          | None |
^ Log file             | navtopology.log |
^ Programming language | Python |
  
==== Details ====
  
=== Physical topology ===

The topology discovery system builds NAV's view of the network topology based on cues from information collected previously via SNMP.

The information cues come from routers' IPv4 ARP caches and IPv6 Neighbor Discovery caches, interface physical (MAC) addresses, switch forwarding tables and CDP (Cisco Discovery Protocol). The mactrace process has already pre-parsed these cues and created a list of neighbor candidates for each port in the network.

The physical topology detection algorithm is responsible for reducing the list of neighbor candidates of each port to just one single device.

In practice the use of CDP makes this process very reliable for the devices supporting it, and this makes it easier to correctly determine the remaining topology even in the case of missing information. CDP is, however, not trusted more than switch forwarding tables, as CDP packets may pass unaltered through switches that don't support CDP, causing CDP data to be inaccurate.
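
To make the reduction idea concrete, here is a deliberately simplified Python sketch. It is not NAV's actual implementation and it ignores CDP and missing data: once a port's candidate set shrinks to a single device, that device is taken as the port's direct neighbor and removed from the candidate sets of the other ports.

<code python>
# Toy illustration of neighbor-candidate reduction in a tree-shaped network,
# as seen from downlink ports. This is NOT NAV's actual algorithm.

def reduce_candidates(candidates):
    """Map each (switch, port) to its single directly connected neighbor."""
    resolved = {}
    progress = True
    while progress:
        progress = False
        for port, devices in candidates.items():
            if port not in resolved and len(devices) == 1:
                neighbor = next(iter(devices))
                resolved[port] = neighbor
                # In this toy model the resolved device is merely *behind* all
                # other ports, not directly connected to them, so drop it there.
                for other, others in candidates.items():
                    if other != port:
                        others.discard(neighbor)
                progress = True
    return resolved

# Example tree: core -> dist -> {edge1, edge2}
candidates = {
    ("dist", "Gi1/1"): {"edge1"},
    ("dist", "Gi1/2"): {"edge2"},
    ("core", "Gi0/1"): {"dist", "edge1", "edge2"},
}
print(reduce_candidates(candidates))
# {('dist', 'Gi1/1'): 'edge1', ('dist', 'Gi1/2'): 'edge2', ('core', 'Gi0/1'): 'dist'}
</code>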
  
=== VLAN topology ===

After the physical topology model of the network has been built, the logical topology of the VLANs still remains. Since modern switches support 802.1Q trunking, which can transport several independent VLANs over a single physical link, the logical topology can be non-trivial and indeed, in practice it usually is.

The vlan discovery system uses a simple top-down depth-first graph traversal algorithm to discover which VLANs are actually running on the different trunks and in which direction. Direction is here defined relative to the router port, which is the top of the tree, currently owning the lowest gateway IP or the virtual IP in the case of HSRP. Re-use of VLAN numbers in physically disjoint parts of the network is supported.

The VLAN topology detector does not currently support mapping unrouted VLANs.
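
As a rough illustration of the traversal idea only (made-up data structures, not NAV's code): for each routed VLAN, start at its router port and walk depth-first across the trunks that allow that VLAN, recording the downward direction as you go.

<code python>
# Toy illustration of per-VLAN, top-down depth-first traversal across trunks.
# Not NAV's actual implementation.

def trace_vlan(vlan, router, trunks):
    """Return the trunk links carrying `vlan`, directed away from the router."""
    carried = set()
    seen = {router}

    def visit(switch):
        for neighbor, allowed_vlans in trunks.get(switch, []):
            if vlan in allowed_vlans and neighbor not in seen:
                seen.add(neighbor)
                carried.add((switch, neighbor))  # direction: down, away from router
                visit(neighbor)

    visit(router)
    return carried

trunks = {
    "router": [("dist", {10, 20})],
    "dist":   [("router", {10, 20}), ("edge1", {10}), ("edge2", {20})],
    "edge1":  [("dist", {10})],
    "edge2":  [("dist", {20})],
}
print(trace_vlan(10, "router", trunks))   # {('router', 'dist'), ('dist', 'edge1')}
</code>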
  
====== Monitoring the network ======
===== pping =====

^ Log file             | pping.log |
^ Programming language | Python |
^ Further doc          | See below, based on and translated from [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 3.4]] (Norwegian) |
  
  
===== servicemon =====
  
==== Key information ====
^ Process name         | servicemon |
^ Alias                | The service monitor |
^ Polls network        | Yes |
^ Log file             | servicemon.log |
^ Programming language | Python |
^ Further doc          | See the [[servicemon]] page and/or [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 3.5]] (Norwegian) |
  
==== Details ====
===== thresholdMon =====

^ Log file             | thresholdMon.log |
^ Programming language | Python |
^ Further doc          | See [[ThresholdMonitor]] |
  
==== Details ====
  
   * See [[ThresholdMonitor]]
  
  
===== eventengine =====

^ Log file             | eventEngine.log |
^ Programming language | Java |
^ Further doc          | [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 3.6]] (Norwegian). Updates in [[http://nav.uninett.no/static/reports/tigaNAV.pdf|tigaNAV report ch 4.3.1]]. |
  
==== Details ====
===== maintengine =====

^ Log file             | maintengine.log |
^ Programming language | Python |
^ Further doc          | Old doc: [[http://nav.uninett.no/static/reports/tigaNAV.pdf|tigaNAV report ch 8]]. The maintenance system was rewritten for NAV 3.1. See [[devel:tasklist2006#t3rewrite_the_message_and_maintenance_tool|here]] for more. |
  
==== Details ====
===== alertengine =====

^ Config file          | alertengine.cfg |
^ Log file             | alertengine.log and alertengine.err.log |
^ Programming language | Python |
^ Further doc          | [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 3.7 and 3.8]] (Norwegian). |
  
==== Details ====
===== smsd =====

^ Log file             | smsd.log |
^ Programming language | Python (Perl in 3.1) |
^ Further doc          | subsystem/smsd/README in the NAV sources describes the available dispatchers and more |
  
  
  
===== snmptrapd =====
  
  
^ Log file             | snmptrapd.log and snmptraps.log |
^ Programming language | Python |
^ Further doc          | - |
  
====== Collecting statistics ======

===== makecricketconfig =====

^ Polls network        | No |
^ Brief description    | |
^ Depends upon         | That ipdevpoll has filled the gwport, swport tables (and more...) |
^ Updates tables       | The RRD database (rrd_file and rrd_datasource) |
^ Run mode             | cron |
^ Config file          | None |
^ Log file             | cricket-changelog |
^ Programming language | Python |
^ Further doc          | [[howtoconfigurecricket|How to configure Cricket addons in NAV v3]] |
  
^ Log file             | cricket/giga.log and cricket/normal.log |
^ Programming language | not relevant |
^ Further doc          | not relevant |
  
^ Log file             | ? |
^ Programming language | Perl |
^ Further doc          | - |
  
^ Log file             | None |
^ Programming language | Python |
^ Further doc          | [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 2.4]] (Norwegian). |
  
==== Details ====
^ Default scheduling   | |
^ Programming language | |
^ Further doc          | [[Arnold|Arnold]] |
  