====== Back-end processes in NAV ======

NAV has a number of back-end processes. This page gives an overview of them.

{{:backend-processes-3.10.png?800|The NAV processes}}
  
===== nav list / nav status =====

   * [[backendprocesses#collecting_statistics|cricket]] (includes makecricketConfig, Cricket collector and cleanrrds)
   * [[#eventengine|eventengine]]
   * [[#ipdevpoll|ipdevpoll]]
   * [[#logengine|logengine]]
   * [[#mactrace|mactrace]]
   * [[#maintengine|maintengine]]
   * [[#pping|pping]]
   * [[#psuwatch|psuwatch]]
   * [[#servicemon|servicemon]]
   * [[#smsd|smsd]]
   * [[#snmptrapd|snmptrapd]]
   * [[#thresholdmon|thresholdMon]]
   * [[#topology|topology]]
  
  
====== Building the network model ======
  
===== ipdevpoll =====

==== Key information ====

^ Process name         | ipdevpoll |
^ Polls network        | Yes |
^ Brief description    | Collects SNMP data from equipment in the netbox table and stores data regarding the equipment in a number of tables. Does not build topology. |
^ Depends upon         | Seed data must be filled in the netbox table, using the [[seedessentials|Seed Database tool]] |
^ Updates tables       | netbox, netboxsnmpoid, netboxinfo, device, module, gwportprefix, prefix, vlan, interface, swportallowedvlan |
^ Run mode             | Daemon process |
^ Default scheduling   | Polling is organized into jobs; job intervals and scheduling are configured in ''ipdevpoll.conf''. |
^ Config file          | ipdevpoll.conf |
^ Log files            | ipdevpoll.log |
^ Programming language | Python |
^ Further doc          | |
  
  
==== Details ====
  
  * jobs and plugins \\ All of ipdevpoll's work is done by plugins. Plugins are organized into jobs, and jobs are scheduled individually for each active IP device (see the configuration sketch below).
  * inventory job \\ Polls for inventory information every 6 hours (by default). Inventory information includes interfaces, serial numbers, modules, VLANs and prefixes.
  * profiler job \\ Runs every 5 minutes, profiling devices as needed. NAV has an internal list of SNMP OIDs that are tested for compatibility with each device. The result is a profile of what the device supports; the profile is typically used to produce a Cricket configuration that collects statistics from proprietary OIDs.
  * logging job \\ Runs every 30 minutes and collects log-like information from devices. At the moment, only the arp plugin runs, collecting ARP caches from routers. ARP data is logged to a table and aids topology detection and client machine tracking.
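The job-based scheduling can be illustrated with a minimal, hypothetical ''ipdevpoll.conf'' fragment. The section, option and plugin names below are assumptions made for the sake of the example; consult the commented ''ipdevpoll.conf'' shipped with your NAV version for the authoritative syntax.

<code>
# Hypothetical job definition -- ipdevpoll schedules each job independently
# for every active IP device.
[job_inventory]
# how often the job should run per device
interval = 6h
# the plugins the job runs, in the order listed
plugins =
    interfaces
    modules
    prefix
</code>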
  
  
===== mactrace =====

==== Key information ====

^ Polls network        | Yes |
^ Brief description    | Collects mac address data from the forwarding tables of all switches (cat GSW, SW, EDGE). The process also checks for spanning tree blocked ports. |
^ Depends upon         | [[#ipdevpoll|ipdevpoll]] must have created the swport tables for the switches. |
^ Updates tables       | cam (mac addresses), netboxinfo (CDP neighbors), swp_netbox (the candidate list for the physical topology builder), swportblocked (switch ports that are blocked by spanning tree for a given vlan). |
^ Run mode             | cron |
^ Log file             | getBoksMacs.log |
^ Programming language | Java |
^ Further doc          | [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 2.1]] (Norwegian), [[http://nav.uninett.no/static/reports/tigaNAV.pdf|tigaNAV report ch 5.4.5 and ch 5.5.3]] |
  
  
=== The algorithm ===

The CAM logger runs every 15 minutes, querying all switches (SW and EDGE); some types of equipment take longer to collect data from than others.

The cam logger, responsible for the collection of MAC addresses and CDP data, has been updated to make use of the OID database. This has greatly simplified its internal structure, as all devices are now treated in a uniform manner; the immediate benefit is that data collection is no longer dependent on type information, and no updates should be necessary to support new types. Upgrades in the field can happen without the need for additional updates to the NAV software.

The cam logger collects the bridge tables of all switches, saving the MAC entries in the cam table of the NAVdb. Additionally, it collects CDP data from all switches and routers supporting this feature; the result is saved in the swp_netbox table for use by the network topology discovery system.

While its basic operation remains the same, it has been rewritten to take advantage of the OID database; the internal data collection framework has been unified and all devices are treated in the same manner. Thus, data collection is no longer based on type information, and a standard set of OIDs is used for all devices. When a new type is added to NAV, cam logging should "just work", which is a major design goal of NAV v3.

One notable improvement is the addition of the interface field in the swport table. It is used for matching the CDP remote interface, and makes this matching much more reliable. Also, both the cam and the swp_netbox tables now use netboxid and ifindex to uniquely identify a switch port, instead of the old (netboxid, module, port) triple. This has significantly simplified port matching, and since the old module field of swport was a shortened version of today's interface field, reliability has increased as well.
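As an illustration of the kind of data the cam logger works with (this is not NAV's own code), a switch's forwarding table can be dumped with a plain SNMP walk of BRIDGE-MIB's dot1dTpFdbPort column; the host name and community string below are placeholders:

<code python>
import subprocess

# dot1dTpFdbPort (BRIDGE-MIB) maps each learned MAC address to the bridge
# port it was seen behind.  Illustration only -- not NAV's implementation.
DOT1D_TP_FDB_PORT = "1.3.6.1.2.1.17.4.3.1.2"

def dump_forwarding_table(host, community="public"):
    """Return the raw snmpwalk output lines for a switch's forwarding table."""
    result = subprocess.run(
        ["snmpwalk", "-v2c", "-c", community, host, DOT1D_TP_FDB_PORT],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    for line in dump_forwarding_table("sw.example.org"):
        print(line)
</code>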

===== topology =====

==== Key information ====
  
^ Process name         | navtopology |
^ Alias                | Physical and VLAN Topology Builder |
^ Polls network        | No |
^ Brief description    | Builds NAV's model of the physical network topology, as well as the VLAN sub-topologies |
^ Depends upon         | mactrace fills data in ''swp_netbox'', representing the list of physical neighbor candidates. This is the data the physical topology builder uses. |
^ Updates tables       | Sets the to_netboxid and to_swportid fields in the swport and gwport tables. |
^ Run mode             | cron |
^ Default scheduling   | every hour (35 * * * *) |
^ Config file          | None |
^ Log file             | navtopology.log |
^ Programming language | Python |
  
==== Details ====
  
=== Physical topology ===

The topology discovery system builds NAV's view of the network topology based on cues from information collected previously via SNMP.

The information cues come from routers' IPv4 ARP caches and IPv6 Neighbor Discovery caches, interface physical (MAC) addresses, switch forwarding tables and CDP (Cisco Discovery Protocol). The mactrace process has already pre-parsed these cues and created a list of neighbor candidates for each port in the network.

The physical topology detection algorithm is responsible for reducing the list of neighbor candidates of each port to just one single device.

In practice, the use of CDP makes this process very reliable for the devices supporting it, and this makes it easier to correctly determine the remaining topology even in the case of missing information. CDP is, however, not trusted more than switch forwarding tables, as CDP packets may pass unaltered through switches that don't support CDP, causing CDP data to be inaccurate.
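The real reduction weighs CDP and forwarding-table evidence as described above; the following much-simplified sketch (not NAV's actual algorithm) only illustrates the general idea of iteratively narrowing each port's candidate list down to a single neighbor:

<code python>
# Toy candidate lists: (device, port) -> possible neighbor devices, roughly
# what mactrace leaves behind in swp_netbox.  Illustration only.
candidates = {
    ("sw-a", "Gi0/1"): {"router"},
    ("sw-a", "Gi0/2"): {"sw-b", "sw-c"},
    ("sw-b", "Gi0/24"): {"sw-a"},
}

def reduce_candidates(candidates):
    """Repeatedly resolve ports that have a single candidate left, and prune
    that neighbor from the same device's other ports, until nothing changes."""
    neighbors = {}
    changed = True
    while changed:
        changed = False
        for (device, port), cands in candidates.items():
            if (device, port) in neighbors or len(cands) != 1:
                continue
            neighbor = next(iter(cands))
            neighbors[(device, port)] = neighbor
            changed = True
            # a neighbor is directly connected to only one port on `device`
            for (other_device, other_port), other_cands in candidates.items():
                if other_device == device and other_port != port:
                    other_cands.discard(neighbor)
    return neighbors

print(reduce_candidates(candidates))
</code>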
  
=== VLAN topology ===
  
After the physical topology model of the network has been built, the logical topology of the VLANs still remains. Since modern switches support 802.1Q trunking, which can transport several independent VLANs over a single physical link, the logical topology can be non-trivial, and in practice it usually is.

The vlan discovery system uses a simple top-down depth-first graph traversal algorithm to discover which VLANs are actually running on the different trunks, and in which direction. Direction is defined relative to the router port, which is the top of the tree, currently owning the lowest gateway IP or the virtual IP in the case of HSRP. Re-use of VLAN numbers in physically disjoint parts of the network is supported.

The VLAN topology detector does not currently support mapping unrouted VLANs.
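A much-simplified sketch of such a traversal (not NAV's actual code); it assumes the physical topology is already known and, for brevity, follows every link instead of only the trunks that actually carry the VLAN:

<code python>
# Toy physical adjacency list; the router port is the top of the tree.
TOPOLOGY = {
    "router": {"sw-core"},
    "sw-core": {"router", "sw-a", "sw-b"},
    "sw-a": {"sw-core"},
    "sw-b": {"sw-core"},
}

def walk_vlan(router, vlan):
    """Depth-first walk from the router port, recording the downward
    direction in which the vlan is carried on each link."""
    directions = {}
    visited = {router}

    def dfs(device):
        for neighbor in sorted(TOPOLOGY[device]):
            if neighbor in visited:
                continue
            visited.add(neighbor)
            # this link carries the vlan "downwards", away from the router port
            directions[(device, neighbor)] = vlan
            dfs(neighbor)

    dfs(router)
    return directions

print(walk_vlan("router", 42))
</code>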
  
====== Monitoring the network ======
  
===== pping =====
  
  
^ Log file             | pping.log |
^ Programming language | Python |
^ Further doc          | See below, based on and translated from [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 3.4]] (Norwegian) |
  
==== Details ====
    
-  * see the NAVMore report ​as referenced ​for details+pping is a daemon with its own (configurable) scheduling. pping works in parallel which makes each ping sweep very  
 +efficient. The frequency of each ping sweep is per default 20 seconds. The maximum allowed response time for a host is 5 seconds (per default). A host is declared down on the event queue after four consecutive "no responses"​ (also configurable). This means 
 +that it takes between 80 and 99 seconds from a host is down till pping declares it as down.  
 + 
 +Please note the [[#​eventengine|event engine]] will 
 +have a grace period of one minute (configurable) before a "box down warning"​ is posted on the alert queue, and another three minutes before the box is declared down (also configurable). In summery expect 5-6 minutes before a host is declared down.  
 + 
 +The configuration file ''​pping.conf''​ lets you adjust the following:​ 
 +^parameter ^description ^default | 
 +| user | the user that runs the service | navcron | 
 +| packet size |size of the icmp packet | 64 byte | 
 +| check interval | how often you want to run a ping sweep | 20 seconds | 
 +| timeout |seconds to wait for reply after last ping request is sent | 5 seconds | 
 +| nrping |number of requests without answer before marking the device as unavailable | 4 | 
 +| delay | ms between each ping request | 2 ms | 
 + 
 +In addition you can configure debug level, location of log file and location of pid file. 
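For illustration, a ''pping.conf'' using the defaults above could look roughly like this; the exact key names and syntax are assumptions here, so check the commented ''pping.conf'' shipped with NAV:

<code>
# Example only -- key names may differ in your NAV version
user = navcron
packetsize = 64
checkinterval = 20
timeout = 5
nrping = 4
delay = 2
</code>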

Note: In order to uniquely identify the icmp echo response packets, pping tailor-makes the packets with its own signature. This delays the overall throughput a bit, but pping can still manage 90-100 hosts per second, which should be sufficient for most needs.

=== Algorithm - one ping sweep ===

<code>
pping has three threads:
  1. Thread 1 generates and sends out the icmp packets.
  2. Thread 2 receives echo replies, checks the signature and stores the result to RRD.
  3. The main thread does the main scheduling and reports to the event queue.

Thread 1 works this way:
  FOR every host DO:
    1. Generate an icmp echo packet with: (destination IP, timestamp, signature)
    2. Send the icmp echo.
    3. Add the host to the "Waiting for response" queue.
    4. Sleep for the configured delay (default 2 ms). This delay spreads out the response times, which in
       turn reduces the receive thread's queue and in effect makes the measured response times more accurate.

Thread 2 works this way:
  As long as thread 1 is operating and as long as we have hosts in the "Waiting for response" queue, with a
  timeout of 5 seconds (configurable):
    1. Check if we have received packets.
    2. Get the data (the icmp reply packet).
    3. Verify that the packet belongs to our pid.
    4. Split the packet into (destination IP, timestamp, signature).
       If the IP or the signature is wrong, discard the packet.
    5. If we recognize the IP address on the "Waiting for response" queue, update the response time for the
       host and remove the host from the "Waiting for response" queue.

When thread 2 finishes, the sweep is over. For hosts remaining on the "Waiting for response" queue, we set the
response time to "None" and increment their "number of consecutive no-replies" counter.

When the main thread detects that a host has too many no-replies, a down event is posted on the event queue.
</code>
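The signature check in steps 3-4 can be sketched in a few lines of Python (illustration only, not pping's actual code): pack a send timestamp and a per-process signature into the echo payload, and verify the signature on the reply before computing the response time.

<code python>
import os
import struct
import time

SIGNATURE = os.urandom(8)   # any per-process byte string will do

def build_payload(dest_ip):
    """Pack (timestamp, signature, destination) into an echo payload."""
    return struct.pack("!d8s", time.time(), SIGNATURE) + dest_ip.encode()

def parse_reply(payload):
    """Return (round-trip time, destination) if our signature matches, else None."""
    sent_at, signature = struct.unpack("!d8s", payload[:16])
    if signature != SIGNATURE:
        return None                      # not our packet -- discard it
    return time.time() - sent_at, payload[16:].decode()

if __name__ == "__main__":
    print(parse_reply(build_payload("10.0.0.1")))
</code>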

Note that the response times are recorded to RRD, which gives us response time and packet loss data as an extra bonus.
  
===== servicemon =====

^ Log file             | servicemon.log |
^ Programming language | Python |
^ Further doc          | See the [[servicemon]] page and/or [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 3.5]] (Norwegian) |
  
==== Details ====

^ Log file             | thresholdMon.log |
^ Programming language | Python |
^ Further doc          | See [[ThresholdMonitor]] |
  

  * See [[ThresholdMonitor]]
  
  
^ Log file             | eventEngine.log |
^ Programming language | Java |
^ Further doc          | [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 3.6]] (Norwegian). Updates in [[http://nav.uninett.no/static/reports/tigaNAV.pdf|tigaNAV report ch 4.3.1]]. |
  
==== Details ====
  
===== maintengine =====
  
  
^ Alias                | The maintenance engine |
^ Polls network        | No |
^ Brief description    | Checks the defined maintenance schedules. If the start or end of a maintenance period occurs at this run time, the relevant maintenanceEvents are posted on the eventq, one for each netbox and/or service in question. |
^ Depends upon         | NAV users must set up maintenance schedules, which in turn are stored in the maintenance tables (maint_task, maint_component). |
^ Updates tables       | Posts maintenance events on the eventq. Also updates maint_task.state. |
^ Run mode             | cron |
^ Default scheduling   | Every 5 minutes ( */5 * * * * ) |
^ Log file             | maintengine.log |
^ Programming language | Python |
^ Further doc          | Old doc: [[http://nav.uninett.no/static/reports/tigaNAV.pdf|tigaNAV report ch 8]]. The maintenance system was rewritten for NAV 3.1. See [[devel:tasklist2006#t3rewrite_the_message_and_maintenance_tool|here]] for more. |
  
==== Details ====
^ Config file          | alertengine.cfg |
^ Log file             | alertengine.log and alertengine.err.log |
^ Programming language | Python |
^ Further doc          | [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 3.7 and 3.8]] (Norwegian). |
  
==== Details ====
  
===== smsd =====
  
  
^ Alias                | The SMS daemon |
^ Polls network        | No |
^ Brief description    | Checks the navprofiles.smsq table for new messages, formats the messages into one SMS and dispatches it via one or more dispatchers with a general interface. Support for multiple dispatchers is handled by a dispatcher handler layer. |
^ Depends upon         | alertEngine fills the navprofiles.smsq table |
^ Updates tables       | Updates the sent and timesent values of navprofiles.smsq |
^ Run mode             | Daemon process |
^ Default scheduling   | Polls the sms queue at a configurable interval (the ''delay'' setting, default 30 seconds) |
^ Config file          | smsd.conf |
^ Log file             | smsd.log |
^ Programming language | Python (Perl in 3.1) |
^ Further doc          | subsystem/smsd/README in the NAV sources describes the available dispatchers and more |
  
  

==== Details ====

=== Usage ===

As described when given the ''%%--help%%'' argument:

  Usage: smsd [-h] [-c] [-d sec] [-t phone no.]
    -h, --help            Show this help text
    -c, --cancel          Cancel (mark as ignored) all unsent messages
    -d, --delay           Set delay (in seconds) between queue checks
    -t, --test            Send a test message to <phone no.>

Especially note the ''%%--test%%'' option, which is useful for debugging when experiencing problems with smsd.

=== Configuration ===

The configuration file smsd.conf lets you configure the following:

^ parameter ^ description ^ default ^
| username | System user the process should try to run as | navcron |
| delay | Delay in seconds between queue runs | 30 |
| autocancel | Automatically cancel all messages older than 'autocancel', 0 to disable. Format like the PostgreSQL interval type, e.g. '1 day 12 hours'. | 0 |
| loglevel | Filter level for log messages. Valid options are DEBUG, INFO, WARNING, ERROR, CRITICAL | INFO |
| mailwarnlevel | Filter level for log messages sent by mail. | ERROR |
| mailserver | Mail server to send log messages via. | localhost |
| dispatcherretry | Time, in seconds, before a dispatcher is retried after a failure | 300 |
| dispatcherN | Dispatchers in prioritized order. Cheapest first, safest last. N should be 1,2,3,... | dispatcher1 defaults to GammuDispatcher |

In addition, some dispatchers need extra configuration, as described in comments in the config file.
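As a minimal sketch, a smsd.conf based on the defaults above might look like this; the real file is commented and may organize the options differently, so treat the layout here as an assumption:

<code>
# Example values only -- see the smsd.conf shipped with NAV for the
# authoritative syntax and the dispatcher-specific options.
username = navcron
delay = 30
autocancel = 0
loglevel = INFO
mailwarnlevel = ERROR
mailserver = localhost
dispatcherretry = 300
# cheapest dispatcher first, safest last
dispatcher1 = GammuDispatcher
</code>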

===== snmptrapd =====
  
  
^ Log file             | snmptrapd.log and snmptraps.log |
^ Programming language | Python |
^ Further doc          | - |
  

===== makecricketConfig =====
  
  
^ Polls network        | No |
^ Brief description    | |
^ Depends upon         | That ipdevpoll has filled the gwport, swport tables (and more...) |
^ Updates tables       | The RRD database (rrd_file and rrd_datasource) |
^ Run mode             | cron |
^ Config file          | None |
^ Log file             | cricket-changelog |
^ Programming language | Python |
^ Further doc          | [[howtoconfigurecricket|How to configure Cricket addons in NAV v3]] |

==== Details ====

===== The Cricket collector (not NAV) =====
  
==== Key information ====
^ Updates tables       | Updates RRD files |
^ Run mode             | cron |
^ Default scheduling   | Every 5 minutes (pre-NAV 3.2 had a separate one-minute run mode for gigabit ports; as of NAV 3.2, 64-bit counters are used and the 5-minute run mode applies to //all// counters) |
^ Config files         | directory tree under cricket-config/ |
^ Log file             | cricket/giga.log and cricket/normal.log |
^ Programming language | not relevant |
^ Further doc          | not relevant |
  
^ Log file             | ? |
^ Programming language | Perl |
^ Further doc          | - |
  
^ Log file             | None |
^ Programming language | Python |
^ Further doc          | [[http://nav.uninett.no/static/reports/NAVMore.pdf|NAVMore report ch 2.4]] (Norwegian). |
  
==== Details ====
^ Default scheduling   | |
^ Programming language | |
^ Further doc          | [[Arnold|Arnold]] |
  