OpenNMS Hardware Specs Case Study

Back in December I visited a client site that had been running OpenNMS for a couple of years. The main reason for the trip was to migrate to new hardware, as the old system was badly overloaded and slow, with a load average hovering between 6 and 10.

We replaced it with this:

Dell R420 Server

  • 2x Intel Xeon E5-2430 2.20GHz, 15M cache, 6-core processors
  • 24 GB RAM (6x 4GB 1333MHz DIMM)
  • 2x 500GB 7.2K RPM SAS 6Gbps 2.5” HDD – in RAID 1
  • 2x 100GB SATA Enterprise Value Solid State Drives 2.5” – in RAID 1
  • PERC H710 RAID controller, 512MB NV Cache
  • Intel Quad-port 1Gb Ethernet Adapter (add-on)
  • 2x onboard 1Gb Ethernet adapters

Here are the stats from today:

 09:23:56 up 30 days, 19:20,  2 users,  load average: 0.11, 0.06, 0.06

This system is monitoring 950 nodes, 4,841 interfaces, and 10,171 services, and it maintains 84,738 RRD files. We put the OS and the application on the regular hard drives and /var/opennms on the SSDs. The database lives on a separate server on the same switch.
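
For the curious, the disk layout boils down to two mounts. Here is a minimal /etc/fstab sketch; the device names, filesystem, and mount options are my assumptions, since the PERC controller presents each RAID 1 array as a single block device:

    # /etc/fstab excerpt (sketch only: device names, filesystem, and options are assumptions)
    /dev/sda1   /              ext4   defaults           1 1   # HDD RAID 1: OS and application
    /dev/sdb1   /var/opennms   ext4   defaults,noatime   1 2   # SSD RAID 1: RRD files

The noatime option is a common tweak for write-heavy RRD storage, since it avoids an extra metadata update on every read.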

Note that we also switched from JRobin to RRDtool. While your mileage may vary, I have seen a number of installations benefit greatly, performance-wise, from that switch. On new installs I always set RRDtool as the default (it’s not the default in the main distribution because of the extra step of getting and installing the RRDtool package). The downside is that there is no easy way to convert your old data.
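
For reference, the switch itself lives in one file. This is a sketch of the usual edit to $OPENNMS_HOME/etc/rrd-configuration.properties; it assumes the jrrd JNI bindings are installed, and the jar and library paths below will vary by distribution:

    # rrd-configuration.properties (sketch: jar and library paths are assumptions)
    # Use RRDtool through the jrrd JNI bindings instead of the default JRobin strategy
    org.opennms.rrd.strategyClass=org.opennms.netmgt.rrd.rrdtool.JniRrdStrategy
    org.opennms.rrd.interfaceJar=/usr/share/java/jrrd.jar
    opennms.library.jrrd=/usr/lib/libjrrd.so

Restart OpenNMS after the change; new data will be written by RRDtool as .rrd files, while the old JRobin .jrb files are left behind untouched.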

I thought I’d share these numbers in case anyone else is looking to monitor a network of a similar size.

If you learned something from this, or would like to learn more about OpenNMS and performance, don’t miss the 2013 OpenNMS Users Conference being held at the University of Fulda in March. We already have people from six countries registered, and I’m surprised my friends from Belgium, the Netherlands, and the UK have yet to “represent”. Early-bird registration ends on the 15th.