Network monitoring has evolved considerably over the last three decades. As new technologies and needs come into play (e.g., cloud and IoT), older monitoring methods can no longer cope with the complex nature of modern networks. In the previous two decades, we saw two main technologies at the forefront of network monitoring: SNMP in the 1990’s and flow data analysis in the 2000’s. Both remain relevant and irreplaceable, but in this decade we are seeing the emergence of synthetic end-user monitoring. I discussed this in my previous post, The Takeoff of Real User Monitoring.
1990’s: SNMP
The foundation for SNMP was laid in the 80’s, and SNMPv1 was standardized in 1988. The 1990’s was the decade in which SNMP saw wide adoption, along with the Internet, WAN, and other technologies. Today, SNMP has reached version 3, and it is a mature technology that is indispensable for network management and monitoring.
Pros: mature and standardized technology that gives essential health information about network devices
Cons: device information gives only a partial view of the network’s status
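To make the SNMP pros and cons concrete, here is a minimal polling sketch. It simply wraps the net-snmp `snmpget` command-line utility from Python and assumes that tool is installed and that the device at the placeholder address 192.0.2.1 allows SNMPv2c reads with the "public" community string; a production poller would use a dedicated SNMP library and handle SNMPv3 credentials, retries, and many more OIDs.

```python
import subprocess

# Standard MIB-2 OIDs: system uptime and inbound octets on interface index 1.
OIDS = {
    "sysUpTime": "1.3.6.1.2.1.1.3.0",
    "ifInOctets.1": "1.3.6.1.2.1.2.2.1.10.1",
}

def snmp_get(host, community, oid):
    """Query one OID with the net-snmp 'snmpget' utility and return its raw output."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, host, oid],
        capture_output=True, text=True, timeout=5,
    )
    return result.stdout.strip() if result.returncode == 0 else None

# 192.0.2.1 and "public" are placeholders for a real device and community string.
for name, oid in OIDS.items():
    print(name, "->", snmp_get("192.0.2.1", "public", oid))
```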
2000’s: Flow Data
One of the first interface traffic capture tools was tcpdump, released in 1987. Another very popular traffic analyzer with a GUI is Wireshark, first released in 1998 (as Ethereal). So capturing network traffic is not a new idea, but it took center stage in the 2000’s, when vendors introduced specialized hardware taps to capture data from any network location, along with accompanying software to manage and analyze the captured data. By knowing all the traffic that goes through your network, you can do data and network forensics, intrusion detection, and traffic analysis. However, tap devices may be cost-prohibitive for small to medium-sized enterprises, may require excessive data storage, and trying to use all this data for day-to-day troubleshooting is like “hunting a deer with a microscope” (as a fellow network engineer describes packet capture data).
Pros: fine-grained detail of all the traffic on the network, intrusion detection, forensic data
Cons: high data storage requirements, high cost, difficult to deal with the volume of data captured, useful only when there is user traffic on the network
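As a rough illustration of what flow-style analysis looks like in practice, the sketch below captures packets with the third-party Scapy library and aggregates byte counts per source/destination pair, which is the essence of a traffic flow record. It assumes Scapy is installed and that the script runs with the privileges required for packet capture; real flow exporters (NetFlow, sFlow, IPFIX) do this on the device or tap rather than on a host.

```python
from collections import defaultdict

from scapy.all import IP, sniff  # third-party library; capture usually needs root/admin

# Byte counters keyed by (source IP, destination IP): a minimal "flow" record.
flows = defaultdict(int)

def account(pkt):
    """Add each captured packet's length to its flow counter."""
    if IP in pkt:
        flows[(pkt[IP].src, pkt[IP].dst)] += len(pkt)

# Capture 1,000 packets on the default interface, then print the top talkers.
sniff(prn=account, count=1000, store=False)
for (src, dst), nbytes in sorted(flows.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{src} -> {dst}: {nbytes} bytes")
```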
2010’s: End-User Monitoring
Since there is nothing new under the sun, this is not a new idea either. We can trace the concept of synthetic end-user monitoring back to the release of the humble Ping utility, first developed in 1983. I would bet Ping is one of the most commonly used utilities, not only in networking but across all of IT. As a cofounder of a synthetic network monitoring company, I have spoken with quite a few network engineers who have built or used homegrown synthetic network monitoring solutions. The need to capture end-to-end user experience data has been around since the beginning of networking.
In the era of cloud computing and the SaaS-ification of all services and applications, end-user monitoring takes center stage as one of the most efficient and effective ways to make sure that all your services and applications are delivered to all your users at every network location.
Pros: scalable, end-user experience, end-to-end monitoring
Cons: only synthetic data
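In the same spirit as the homegrown tools mentioned above, a synthetic probe can be as simple as timing a connection to a service from the user’s vantage point. The sketch below measures TCP connect latency to a placeholder endpoint (app.example.com is hypothetical) once a minute using only the Python standard library; dedicated tools layer ICMP ping, DNS, HTTP transactions, scheduling, and alerting on top of this basic idea.

```python
import socket
import time

def probe(host, port, timeout=2.0):
    """Return TCP connect latency in milliseconds, or None if the target is unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return None

# app.example.com is a placeholder; point this at a service your users depend on.
while True:
    latency = probe("app.example.com", 443)
    print("unreachable" if latency is None else f"{latency:.1f} ms")
    time.sleep(60)
```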
More and more companies are adding synthetic network monitoring to their arsenal to complement or even replace SNMP and flow data monitoring. The driving force behind this is the transition from strictly closed WAN infrastructure to an open architecture in which a branch office may have just a border router and use multiple ISPs to reach a private or public datacenter. Under these circumstances, the network engineer has visibility only at the two ends: the branch office and the datacenter. Consequently, end-to-end user monitoring is one of the few methods they can rely on to verify that applications and services are delivered to their users.
If you haven’t looked into an end-user monitoring tool, then I encourage you to take a look at NetBeez by signing up for a one-to-one demo.