Monitoring Systems Of Engagement: Riding The Waves Into The Future Of Software
If you’re building software, you are very likely familiar with Conway’s Law. It is one of the most important rules in software development: work with it and you pave the way to success; ignore it and you all but guarantee failure. According to Melvin Conway:
“…organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”
I’ve witnessed this firsthand many times throughout my career: in command-and-control services, Big Data processing systems, and even security components embedded in the very cars you drive. And yet nowhere has it been more apparent to me than in software monitoring.
First Wave: the birth of IT departments
In the early days, just as businesses were adopting computers, they began setting up IT departments. These departments were in charge of digitalizing the organization’s internal processes. This included providing employees with computers and the network to connect them, setting up server rooms, and deploying software such as Microsoft Exchange and Oracle ERP. Today, this arrangement is often referred to as the system of record.
Just as Conway’s Law predicted, this is precisely what they monitored. They made sure that the computers and servers were alive, that the network connected everything, and that server hardware utilization (CPU, memory, storage) stayed within bounds.
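A first-wave health check amounted to little more than a utilization probe. The sketch below illustrates the idea with Python's standard library; the thresholds and the `check_server_health` helper are invented for illustration, not taken from any real monitoring product:

```python
import os
import shutil

# Illustrative thresholds; in practice IT tuned these per server.
MAX_DISK_USED_RATIO = 0.90
MAX_LOAD_PER_CPU = 1.5

def check_server_health(path="/"):
    """Return a list of warnings if basic hardware utilization is out of bounds."""
    warnings = []

    # Storage: fraction of the disk already used.
    usage = shutil.disk_usage(path)
    if usage.used / usage.total > MAX_DISK_USED_RATIO:
        warnings.append("storage nearly full")

    # CPU: 1-minute load average relative to core count (POSIX only).
    load_1m, _, _ = os.getloadavg()
    if load_1m / (os.cpu_count() or 1) > MAX_LOAD_PER_CPU:
        warnings.append("CPU overloaded")

    return warnings
```

An operator would run such a probe on a schedule and page someone when the returned list was non-empty; notice that every question it asks is about the machine, not about the business.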
Second Wave: moving to the cloud
The ground started shifting as infrastructure began moving out of organizations, first to hosting companies, then to cloud computing, and now to PaaS and CaaS platforms. An ever-growing number of utility services were being consumed as SaaS, reducing the number of services IT had to deploy and operate.
At the same time, IT was being shifted from the back office to the front, and business management expected it to be the very focus of the customer’s journey. Mobile apps, flashy marketing websites, online purchasing, and loyalty programs became the norm, built by in-house developers as well as outsourcing companies.
Here, again, a new generation of software monitoring solutions arose: Cloud Monitoring tools, which are largely drop-in replacements for data-center monitoring tools, alongside Application Performance Monitoring (APM). APM tools provide quality-of-service metrics such as uptime and latency, allowing IT to detect and remediate service disruptions.
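The quality-of-service metrics mentioned above can be illustrated with a toy calculation. The probe results below are made-up sample data; real APM agents derive these numbers from live traffic:

```python
# Each probe is (succeeded, latency_ms); invented sample data for illustration.
probes = [(True, 120), (True, 95), (False, 0), (True, 210), (True, 130)]

# Only successful probes contribute to the latency figure.
success_latencies = [latency for ok, latency in probes if ok]

uptime_pct = 100.0 * len(success_latencies) / len(probes)
avg_latency_ms = sum(success_latencies) / len(success_latencies)

print(f"uptime: {uptime_pct:.1f}%")            # 4 of 5 probes succeeded -> 80.0%
print(f"avg latency: {avg_latency_ms:.1f} ms")
```

Uptime and latency answer "is the service up and fast?", which is a step beyond "is the server alive?", but still says nothing about whether customers are engaged.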
Third Wave: monitoring beyond IT
Today, we have passed the tipping point. Most IT and business leaders want to offload their systems of record to SaaS providers as much as possible. At the same time, digital presence is becoming a key differentiating factor for all enterprises, even in traditionally offline markets. All these new services are being deployed to the cloud, with little to no “classic” IT involvement.
With all this occurring, where is software monitoring to go? If our business leaders are measuring us by how we engage the brand’s customers and the business value we are generating, then that is how we must measure our software. In fact, many forward-looking digital companies such as Facebook and Netflix have been monitoring themselves using metrics such as ‘likes’ and ‘hours watched’ for a long time now.
What’s next?
IT organizations are freeing themselves from the shackles of traditional IT, focusing more and more on agile, DevOps, and product-centric approaches to delivering unique software.
By monitoring software through the lens of business values and metrics, we have a much better perspective of how we are doing. As the CTO of a DevOps-oriented startup, I’ve learned that by doing so we make better decisions, allocate resources where they will truly make an impact, and are able to show our superiors and subordinates why it matters.
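The shift from infrastructure metrics to business metrics can be sketched in a few lines. The event names (`visit`, `signup`, `purchase`) and the conversion calculation below are invented for illustration; in practice these events stream from the product itself:

```python
from collections import Counter

# Invented engagement events; a real system would consume these from an
# analytics pipeline rather than a hard-coded list.
events = ["visit", "visit", "signup", "visit", "purchase", "visit", "signup"]

counts = Counter(events)

# Business-level metrics, not CPU or uptime:
signup_conversion = counts["signup"] / counts["visit"]
purchases = counts["purchase"]

print(f"signup conversion: {signup_conversion:.0%}")  # 2 signups / 4 visits -> 50%
print(f"purchases: {purchases}")
```

A dashboard built on numbers like these tells you whether the software is achieving its business purpose, which is exactly the lens this section argues for.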
You know what your software sets out to achieve. Define it. Measure it. Track it. Optimize it.