Service Management must not shy away from application development
Most Service Management organizations are full of process people. Then there are the tools teams, which are full of tools people: individuals who are experts in specific tools such as Incident Management tools, Monitoring tools, and Desktop Management tools. The process people and the tools people tend to be disconnected from one another. Of course, the natural progression that we all hope for is that the process people blend with and enable the tools people, and vice versa, regardless of the organization's reporting structure. But that progression is not enough.
We need to take a fundamental leap forward in this area: Service Management needs to think like an application development team. Further, Service Management needs to add developers to the skillset mix. No tool will ever deliver on an objective of Service Management on its own. The tool must support an effective process. And the tool must be integrated into the fabric of the teams using the process.
To be successful with Service Management, one needs to bring processes, tools, system-integration code, data, and reporting together in a harmonious way to support the process and organizational goals. Small, simple tasks like "write a custom script to pull alerts from Service Provider X's API into our monitoring" scare most Service Management and tools teams beyond belief. That sort of fear will not bode well for our goal of evolving our IT organization to support the cloud.
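To make that "small, simple task" concrete, here is a minimal sketch of such a script. Everything in it is an illustrative assumption: the provider URL, the alert field names, and the internal event shape are made up for the example, since Service Provider X's real API would define its own.

```python
import json
import urllib.request

# Assumed endpoint for the example; a real provider would document its own URL.
PROVIDER_ALERTS_URL = "https://api.provider-x.example/v1/alerts"


def fetch_provider_alerts(url=PROVIDER_ALERTS_URL, token="CHANGE_ME"):
    """Pull the raw alert list from the provider's (assumed) JSON endpoint."""
    request = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def to_monitoring_event(raw_alert):
    """Map one provider alert (assumed field names) into our internal
    monitoring event shape, so it flows into our tooling like any other alert."""
    return {
        "source": "provider-x",
        "id": raw_alert["alertId"],
        "severity": raw_alert.get("severity", "unknown"),
        "summary": raw_alert.get("description", ""),
        "raised_at": raw_alert["timestamp"],
    }


# Usage (hypothetical): for raw in fetch_provider_alerts(): ingest(to_monitoring_event(raw))
```

The point is not the twenty lines of code; it is that a Service Management team with even one developer on the virtual team can close gaps like this routinely instead of fearing them.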
To support the introduction of public and private cloud, we must bring together the skillsets under a common set of goals aimed at “Service Management”. I am not necessarily making an organizational chart statement here. What I am suggesting is that the team for service management - even if the team is a virtual team - must have a common goal and the [virtual] team must include diverse process, tool, code, data, and reporting skillsets. One [virtual] team aimed at one set of Service Management objectives.
The naivety of “one tool to rule all tools”
So many organizations embark on a journey to improve a process like Incident Management, and they focus on the tool. They spend years deploying a tool and migrating all teams to that tool. Then, the company acquires another company and they start the process again. Even if there are no mergers or acquisitions, every organization consumes services from one or more third parties like internet connectivity, telephone services, data management, etc.
Of course, we cannot force our suppliers to adopt our instance of our tool. The reality is that many tools are involved in every organization's Incident Management flow.
So what are we to do? We have several options:
- Option A: Ignore reality and continue down the path of a single tool
- Option B: Throw out the concept of a central tool and give our best effort to meet the goals of the processes without a tool or common schema
- Option C: Focus only on the integration aspects and focus on being successful with the goals and objectives of every process by taking a metadata approach
- Option D: Combine "A" and "C" whereby we push a central tooling agenda where it makes sense, and we enable our processes beyond our boundaries by adding a metadata approach
We should all be driving ourselves towards “Option D” as we evolve to the cloud. This point should help the argument for bringing application development skills into the Service Management fray because without those system-integration, data, and reporting skills at Service Management’s disposal, we will not be successful with Option D.
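The "metadata approach" at the heart of Option D can be sketched in a few lines. The idea is that each tool keeps its native record format, and we project every record onto a small common metadata schema so the process can span tools and suppliers. The tool names and field layouts below are assumptions invented for the example, not real products.

```python
# Common metadata fields every Incident-like record is projected onto.
COMMON_FIELDS = ("record_id", "source_tool", "service", "severity", "opened_at", "status")

# Per-tool mapping of native field name -> common field name.
# Both tools and their record layouts are hypothetical.
FIELD_MAPS = {
    "acme_itsm": {
        "ticketNumber": "record_id",
        "svc": "service",
        "prio": "severity",
        "created": "opened_at",
        "state": "status",
    },
    "vendor_portal": {
        "caseId": "record_id",
        "serviceName": "service",
        "impact": "severity",
        "openedDate": "opened_at",
        "caseStatus": "status",
    },
}


def normalize(record, source_tool):
    """Project a tool-specific record onto the common metadata schema,
    tagging it with the tool it came from."""
    mapping = FIELD_MAPS[source_tool]
    common = {"source_tool": source_tool}
    for native_field, common_field in mapping.items():
        if native_field in record:
            common[common_field] = record[native_field]
    return common
```

With a mapping table like this, adding an acquired company's tool or a supplier's portal becomes one more entry in `FIELD_MAPS` rather than a multi-year migration, which is exactly why Option D needs system-integration and data skills inside Service Management.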
The battle: Alerts versus Incidents
As I have already mentioned, it is sometimes the case that process people and tools people do not integrate well enough. Downstream from that failure to integrate, we see Incident Records and alerts treated as completely disparate things. But if we go back to Incident and Problem Management as defined in Service Operations, the first step in both processes is DETECTION. And as we discussed in the first post in this series (Building Service Monitoring as a Service with an Eye on the Cloud: The Future of Service Management in the Era of the Cloud), to be a great service provider to our customers and users, we need to be great at automated detection.
Automated detection is another way to say monitoring. Therefore, alerts should be looked at as triggers for Incidents and Problems. Ultimately, alerts are just an extension of Incident (and Problem) Records. Also, interestingly enough, the data within the alerts is critically important to extending our data-driven understanding of Incident timelines in both the micro (single Incident) and macro (groupings of Incidents) views.
We need to look at the alerts, and the data therein, as a subset of the Incident and Problem data rather than as an orthogonal set of data. Ultimately, the solution is one of data and not of tools. We must not be so pedantic about the distinction between the monitoring tool and the Incident tool. We must look at it as a Service Monitoring [and Incident Management] Service.
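Treating alerts as a subset of Incident data, rather than an orthogonal set, can be sketched as two small views over the same records. The field names are assumptions for illustration: the micro view merges an Incident's milestones with its triggering alerts into one timeline, and the macro view groups alerts across Incidents to surface patterns.

```python
from collections import defaultdict


def incident_timeline(incident, alerts):
    """Micro view: merge one Incident's milestones with its triggering
    alerts into a single, time-ordered timeline. Assumed fields: alerts
    carry incident_id/raised_at/summary; incidents carry id/milestones."""
    events = [
        {"time": a["raised_at"], "type": "alert", "detail": a["summary"]}
        for a in alerts
        if a.get("incident_id") == incident["id"]
    ]
    events += [
        {"time": stamp, "type": "milestone", "detail": name}
        for name, stamp in incident["milestones"].items()
    ]
    return sorted(events, key=lambda e: e["time"])


def alerts_per_service(alerts):
    """Macro view: count alerts by service across many Incidents to spot
    recurring pain points that single-Incident views would miss."""
    counts = defaultdict(int)
    for alert in alerts:
        counts[alert.get("service", "unknown")] += 1
    return dict(counts)
```

Because both views read the same alert records, the alert data enriches the Incident timeline (micro) and the trend analysis (macro) without anyone arguing about which tool "owns" the data.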
More blog posts in the Building Service Monitoring as a Service with an Eye on the Cloud series
Read the first blog post from Carroll Moon, Service Monitoring as a Strategic Opportunity.
Read the second post, The Future of Service Management in the Era of the Cloud.
Read the fourth post, Service Monitoring Service Outputs.
Read the fifth post, Service Monitoring Service.
Read the sixth post, Building Trust in the Service Monitoring Service.
Read the seventh post, Making the Service Monitoring Service Viral.
Read the eighth post, Service Monitoring Application Development.
Read the ninth post, Monitoring Service Health.
Read the tenth post, Delivering the Service Monitoring Service.
Read the final post, The service monitoring service – rounding it all up.