The Critical Role of Artificial Intelligence in Large-Scale OSS Automation
Artificial intelligence and machine learning can prove extremely valuable as large-scale OSS processes and service orders grow increasingly complex.
The automation of specific OSS processes (business and operational) has traditionally been approached using rules-based software systems. In other words, the software completes processes using policy-informed workflows, e.g., when the service order’s state changes to “active,” trigger process “X.” This works well for relatively simple product/service offerings, but as bundled services grow in size and complexity, workflow automation may fall short of mission-critical needs or fail altogether.
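The state-change trigger described above can be sketched as a minimal rules engine. This is an illustrative sketch only, not any vendor's implementation; all class and process names here are hypothetical.

```python
# Minimal sketch of a rules-based fulfillment workflow (hypothetical names).
# Each rule maps an order-state transition to the process it should trigger.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ServiceOrder:
    order_id: str
    state: str = "pending"

class RulesEngine:
    """Maps order states to the processes they trigger on transition."""
    def __init__(self) -> None:
        self._rules: dict[str, list[Callable[[ServiceOrder], None]]] = {}

    def on_state(self, state: str, process: Callable[[ServiceOrder], None]) -> None:
        # Register a process to run whenever an order enters `state`.
        self._rules.setdefault(state, []).append(process)

    def transition(self, order: ServiceOrder, new_state: str) -> None:
        # Change the order's state, then fire every rule bound to that state.
        order.state = new_state
        for process in self._rules.get(new_state, []):
            process(order)

# e.g., when the order's state changes to "active", trigger process "X"
log: list[str] = []
engine = RulesEngine()
engine.on_state("active", lambda o: log.append(f"process X for {o.order_id}"))

order = ServiceOrder("ORD-1")
engine.transition(order, "active")
```

The limitation the article describes is visible even here: every behavior must be enumerated as an explicit state-to-process rule, so the rule set grows combinatorially as bundled services add states and dependencies.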
The old approach was to employ a team to manually shepherd orders through to completion. Over time, these teams become invaluable: they learn to overcome the eccentricities of the various order management systems and essentially become experts at gaming the process. This is a difficult situation for any company, forcing it to depend on a small group of operations staff who alone know the complex manual fixes in processes as critical to business success as service fulfillment.
The modern remedy for this challenge is to build contextual awareness into the software systems so they can make human-like decisions based on a large number of variables. It is critical that systems can do this at scale, as digital service orders grow more complex and the number of jobs that must be completed within a process grows beyond what is possible to do manually within stated SLAs.
OSS Is Moving Toward a Post-Manual, Post-Rules Age
Artificial intelligence (AI) and machine learning have proven extremely useful in BSS disciplines such as customer retention, customer experience management, customer behavior modeling and fraud detection. Yet in OSS and the multi-dimensional network layer, the use of AI and machine learning has thus far been far sparser. Many see AI and machine learning as an extension of the big data analytics projects that have been running on the operations side of telecom for years, with mixed results. One of operators’ main reservations about big data analytics is the time and resources that must be spent preprocessing, normalizing and standardizing the data coming from the various network domains. Then, in an OSS context, this network data must be correlated with the incoming demand from active service orders and contextual information coming from BSS.
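The normalization burden described above can be illustrated with a toy example: records arriving from two network domains in different shapes must be mapped into one common schema before they can be correlated with service orders. All field and domain names here are hypothetical, chosen only to show the pattern.

```python
# Sketch: normalizing records from two hypothetical network domains
# into one common schema before correlating them with service orders.

def normalize_ip_domain(rec: dict) -> dict:
    # The IP domain reports utilization as a percentage (0-100).
    return {"element_id": rec["node"], "metric": "utilization",
            "value": rec["util_pct"] / 100}

def normalize_optical_domain(rec: dict) -> dict:
    # The optical domain reports load as a fraction (0.0-1.0) already.
    return {"element_id": rec["ne_name"], "metric": "utilization",
            "value": rec["load"]}

ip_records = [{"node": "R1", "util_pct": 72}]
optical_records = [{"ne_name": "O9", "load": 0.41}]

# One unified view, comparable across domains.
unified = ([normalize_ip_domain(r) for r in ip_records] +
           [normalize_optical_domain(r) for r in optical_records])
```

Even this two-domain toy needs a hand-written mapper per source; across dozens of real network domains and vendors, that mapping effort is exactly the cost operators cite.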
Thus, the OSS layer forms the meeting point at which all data insight must be aligned and used. This is where AI can make a huge difference in the way we utilize information.
OSS With Native AI Is a Turning Point
Any service provider’s existing OSS architecture is a multi-vendor environment with a mix of modern and legacy systems. AI is therefore applied to these systems externally, much as a big data analytics platform may sit above the OSS, drawing data from the systems, processing that data and feeding actionable insights back in. Because of the fragmented nature of the OSS architecture, however, this AI work must be done in a siloed fashion. As such, the service fulfillment systems may never be exposed to insights from the service assurance or network planning systems.
This approach is problematic for a number of reasons, but chiefly because it contradicts a central tenet of digital transformation: holistic, end-to-end service management. The answer is to make AI native to the OSS systems and pervasive across them as a single, unified capability. This allows the intelligence derived from all parts of the operator’s business to be acted on from the next level up, in master control.
This concept dovetails perfectly with the ideas detailed in domain orchestration and drives automation projects forward far more quickly than a piecemeal approach could. With AI and machine learning embedded in the orchestration layer, end-to-end master control service management can operate in a DevOps fashion, avoiding manual intervention and allowing repairs and improvements to be made on an ongoing basis.