
Dealing with Transformation Change

By: John Locke, Consultant, MD, JCL Technologies Ltd

Having worked in the IT industry for many years, writing about legacy OSS, infrastructure and transformation challenges, I often wonder why so many businesses struggle with change and why “transformation” is becoming a dirty word. What is all the fuss about? I suspect much of this apprehension is for the wrong reasons. This is neither too difficult nor an impossible feat. Resolving legacy environments is certainly not a new problem; the challenge is well understood, and there are many skills at a company’s disposal that can help.

Factors Contributing to Transformation Difficulties

So, what is the main contributor to this problem? Is it the arrival of digital services, cloud solutions and virtualization? Is it the inability to provide consumer content to multiple services or the inability to be agile and quick when providing new services? Or maybe it is the inability to gather customer insights that can possibly drive product innovation. I suspect the source of transformation difficulty is a mix of all the above.

The level of change is unparalleled and is still accelerating without a sign of slowing down. This creates multiple concerns for governance teams regarding:
• Risk,
• Service impact during the transformation life cycle, and
• Maintained business growth.

For companies that can adopt, or are in a position to start, a fresh or greenfield approach, the risk and impact are minimal, and the cost is considerably lower and quicker to realize. However, that does not alleviate the problems the change creates for the organization.

Mitigate Transformation Challenges

So, could current telecommunications providers adopt a greenfield or ground-zero approach? Is this a workable option for achieving the desired business outcomes? Another idea to consider is a hybrid approach, which may better fit the business needs.

I believe a multifaceted approach is the preferred method for achieving the best outcome. It creates less risk and cost while improving delivery, and it has a better chance of delivering the business outcomes while minimizing concern about the investment over the program’s duration.

Change requires many aspects to be aligned. Clarity is needed to understand how those changes will be achieved. To get to a point where you are ready for this transformation process, ask the following questions:

• What is the current pulse of the business?
• Does the IT team have the respect and influence within the company to implement these changes?
• Do we have the right level of partners and commercial capabilities to craft the complex contracts?
• What additional resources are required for onboarding?
• Do we have the experienced personnel to tackle this level of change?

Teams Play a Vital Role in Transformation

Changing behavior to achieve the desired outcome will come down to having the right managers in place to deliver the message and define the team and organization structure. Defining clear lines of communication and upward reporting is a necessity for a smooth transformation process. It will also need the right people with the right skills – including clear contract deliverables and SLAs for third parties. The good news is that experienced management is not a new requirement; effective team management has been needed for as long as management has existed.

Finally, change will require incredible communication from the top to the bottom of the organization. This is needed to maintain excellent project governance, provide business insight and visibility for speedy approvals and reduce any cyclic project delays. The change organization should have robust structure, clear lines of authority and senior sponsorship to maintain cohesive collaboration and commitment, all with the common goal to deliver the projects on time and within budget.

The team should be able to deal with all exceptions quickly to reduce the possibility of stalling. It also needs to be available to provide clarity on directional changes impacting the program. Furthermore, if the speed of change is to be maintained, the team will need to manage the budget, skill gaps and general problems that require management decisions.

Change requires a coordinated approach that cuts across the whole organization. It needs absolute sponsorship and support at the C-level and the appropriate team structure. The right people with the right experience can deliver the transformation strategy successfully and within the required timeline. The bottom line: the entire staff must be engaged and silos must be removed, resulting in a positive environment in which to achieve transformational change.

About John Locke:
John is a highly motivated and genuinely flexible CIO/CTO demonstrating over 25 years of experience driving game-changing technology-driven and business-centric transformations within multi-national organizations. In his role, he helps formulate and deliver effective strategies facilitating increases in business performance in terms of capability, efficiency, customer satisfaction and profitability. John previously held roles at Tata Consultancy UK & Europe as Infrastructure Services CTO, Tata Communications as Group CIO and Vanco as CTO.

Understanding the Culture of Telecommunications Legacy DNA

By: John Locke, Consultant, MD, JCL Technologies Ltd

The telecommunication companies we know today are very different from the legacy monopoly organizations of many years ago, when monopolies ruled the landscape and were the only option if you required telephone or fixed-line services. It was even more challenging when you had to work with multiple telecommunication providers, not to mention the complications when international services were required. These issues forced customers to act as intermediaries when stitching services across multiple carriers or dealing with service issues, and resolving those issues generally took the customer a considerable amount of time.

A brief look at the history of the telecommunications industry

Culturally, telecommunication companies rarely had to consider the threat of competition or the competitive challenges that the private industry would normally have to consider. Since the early 1990s the telecommunications industry has gone through monumental changes:

• Monopolies were removed,
• New carriers appeared, and
• Competition increased, to the point where telecommunication companies arguably came to be viewed as utilities.

Over the last 25 years we have seen telecommunications industry margins erode due to fierce competition and massive technology changes. Additionally, companies were required to quickly deliver new competitive solutions to increase revenues if they were to maintain any advantage over competitors.

It’s easy to see how the past has such an impact on the ability to generate the momentum and mindset needed to transform the business. This is not an easy proposition. It typically requires cohesive engagement to support any technology upgrade program that may span all Lines of Business (LOB) – including business process, organization structure and people.

Being successful in your transformation approach

So how can you approach the problem of transformation and be successful? One option is to not get bogged down in the legacy environments, which normally consume 90% of the effort, cost, process and people just to reach the point in the legacy estate where transformation can begin. Given the complexity and time needed to deal with the legacy component, is there another approach?

Approaching this transformation as if it were a new program – with the legacy components removed – could be one answer, providing the focus required to help new technology bear fruit quickly. It would certainly change the mindset of the team assigned to the program: they would now be 100% allocated and committed to its success.

Another option may be to retire all or part of the legacy environment; however, I suspect some integration will be required to maintain business continuity. The point is that, in general, there is a fixation on the legacy. Whatever the issue, the cord will need to be cut if the burden is to be removed.

Transformation is like any other program. It requires:
• The right approach,
• The commitment of the business, and
• The right mindset from the people involved.

It is important to note that the legacy element will always be underestimated, causing the overall program to overrun. From what I have seen, what is required at the start of a transformation program is very different from the finished article, because changes are a normal part of the program’s lifecycle.

More importantly, what actually needed to be carried forward from the legacy into the transformation turned out to be considerably less than was assumed at the start of the program. I believe that as the business process evolves around the transformation, it naturally becomes simpler, ending with better alignment to the new business design – a reduced product catalogue, improved efficiency and improved automation.


There is a new awareness concerning the need for service assurance, and as 2013 draws to an end I want to talk about this global need. Because converged services are the new standard, communication service providers face multiple challenges in supporting new digital services with the legacy tools they have used for the last decade or more. The problem is that service providers are still deploying silo-based tools that each manage a specific function, yet these tools actually add complexity and cost when managing a new or converged digital service. The fundamental challenge: how can a service provider ensure optimal service – that is, service assurance across its customers and markets, whether consumer or commercial – while also driving operational efficiency? Many large enterprises and managed service providers face this same challenge of silo-based tools.

It is clear that the network monitoring space has in effect atrophied, as I reviewed in my previous blog, Addressing the Current State of OSS and NMS Solutions. I recently read that the market leaders do not innovate – they acquire a portfolio of disjointed products, and then atrophy sets in. The atrophy becomes evident when customers get hit with increased maintenance fees and new pricing plans, incurring new costs for software that is basically a decade old or more.

To address these challenges, a service provider needs a next-generation solution: one that supports end-to-end service assurance for converged services while also driving operational benefits. The key is end-to-end, cross-domain correlation of resources to services to customers. To do so, a solution must support topology management across physical, logical and virtual environments, and it must be protocol agnostic – supporting everything from the transport layer to the application layer. It must also be integrated with the IT and OSS environment, including CRM for SLA requirements, inventory for accurate resource views, ticketing for incident management, and more. Historically, the fault management space could support any protocol – this is what made Micromuse the standard in that space. The performance management market, by contrast, is clearly still protocol dependent, like the legacy root cause analysis vendors for IP networks. An even larger problem, in my opinion, is how to do cross-domain correlation across diverse software instances from multiple vendors, all of which have differing time stamps for each “event occurrence” that may be affecting a service.
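Correlating events whose timestamps come from different tools is, at its core, a normalization problem. Here is a hedged sketch of the idea – the event shapes, source names and five-minute window are my own assumptions for illustration, not a description of any vendor’s implementation. Convert every timestamp to UTC, then group events on the same service that fall inside a common window:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical events from three silo tools, each reporting in its own
# local timezone -- the "differing time stamps" problem described above.
raw_events = [
    {"source": "fault_mgr", "time": "2013-12-18T14:05:10-05:00", "service": "VoIP"},
    {"source": "perf_mgr",  "time": "2013-12-18T19:05:55+00:00", "service": "VoIP"},
    {"source": "ticketing", "time": "2013-12-18T20:06:20+01:00", "service": "VoIP"},
]

def to_utc(ts: str) -> datetime:
    """Normalize an ISO-8601 timestamp (with offset) to UTC."""
    return datetime.fromisoformat(ts).astimezone(timezone.utc)

def correlate(events, window=timedelta(minutes=5)):
    """Group events on the same service whose UTC times fall within one window."""
    events = sorted(events, key=lambda e: to_utc(e["time"]))
    groups, current = [], [events[0]]
    for ev in events[1:]:
        if (ev["service"] == current[-1]["service"]
                and to_utc(ev["time"]) - to_utc(current[0]["time"]) <= window):
            current.append(ev)
        else:
            groups.append(current)
            current = [ev]
    groups.append(current)
    return groups

groups = correlate(raw_events)
# All three events normalize to 19:05:10Z, 19:05:55Z and 19:06:20Z,
# so they land in a single correlation group despite three "local" clocks.
print(len(groups), [e["source"] for e in groups[0]])
```

Real cross-domain correlation adds topology and service context on top of this, but the time-normalization step is the precondition for everything else.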

To effectively support true service assurance, a solution must be unified across silos and domains, able to normalize data from any source, be able to support any form of topology regardless of the domain, and enable real time visualization in a multi-tenant manner. For the record, I just described our AssureNow solution.

Happy Holidays. We look forward to helping you address your challenges in 2014 with the AssureNow solution (http://assurenow.io/what-we-deliver/unifiedserviceassurance/), assuring new and existing services to meet your customer experience management goals while also driving new levels of operational efficiency.


Where Is The Customer In Your Sea Of Cloud Data And Metrics?

If you have migrated your applications to the cloud — whether a public cloud, a private cloud, or hybrid — you know that the virtual instances and underlying cloud computing platform provide a sea of data for analysis. You are probably drowning in it.

Platforms such as Amazon Web Services, Microsoft Azure, Google Cloud, and OpenStack all provide data to help system administrators manage CPU, disk, and bandwidth, and still more data can be gathered on top of these baseline metrics.

The volume of data generated by these platforms is enormous, and it is very useful for resolving network monitoring issues – when you only care about a few data points.

But this doesn’t answer questions about how well services are running or the customer’s actual experience.

Customer and Service Assurance

Monolith’s AssureNow unified service assurance framework provides end-to-end visibility across domains to enable you to proactively — in real time — manage customer service level agreements for availability and performance.

AssureNow Service Management goes beyond determining whether the systems and applications are running as expected, answering questions about service quality and the actual customer experience.

The AssureNow solution provides comprehensive data collection from a large number of network elements and systems. It aggregates, correlates, and analyzes volumes of data to provide insights on services and customers, not just networks. AssureNow’s powerful analytics engine provides fault, metric and topology management to cull out the service and customer information from the vast quantities of data coming fast and furious from network devices and systems.

AssureNow correlates Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs) for specific services and customers – things your business cares about. This enables you to see the forest for the trees, measure the performance of your services, and make intelligent business decisions.
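To make the KPI-to-KQI relationship concrete, here is a minimal sketch of how raw KPIs might roll up into a single customer-facing KQI. The metric names, targets and weights are illustrative assumptions, not AssureNow’s actual model:

```python
# Hypothetical per-service KPIs: raw measurements in their native units.
kpis = {
    "packet_loss_pct": 0.4,    # lower is better
    "latency_ms": 85.0,        # lower is better
    "availability_pct": 99.95, # higher is better
}

def kpi_score(name, value):
    """Map a raw KPI onto a 0-100 quality scale against a (target, worst) pair."""
    targets = {  # (target value, worst acceptable value) -- illustrative only
        "packet_loss_pct": (0.0, 2.0),
        "latency_ms": (50.0, 250.0),
        "availability_pct": (100.0, 99.0),
    }
    target, worst = targets[name]
    span = abs(worst - target)
    score = 100.0 * (1.0 - abs(value - target) / span)
    return max(0.0, min(100.0, score))  # clamp to the 0-100 scale

def service_kqi(kpis, weights=None):
    """Weighted average of KPI scores = one quality number per service."""
    weights = weights or {k: 1.0 for k in kpis}
    total = sum(weights.values())
    return sum(kpi_score(k, v) * weights[k] for k, v in kpis.items()) / total

kqi = service_kqi(kpis)
print(round(kqi, 1))  # → 85.8
```

The design point is that the KQI, not any single device counter, is what you threshold and trend against the customer’s experience.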

Cloud Management

AssureNow also helps you manage your cloud by gathering information on cloud service topology, load balancers, and other components, so you can make smart business decisions.

Don’t Drown in a Sea of Data

AssureNow’s ability to unify, simplify, and automate data from any source and to manage it from a customer service agreement perspective can give you a competitive advantage. This service level view, as measured by KPIs and KQIs, lets you focus on the customer without drowning in a sea of data.

Monolith’s AssureNow, Legacy NMS Tools, and the Future

Over the last few weeks I have gotten calls from friends in the industry passing along rumors like the following:

1) A customer told one of our employees that Monolith is being purchased by a competitor.
2) Someone told me that I was supposedly attending meetings in Dubai.

To set the record straight: we are not being purchased, and while I would love to visit, I have never been to Dubai.

In the last few months we have done the following:

1) Received external investment from Evolution Capital – our A round: http://assurenow.io/evolutions-latest-growth-investment-provides-assurance/
2) Our partner Eirteic announced a win at MBNL, where AssureNow will consolidate five fault management platforms onto one AssureNow platform. IBM Netcool makes up a major part of the five platforms we are retiring; we also retired IBM Netcool at both enterprise and service provider accounts last year. http://assurenow.io/eirteic-selected-to-deliver-mbnl-service-assurance-project/
3) Monolith participated in the TM Forum Catalyst program in Nice in May, where, along with Eirteic, Galileo Software and Eir, we showcased customer-centric service management, Smart City/IoT service assurance and more: http://www.eirteic.com/customer-centric-service-assurance-illustrates-customer-centric-future-of-operator-environments/

Monolith’s AssureNow platform is uniquely positioned to support end-to-end service assurance, tools rationalization, and flow-through service assurance. I met with a colleague who has been in this industry for over 15 years. He told me emphatically that I, and Monolith, need to be much more truthful, bolder and more transparent about the current state of the industry. I know one competitor has claimed we are about to go out of business – again, not true; see above. The truth is that any vendor still proposing legacy, standalone, best-in-breed tools for fault, performance or root cause is effectively selling obsolete, dead-end software. The legacy frameworks are not positioned to manage the rapidly evolving SDN/NFV world. Without a unified platform – which we tried to build at Micromuse – there is no effective way multiple standalone tools can manage a hybrid cloud environment while also orchestrating resources, enforcing SLAs and providing real-time correlation. AssureNow can do that today.

The legacy fault, performance and root-cause tools are all on a dead-end street. AssureNow continues to evolve using our agile DevOps capability on a unified code set. How could legacy frameworks, built via acquisition, merge into one truly unified platform – not a marketecture, but a real unified code set? The answer is that if they could, they already would have. As the Randy Newman song goes, they’re dead but they don’t know it.

Smart City Service Assurance

Here at TMF Live 2016, I was fortunate to get more educated on the newest industry initiatives. At the Smart City Dublin forum, the subject was how municipalities can save money and better enable their citizenry. These opportunities are not being driven by the cities themselves, but by innovative service providers offering exciting new services. Cities have assets, like rights-of-way. Cities have advancing needs, like tourism-empowering free Wi-Fi. Cities have challenges, like shrinking budgets and stodgy policies. While some service providers may shy away from engagement, others see these challenges as opportunities for new products and revenues.

The concept is simple. Leverage the rights-of-way (lamp posts), engage your NEPs, and install a Wi-Fi network funded by advertising. Smart cities can share in the ad-based profits and provide new tourist-engaging services to grow the local community. Through this service, provide a portal to citizens and tourists alike, showing off the local digital economy. Provide multi-tenant access to other services, from garbage collection to power outage notifications, to enrich people’s knowledge and grow the city’s automation potential. Reduce the cost and redundancy of all the services (basic to advanced) that the city is chartered to provide.

Where does service assurance come in? The digital world is a unifying force. Providing a single pane of glass is common sense, but unfortunately not commonplace. Once deployed, the quality of a city’s services defines its brand. The analog and digital services alike will need to be assured. Proactive engagement is no longer a nice-to-have; it’s expected. Leveraging a service assurance solution with a proactive portal across all services will enable the smart city revolution.

Service providers, government and equipment manufacturers can be brought together to provide revenue-positive new services (e.g., Dublin). The question is: how will service providers assure the quality and engage the populace? The answer: Unified Service Assurance. Learn more about Unified Service Assurance with Monolith’s AssureNow.

Service Assurance Challenges of IoT

A common question of the day: “What are we going to do in the IoT world?” My typical response to service providers is, “Well, that was last week…” All kidding aside, we live in the connected generation. Network access is the new oxygen. The price to be paid is complexity and scale. A good reference for IoT use cases is this bemyapp article about ten B2B use cases for IoT.


But what needs to be discussed is how to group these use cases – what the common threads are. It’s best to categorize them into three buckets. The first is environmental monitoring, such as smart meters that reduce the need for human interaction. Tracking logistics through RFID is another common trend in IoT communities. The most common is client monitoring: in mobility, handset tracking and trending is common in CEM; in an access network, it’s monitoring the cable modems of millions of customers. Whichever category your use case falls into, the challenges will be similar. How do you deal with the fact that your network becomes tens of millions of small devices instead of thousands of regular-sized devices? How do you handle the fact that billions of pieces of data need to be processed, but only a fraction are immediately useful? How can you break the network down into human-understandable segmentations?

The solution is simple: Unified Service Assurance. With a single source of truth, you can see the forest for the trees. While the “things” in IoT are important, how they relay information and perform their work is equally important. Monitoring holistically allows a better understanding of the IoT environment – single-point solutions will not address IoT. Normalizing data enables higher scale while maintaining high reliability.
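To show what normalizing data from any source can look like in practice, here is a small sketch; the vendor payload shapes and field names below are invented for the example, not taken from any real device feed:

```python
# Sketch: map heterogeneous device reports onto one common record schema,
# so downstream correlation and analytics only ever see one shape of data.
def normalize(payload: dict) -> dict:
    """Convert a vendor-specific report to a common (device, metric, value) record."""
    if "meterId" in payload:            # e.g. a smart-meter feed (hypothetical)
        return {"device": payload["meterId"],
                "metric": "reading",
                "value": float(payload["kwh"])}
    if "modem" in payload:              # e.g. a cable-modem poller (hypothetical)
        return {"device": payload["modem"]["mac"],
                "metric": "snr_db",
                "value": float(payload["modem"]["snr"])}
    raise ValueError("unknown payload shape")

records = [normalize(p) for p in (
    {"meterId": "SM-1001", "kwh": "12.7"},
    {"modem": {"mac": "00:1a:2b:3c:4d:5e", "snr": 36.2}},
)]
print(records)
```

Once every source emits the same record shape, adding the next ten million devices is a matter of adding adapters, not rebuilding the pipeline.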

Now that the network has been unified into a single source of truth, operations can start simplifying their workload. First step: become service oriented. Performance, fault and topology data is too much to consume raw – it’s the services you must rely upon. How are they doing, what are the problems, how do you fix them, and where do you need to augment your network? Next up: correlate everything – you need to find the 1% of the 1% of the 1% to be successful. KQIs are necessary, because the trees in the forest are anecdotal information – the effect. Seeing the forest (as the KQI) allows you to become proactive, move quicker and be more decisive, because you understand the trends and what is normal. It’s time to stop letting the network manage you and start managing your network.

After unifying your view and simplifying your approach, it’s time to automate. The whole point of IoT is massive scale and automation, but if your service assurance solution cannot integrate openly with the orchestration solution, how will you ever automate resolution and maintenance? We all must realize that human-based lifecycle management is not possible at IoT scale. It’s time to match the value of your network with the value of managing it.

Learn more about Monolith Software’s AssureNow by scheduling a call today.

The Value of Realtime Digital Services

TMF Live 2016 wrapped up today, and as I take my Uber back to NCE airport, oddly enough I experience the value of real-time digital services. Like most people, I use the Uber ride-share service, and my reasons are the usual ones: convenience, price, quality. In my years coming to Nice for TMF Live, taxis have typically cost twice as much.

As we were driving, the driver’s phone beeped, telling him there was an accident up ahead and we needed to divert. Interestingly, my phone beeped as he said this, and I got the same message showing a red line up ahead. The driver, a long-time taxi cab driver, said this was one of the reasons he switched to Uber. Because other Uber drivers are constantly, autonomously reporting traffic (far more than cab drivers do), he spends more time driving and less time in traffic. He drives more customers and makes considerably more money. The customers are happier, and online bill pay means less hassle – he drives, and that is all he worries about. The cost of Uber? For him, nothing; the passengers cover that. He drives and gets paid. And he’s nice – he offered me a paper (quaint) and a free bottle of water before boarding.

So, to review… Uber is based in California, 6,000 miles and nine hours of time difference away. Using AWS hosting, it enables real-time, automatic cross-matching of traffic to make lives a little easier in Nice – the micro to the macro at work. This 60-plus-year-old driver, driving all his life, reaps the benefit. I pay an extra €2 against a 40% reduction in rates, and get a smoother ride in a newer car with a nicer driver – that is value for the customer. What makes this miracle possible? Real-time digital services. Uber and others like it are winning the battle by pushing real-time digital services over LTE, competing against taxi cabs with CB radios. As the newspaper industry has already learned, the taxi cab industry will soon become… quaint…

My question to you: what is your real-time service? What does it mean to your business? How do you assure that it continues to be real-time?

Contact Monolith Software today about AssureNow’s Unified Service Assurance.

P.S. Thanks, T-Mobile, for the included international roaming. Uber would not have been possible without you…

Supporting the New End-to-End Service Assurance World

Over the last 10 years, service providers have been striving to roll out the next generation of services that include voice, video, WiFi, wireless — and all the bandwidth customers can possibly digest. To accomplish this, providers need a more agile infrastructure to provision complex service offerings, across multiple platforms, hungry set-top boxes, smartphones and an army of tablets.

However, over time the service assurance base has atrophied. If you open the hood on any operations center, you’ll see network management tools that have been installed for 15-plus years: IBM Netcool, EMC SMARTS, Infovista, CA, and the list goes on. Before next-generation services were conceived, these tools worked fine for fault and performance management, as well as root cause analysis, within specific domains. But this infrastructure management approach is no longer valid and is incapable of meeting the needs of end users who want to access information that’s geographically dispersed over an extremely complex infrastructure. Service providers need end-to-end visibility across all elements of the operational support system (OSS) and business support system (BSS) to assure service delivery – they need their Service Assurance MoM (Manager of Managers).

It’s just common sense that providers cannot provision services if they have no idea of the operational integrity of each network device within each domain. Yesterday’s methods of trying to do this with legacy management systems and overlay networks will never cultivate the new services. Legacy tools will never be able to correlate the myriad problems across fault, performance and real-time service assurance. Without this, service providers will never realize the lucrative promises that billable, next-generation services are poised to deliver.

The lack of management visibility gets worse as modern techniques, which have proved a boon in traditional data center and enterprise networks, create an additional layer of obscurity. Case in point: virtualization inhibits network visibility – there is no view into fiber, copper, satellite or even WiFi networks. The typical management approach has been to buy best-of-breed tools to oversee each domain separately, but this has proven difficult at best. Let’s face it: legacy management tools cannot help next-generation network service providers because they are built to view IP solely in a software-contained, packet-switched silo.

MoM’s the MAN-agement

What’s needed to safely land a new generation of applications and services in consumers’ pockets is a unified Service Assurance MoM solution. A Service Assurance MoM-based solution provides end-to-end visibility and assurance correlation across any domain, whether it’s TDM, DSL, IPTV, 2G, 3G or whatever other future protocol architects can dream up.

This approach is not simply a shiny new graphical user interface (GUI). No, a Service Assurance MoM offers multi-tenancy at the interface level to show all devices – whether data-centric or virtualized. This is a brave new world of management that also includes runbook automation capabilities, allowing real-time creation of service assurance as well as the provisioning to support real-time services that can be turned up and down as needed.

The keys to next-generation service delivery lie within a Service Assurance MoM approach that also enables thresholds to proactively ensure Service Level Agreements (SLAs). The Service Assurance MoM approach helps ensure SLAs by enabling different levels of visibility to support an agile, digital content system for cable, satellite, fiber optics or any other digital content provider. After all, even large enterprise organizations are heavily engaged in digital services, and the complexity resides outside their data center walls – fiber, copper, WiFi, TL1, IP, UDP are all in the alphabet soup of protocols that need to be viewed as a single service.
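Proactive SLA thresholding can be as simple as a warning band above the contractual floor, so operations hears about trouble before the SLA is actually breached. A minimal sketch – the threshold values here are illustrative assumptions, not from any real contract:

```python
# Hypothetical availability SLA with a proactive warning band.
SLA_AVAILABILITY = 99.9   # contractual floor, percent
WARN_MARGIN = 0.05        # raise a proactive alert this close to the floor

def check_sla(availability_pct: float) -> str:
    """Classify a measured availability figure against the SLA."""
    if availability_pct < SLA_AVAILABILITY:
        return "BREACH"
    if availability_pct < SLA_AVAILABILITY + WARN_MARGIN:
        return "AT_RISK"   # still inside the SLA, but trending out of it
    return "OK"

print(check_sla(99.97), check_sla(99.93), check_sla(99.85))
# → OK AT_RISK BREACH
```

The AT_RISK state is the proactive part: it turns an SLA from a post-mortem report into a live signal operations can act on.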

Service provider, data center, enterprise, colocation or CLEC – the label does not matter, because all large networks are predicated on the delivery of services. Today’s service assurance needs a new management solution built on a single source code base to unify disparate network management platforms and create one single, holistic view into service performance and the issues affecting customer satisfaction. Organizations that offer these types of MoM solutions, such as Monolith Software, realize providers seek long-term cost savings while delivering better user and customer experiences.

Every Smart City Needs a MoM’s Love

Over the last 10 years, communications service providers have been striving to roll out the next generation of services that include voice, video, Wi-Fi, wireless – and all the bandwidth customers can possibly digest. To accomplish this, providers need a more agile infrastructure to provision complex service offerings, across multiple platforms, hungry set-top boxes, smartphones and an army of tablets.

Over time the service assurance base has atrophied. If you open the hood on any operations center, you’ll see network management tools that have been installed for 15-plus years. These tools include IBM Netcool, EMC SMARTS, Infovista, CA and the list goes on. Before next-generation services were conceived, these tools worked fine for fault and performance management, as well as root-cause analysis, within specific domains.

Service assurance manager of managers

This infrastructure management approach is no longer valid and is incapable of meeting the needs of end users who want to access information that’s geographically dispersed over an extremely complex infrastructure. Service providers need end-to-end visibility across all elements of operational and business support systems (OSS/BSS) to assure service delivery. They need a service assurance MoM (manager of managers).

It’s just common sense that operators cannot provision services if they have no idea of the operational integrity of each network device within each domain. Yesterday’s methods of trying to do this with legacy management systems and using overlay networks will never cultivate the new services. Legacy tools will never be able to correlate the myriad problems between fault performance and real-time service assurance. Without this, service providers will never realize the lucrative promises that the billable, next-generation services are poised to deliver.

The lack of management visibility gets worse as modern techniques that have proved to be a boon in traditional data centers and enterprise networks create an additional layer of obscurity. Case in point: virtualization inhibits network visibility – there is no view into fiber, copper, satellites or even Wi-Fi networks. The typical management approach has been to buy best-of-breed tools to oversee each domain separately, but this has proven difficult at best. Let’s face it, legacy management tools cannot help next-generation network service providers because they are built to view IP solely in a software-contained, packet-switched silo.

What’s needed is an approach that includes a unified service assurance MoM to deliver end-to-end visibility and assurance correlation across any domain, whether it’s TDM, DSL, IPTV, 2G, 3G or whatever other future protocol architects can dream up.

Catalyst shows the way

This is not a pipe dream. To prove it, Monolith Software has teamed with eir, Smart Dublin, Galileo, Liverpool University and MICTA to present the Customer-centric Service Assurance Catalyst at TM Forum Live! This proof-of-concept project demonstrates how a customer experience visualization platform can enable service providers to predict and manage incidents from inside and outside the network, offering rapid notification to impacted customers and faster incident resolution times.

This approach is more than just a new graphical user interface. Data is gathered from multiple participating systems as well as from external sources (for example, traffic, weather and flood warnings) to provide a graphical dashboard that visualizes network services from a customer’s point of view. Any potential incidents are quickly flagged in the service management center (or mobile app) and can then be managed directly from the dashboard, with responses automatically activated, status updated, resolution monitored, and customers and third parties immediately notified.

We are excited to demonstrate how enterprise, telco, smart city and Internet of Things environments can use a single data analytics and visualization platform to pre-empt and rapidly resolve incidents impacting service and customers. Visit the Catalyst in Nice for a demonstration.