The Most Popular and Fastest-Growing AWS Products https://www.datamation.com/cloud/the-most-popular-and-fastest-growing-aws-products/ Mon, 08 Feb 2021 23:52:33 +0000

Enterprise IT departments are increasing cloud usage at an exponential rate. These tools and technologies enable greater innovation, cost savings, flexibility, productivity and faster time to market, ultimately facilitating business modernization and transformation.

Amazon Web Services (AWS) is a leader among IaaS vendors, and every year around this time, we look back at the most popular AWS products of the past year, based on the percentage of 2nd Watch clients using them. We also evaluate the fastest-growing AWS products, based on how much our clients' spending on each product grew compared to the year before.

We’ve categorized the lists into the “100%” and the “Up-and-Comers.” The 100% tools are products that were used by all of our clients in 2020 – those products and services that are nearly universal and necessary in a basic cloud environment. The Up-and-Comers are the five fastest-growing products of the past year.

We also highlight a few products that didn’t make either list but are noteworthy and worth watching.

The “100%” Club

In 2020, there were 12 AWS products that were used by 100% of our client base:

  • AWS CloudTrail
  • AWS Key Management Service
  • AWS Lambda
  • AWS Secrets Manager
  • Amazon DynamoDB
  • Amazon Elastic Compute Cloud
  • Amazon Relational Database Service
  • Amazon Route 53
  • Amazon Simple Notification Service
  • Amazon Simple Queue Service
  • Amazon Simple Storage Service
  • Amazon CloudWatch

Why were these products so popular in 2020? For the most part, products that are universally adopted reflect the infrastructure that is required to run a modern AWS cloud footprint today.

Products in the 100% club also demonstrate AWS's strong commitment to integrating and extending its cloud-native management tools stack, so that external customers have access to many of the same features and capabilities AWS uses for its own internal services and infrastructure.

The “Up-and-Comers” Club

The following AWS products were the fastest growing in 2020:

  • AWS Systems Manager
  • Amazon Transcribe
  • Amazon Comprehend
  • AWS Support BJS (Business)
  • AWS Security Hub

The fastest-growing products in 2020 were squarely focused on digital applications in some form, whether converting speech to text and extracting meaning from language with machine learning (Transcribe and Comprehend) or protecting those applications and improving overall security management (Security Hub).

This is a bit of a change from 2019, when the fastest-growing products were focused on application orchestration (AWS Step Functions) or infrastructure topics with products like Cost Explorer, Key Management Service or Container Service.

With demand for data analytics and machine learning surging across enterprise organizations, services such as Comprehend and Transcribe let you gather insights into customer sentiment from customer reviews, support tickets, social media posts and similar sources.

Businesses can use the services to extract key phrases, places, people, brands or events and, with the help of machine learning, gauge how positive or negative those conversations were. This gives a company significant power to modify practices, offerings and marketing messaging to enhance customer relationships and improve sentiment.
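As a rough sketch of what that looks like in practice, the snippet below calls Amazon Comprehend through the AWS SDK for Python (boto3). The sample ticket text, the region and the simplified output handling are illustrative assumptions, not part of the original article:

```python
import boto3

# Comprehend is a regional service; the region here is just an example.
comprehend = boto3.client("comprehend", region_name="us-east-1")

ticket_text = "The new dashboard is great, but login is painfully slow."

# Overall sentiment (POSITIVE, NEGATIVE, NEUTRAL or MIXED) plus confidence scores.
sentiment = comprehend.detect_sentiment(Text=ticket_text, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])

# Key phrases and entities (people, places, brands, events and so on).
phrases = comprehend.detect_key_phrases(Text=ticket_text, LanguageCode="en")
entities = comprehend.detect_entities(Text=ticket_text, LanguageCode="en")
print([p["Text"] for p in phrases["KeyPhrases"]])
print([(e["Text"], e["Type"]) for e in entities["Entities"]])
```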

Worth Watching

The following products were new to our Most Popular list in 2020 and therefore are worth watching:

AWS X-Ray allows users to understand how their application and its underlying services are performing, so they can identify and troubleshoot the root cause of performance issues and errors. One factor contributing to its rising popularity is the growth of distributed systems such as microservices, which make traceability more important.

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Increased use of Athena indicates more analysis is happening using a greater number of data sources, which signifies companies are becoming more data driven in their decision making.
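As a small, hedged illustration of the pay-per-query model, the sketch below runs a standard SQL query against data in S3 via boto3. The database, table and results bucket are placeholders, and a production version would handle failures and paginate large result sets:

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start a standard SQL query over data already cataloged in Athena.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/queries/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes (simplified error handling).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[:5])  # header row plus the first few result rows
```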

A surge in the number of companies using EC2 Container Service and EC2 Container Registry demonstrates growing interest in containers and greater cloud maturity across the board. Companies are realizing the benefits of consistent/isolated environments, flexibility, better resource utilization, better automation and DevOps practices, and greater control of deployments and scaling.

Looking Ahead

For 2021, we expect there to be a continued focus on adoption of existing and new products focused on security, data, application modernization and cloud management. In our own client interactions, these are the constant topics of discussion and services engagements we are executing as part of cloud modernization across industries.

About the author: Joey Yore is Manager, Principal Consultants at 2nd Watch.

Seven KPIs for AIOps Teams https://www.datamation.com/artificial-intelligence/seven-kpis-for-aiops-teams/ Fri, 15 Jan 2021 17:22:42 +0000

Staffing levels within IT operations (ITOps) departments are flat or declining, enterprise IT environments are more complex by the day and the transition to the cloud is accelerating. Meanwhile, the volume of data generated by monitoring and alerting systems is skyrocketing, and Ops teams are under pressure to respond faster to incidents.

Faced with these challenges, companies are increasingly turning to AIOps – the use of machine learning and artificial intelligence to analyze large volumes of IT operations data – to help automate and optimize IT operations. Yet before investing in a new technology, leaders want confidence that it will indeed bring value to end users, customers and the business at large.

Leaders looking to measure the benefits of AIOps and build KPIs (key performance indicators) for both IT and business audiences should focus on key factors such as uptime, incident response and remediation time, and predictive maintenance so that potential outages affecting employees and customers can be prevented.

Business KPIs connected to AIOps include employee productivity, customer satisfaction, and website metrics such as conversion rate or lead generation. Bottom line: AIOps can help companies cut IT operations costs through automation and rapid analysis, and it can support revenue growth by enabling business processes to run smoothly with excellent user experiences.

KPIs to Measure AIOps

These common KPIs can measure the impact of AIOps on business processes:

1. Mean time to detect (MTTD): This KPI measures how long it takes for an issue to be identified. AIOps can help companies drive down MTTD by using machine learning to detect patterns, cut through the noise and identify outages. Amid an avalanche of alerts, ITOps can understand the importance and scope of an issue, which leads to faster identification of an incident, reduced downtime and better performance of business processes.

2. Mean time to acknowledge (MTTA): Once an issue has been detected, IT teams need to acknowledge the issue and determine who will address it. AIOps can use machine learning to automate that decision making process and quickly make sure that the right teams are working on the problem.

3. Mean time to restore/resolve (MTTR): When a key business process or application goes down, speedy restoration of service is key. AIOps plays an important role here, using machine learning to determine whether the issue has been seen before and, based on past incidents, to recommend the most effective way to get the service back up and running. (A simple way to compute MTTD, MTTA and MTTR from incident records is sketched after this list.)

4. Service availability: Often expressed as a percentage of uptime or as outage minutes over a given period, service availability is a KPI that AIOps can help boost through predictive maintenance.

5. Percentage of automated versus manual resolution: Increasingly, organizations are leveraging intelligent automation to resolve issues without manual intervention. Machine learning techniques can be trained to identify patterns, such as previous scripts that had been executed to remedy a problem, and take the place of a human operator.

6. User Reported versus Monitoring Detected: IT operations should be able to detect and remediate a problem before the end user is even aware of it. For example, if application performance or Web site performance is slowing down by milliseconds, ITOps wants to get an alert and fix the issue before the slowdown worsens and affects users. AIOps enables the use of dynamic thresholds to ensure that alerts are generated automatically and routed to the correct team for investigation or auto-remediated when policies dictate.

7. Time savings and associated cost savings: The use of AIOps, whether to drive automation or to identify and resolve issues more quickly, results in savings in both operator time and business time to value. These savings have a direct impact on the bottom line.
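For reference, here is a minimal sketch of how the mean-time metrics above could be computed from incident records. The record format and field names are assumptions for illustration; in practice these timestamps would come from your monitoring or ITSM tooling:

```python
from datetime import datetime
from statistics import mean

# Illustrative incident records with detection, acknowledgement and resolution times.
incidents = [
    {"occurred": "2021-01-04 02:10", "detected": "2021-01-04 02:14",
     "acknowledged": "2021-01-04 02:20", "resolved": "2021-01-04 03:05"},
    {"occurred": "2021-01-09 11:00", "detected": "2021-01-09 11:01",
     "acknowledged": "2021-01-09 11:04", "resolved": "2021-01-09 11:32"},
]

def minutes_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

mttd = mean(minutes_between(i["occurred"], i["detected"]) for i in incidents)
mtta = mean(minutes_between(i["detected"], i["acknowledged"]) for i in incidents)
mttr = mean(minutes_between(i["occurred"], i["resolved"]) for i in incidents)

print(f"MTTD: {mttd:.1f} min, MTTA: {mtta:.1f} min, MTTR: {mttr:.1f} min")
```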

These seven KPIs can be correlated to business KPIs around user experience, application performance, customer satisfaction, improved e-commerce sales, employee productivity, and increased revenue. ITOps teams need the ability to quickly connect the dots between infrastructure and business metrics so that IT is prioritizing spend and effort on real business needs. Hopefully, as machine learning matures, AIOps tools can recommend ways to improve business outcomes or provide insights as to why digital programs succeed or miss the mark.

This article is based on industry information provided by Ciaran Byrne, VP of Product Management at OpsRamp.

What is Text Analysis? https://www.datamation.com/artificial-intelligence/what-is-text-analysis/ Mon, 02 Nov 2020 22:48:53 +0000

Text analysis extracts machine-readable data from unstructured or semi-structured text in order to mine insight about trends and user sentiment. To accomplish this, it uses artificial intelligence, machine learning and advanced data analytics techniques.

The world is experiencing a rapid, exponential increase in information, especially unstructured and semi-structured data: think social media posts, customer emails, transaction records, survey questions, news articles and research reports, to name just a few. All of these sources contain text that can be a rich source of insights for businesses. This overabundance of information is both positive, creating endless opportunities in a data-driven economy, and negative, requiring significant resources and time to collect, study and make sense of it all.

Text Analysis: An Overview

Text analysis helps enterprises address this challenge.

Text analysis aims to overcome the ambiguity of human language and achieve transparency for a specific domain. Using various techniques, text analysis solutions analyze unstructured data in all kinds of texts in order to identify and draw out high-quality information that will prove helpful in various scenarios, from individual data points to key ideas or concepts.

A form of qualitative analysis, text analysis can be used to perform a multitude of tasks such as sentiment analysis, named entity recognition, relation extraction and text classification, allowing users to identify and extract important information from intricate patterns in unstructured text, then transform it into structured data.

Using text analysis in business marketing can help companies summarize opinions about products and services. When used to analyze medical records, it can connect symptoms with the most appropriate treatment.

Text Analysis vs. Text Mining vs. Text Analytics

Many people mistakenly believe text mining and text analysis are different processes. In fact, both terms refer to an identical process and often are used interchangeably to explain the method.

On the other hand, while text analysis delivers qualitative results, text analytics delivers only quantitative results. When a machine performs text analysis, it presents important information based on the text. However, when it conducts text analytics, it looks for patterns across thousands of texts, usually yielding results in the form of measurable data presented through graphs and tables.

For example, imagine you want to know the outcome of each support ticket handled by your customer service team. By analyzing the text of each ticket, you can determine whether the outcome was positive or negative. For this, you need text analysis. But if you want to know how many tickets were solved and how fast, you need text analytics.

What is Natural Language Processing (NLP)?

Natural Language Processing (NLP) is among the first technologies to give computers the capacity to extract meaning from human language. A form of artificial intelligence, NLP aims to teach computers to understand the meaning of a sentence or text in the same way humans do. In effect, NLP helps machines “read” text by mimicking the human ability to learn a language.

Over the past decade, this discipline has improved significantly and is found today in many widely used applications. Perhaps the most widespread are digital voice assistants such as Siri, Alexa and Google Assistant. With the help of NLP, these digital assistants can understand and respond to user requests.

How to Use Text Analysis for Your Business

There are many ways companies can take advantage of unstructured data through text analysis and NLP. Much can be inferred once text is broken into blocks that are easy to process automatically, providing insight into various aspects of a business, including marketing, product development and business intelligence.

Additionally, analyzing texts to capture data can help support various tasks including:

  • content management
  • semantic search
  • content recommendation
  • regulatory compliance

Text analysis can also be used by businesses to discover patterns, find keywords, and derive other valuable information, such as:

  • Market research through finding what consumers value the most
  • Summarizing ideas from unstructured data such as web pages, blogs, PDF files and plain text
  • Removing anomalies from data through cleaning and pre-processing
  • Converting information from unstructured to structured
  • Evaluating data patterns leading to enhanced decision-making

Text Analysis Techniques

Word Frequency

A technique that measures the most frequently occurring words or concepts in a given text using the numerical statistic TF-IDF (short for term frequency-inverse document frequency). This is often used to analyze the words or expressions customers use in conversations. For example, if the word “slow” appears most often in negative tickets, it might suggest there are issues to address with your customer service team's response times.
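As a rough, hedged sketch of this technique, the example below uses scikit-learn's TfidfVectorizer on a few made-up ticket snippets; the library choice and sample data are assumptions for illustration only:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# A handful of illustrative support-ticket snippets.
tickets = [
    "The app is slow and the support response was slow too",
    "Great product, fast delivery, friendly support",
    "Checkout page is slow and keeps timing out",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(tickets)

# The highest-weighted terms hint at what each ticket is really about.
terms = vectorizer.get_feature_names_out()
for row in tfidf.toarray():
    top = sorted(zip(terms, row), key=lambda pair: pair[1], reverse=True)[:3]
    print([term for term, weight in top if weight > 0])
```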

Word Sense Disambiguation

The process of differentiating words that have more than one meaning – a major challenge in NLP as many words can be interpreted several ways depending on context. For example, if the word “set” is found in a text, is it referring to the noun or the verb?

Summarization

A technique used to create a compressed version of a specific text. This is done by reading multiple text sources at once and condensing information into a concise format.

Information Extraction

Information is extracted from huge chunks of data: entities and their attributes are identified, and the relevant information is structured and stored for future use.

Information Retrieval

Extracting relevant patterns based on sets of phrases or words. This technique is used, for example, to observe and record user behavior.

Categorization

Texts are evaluated to identify topics, and assigned to business-relevant categories based on their content.

Clustering

A text-mining technique that can expand categorization by identifying intrinsic structures within texts and sorting multiple texts into relevant clusters for evaluation.
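To make the idea concrete, here is a minimal, illustrative clustering sketch that groups similar texts using TF-IDF features and k-means. The sample documents, the library and the choice of two clusters are all assumptions rather than a prescribed method:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "Refund requested for a duplicate charge",
    "Invoice shows a duplicate payment",
    "App crashes when uploading photos",
    "Photo upload fails with an error",
]

# Vectorize the texts, then group them; the number of clusters is an
# analyst's choice (or something to tune), not dictated by the data.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

for doc, label in zip(documents, labels):
    print(label, doc)
```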

Text Analysis: Today and Tomorrow

Text analysis is one of the most far-reaching enterprise technologies of the digital age, helping companies detect business and product problems – and address them before they grow into larger issues that damage sales – and gain insights into their markets, customers and competitors.

Today we’re seeing rapidly improving text-mining software that can be used to create large stores of structured, actionable information. These datasets can be extracted from internal or external sources and analyzed for networking, lead generation or intelligence-gathering purposes – like hiring a computer to act as your intelligence analyst or researcher, with greater speed and accuracy.

Like all technologies related to data science, text analysis is on a trajectory of exponential growth and innovation, enabling more businesses in almost any industry to make data-driven decisions and exploit the data-driven economy. Research suggests the text mining market is growing at a rate of over 18 percent per year, and could become a $16.85bn industry by 2027.

DPaaS: How Data Protection as a Service Helps Business https://www.datamation.com/cloud/dpaas-how-data-protection-as-a-service-helps-business/ Wed, 14 Oct 2020 08:00:00 +0000

Data Protection as a Service (DPaaS) plays a core role in business today, and here’s why: because all companies use data in many forms, protecting data is a top priority for every organization. But not every company can afford to invest in a skilled and well-resourced IT team. For them, DPaaS makes better sense both financially and technically.

Pushing this trend: the increasing migration of traditional databases and services to the cloud amid escalating cybersecurity threats.


DPaaS is typically offered as a cloud-based service designed to meet organizations’ data security and protection requirements while including options for resilient backup and recovery. The services are available via a subscription model.

Why DPaaS is Essential

A data-driven world generates an immense volume of data that needs to be processed, analyzed and used to help drive decisions. But data is vulnerable and requires elaborate protection. Risks include:

  • Data loss due to cybercrime, human error, IT failures or disasters such as the coronavirus pandemic, for which too few organizations were prepared.
  • Governance, risk management and compliance requirements, including data privacy considerations.
  • Rapidly increasing storage and backup demands, amid fast-expanding volumes of data.

Transitioning to the cloud means responsibilities for data security, backup and recovery are shared between the organization and cloud provider. Critical services that would previously have been handled by teams, systems and infrastructure on site are shifted to the cloud provider, but cloud providers are not responsible when someone in your organization makes a mistake or you are specifically targeted by malicious actors.

This is where DPaaS comes in, offering ease of acquisition, maintenance and management, and enabling services to be scaled up or down as demands evolve. By encompassing backup, storage, and disaster recovery, DPaaS enables a unified approach to protecting data.

DPaaS essentially consists of three primary services:

Backup-as-a-Service (BaaS): Offered as business software that uploads critical data via the internet and enables organizations to retrieve it efficiently and securely.

Disaster-Recovery-as-a-Service (DRaaS): One of the most critical IT services, DRaaS allows organizations to move their systems and applications to the cloud. In case of an emergency, the service enables data to be restored to the pre-crisis state.

Storage-as-a-Service (STaaS): The STaaS facility backs up to an on-premises data storage system. This allows the organization to retain a physical copy of its data at all times.

A resilient DPaaS architecture must provide an integrated solution across storage, processing, networking, geography and management. This begins with establishing a reliable, scalable storage layer that protects the data from hardware errors and prevents data loss from deduplication errors and ransomware attacks.

On top of that, because many operations cannot afford extended downtime, the backup and recovery process must be quick, modular and restartable. A resilient service should auto-detect failed backups and restart, never interrupting other processes. Similarly, the system should avoid downtime for software updates or bug fixes.

Because the network is the cause of most backup failures, a DPaaS architecture needs to mitigate exposure to network outages by transmitting as little data as possible. Features such as source-based deduplication, combined with a modular, restartable backup architecture, minimize the amount of data that needs to be transferred.
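As a rough sketch of the idea behind source-based deduplication (not any particular vendor's implementation), a client can hash fixed-size chunks locally and send only the chunks the backup target has not already stored. The chunk size, hash choice and stand-in data below are illustrative assumptions:

```python
import hashlib
from pathlib import Path

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB chunks; real products tune this carefully

def chunk_hashes(path: Path):
    """Yield (offset, sha256 hex digest) for each fixed-size chunk of a file."""
    with path.open("rb") as f:
        offset = 0
        while chunk := f.read(CHUNK_SIZE):
            yield offset, hashlib.sha256(chunk).hexdigest()
            offset += len(chunk)

def chunks_to_upload(path: Path, already_stored: set) -> list:
    """Return only the chunks whose hashes the backup target has not seen before."""
    return [(offset, digest) for offset, digest in chunk_hashes(path)
            if digest not in already_stored]

# Tiny stand-in for a real source file; 'already_stored' would normally be
# fetched from the backup service's chunk index.
sample = Path("sample_backup_source.bin")
sample.write_bytes(b"example data" * 1_000_000)
new_chunks = chunks_to_upload(sample, already_stored=set())
print(f"{len(new_chunks)} chunks need to be transferred")
```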

While network outages are one concern, another is the failure of datacenter hardware itself. Backup processing therefore should not be tied to one datacenter; instead, the service should store data across multiple datacenters and geographies, keeping backups safe in the case of localized disasters.

In addition to the technical considerations, as with so many business services, two principal challenges associated with DPaaS emerge: cost and efficiency.

As a subscription service, DPaaS enables organizations to choose the option that best suits their requirements at an affordable price. However, organizations must first confirm the efficiency and effectiveness of the service. It is also essential to evaluate whether your data and policies are a fit for a DPaaS solution, and to understand what DPaaS can and cannot do for the security of your cloud data, as well as what challenges you may need to deal with during adoption or when you need to recover data after a disaster.

DPaaS Demand is Surging

With the Covid-19 pandemic transforming how people work, more organizations are migrating to the cloud – if they haven’t already, it’s a safe bet they will do so in coming months and years. The migration will demand they invest in a robust and reliable DPaaS. Predictions vary: global market research firm, Allied Market Research, expects the DPaaS market to grow to $28.87 billion by 2022, with a compound annual growth rate of 31.5% between 2016 and 2022. Transparency Market Research, meanwhile, projects the market to grow to $46 billion by 2024.

Despite the variations in the projections, the one constant is that substantial growth will drive the sector. DPaaS is already a well-known acronym among most organizations.

While the growth trends point to a strong play for the vendors, their challenge will be differentiation as their services become commoditized. That means DPaaS providers must offer compelling value and a clear roadmap for their development as cloud technology evolves.

While the subscription model is cost-effective, there is a pressing need for providers to rationalize costs and offer their services at a price that allows small and medium enterprises to benefit from the technology. Most importantly, with working from home becoming “the new normal,” organizations will need to secure and protect the data buzzing through the fiber optic cables connecting employees in diverse geographic locations.

Leading DPaaS Providers

Amazon Web Services: The largest cloud service provider is also a leading DPaaS vendor. Amazon Simple Storage Service (S3) offers highly scalable and durable object storage with a web service interface for storing and retrieving data securely from anywhere over the internet. (A brief example appears after this list of providers.)

Hewlett Packard Enterprise: The company’s Data Protector offers a dynamic, adaptive data backup and recovery software solution. The service is a cost-effective and reliable protector of information.

Clumio: Clumio earned its reputation as a capable SaaS data protector for an all-cloud ecosystem. The Santa Clara, California-based company manages complex cloud-related issues and delivers value by focusing on scale and the elasticity of cloud services.

Veritas Technologies: Veritas offers storage management software and backup services. The Gartner Magic Quadrant 2020 for Data Center Backup and Recovery Solutions names Veritas a leader in the domain.

Commvault: A global leader in enterprise backup and recovery services, the company promotes its experience helping organizations become future-ready. With a customer support satisfaction score of 98%, according to the Gartner Critical Capabilities for Backup and Recovery, Commvault is a much sought-after DPaaS provider. The Tinton Falls, New Jersey-based company promises to prevent costly data loss scenarios, delayed recovery SLAs, inefficient scaling and segregated data silos.
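For illustration, storing and retrieving an object in S3 through boto3 looks roughly like the sketch below; the bucket name, object key and sample payload are placeholders, and a real backup workflow would add encryption, versioning and lifecycle policies:

```python
import boto3

s3 = boto3.client("s3")

# Store an object (the bucket and key are placeholders).
s3.put_object(
    Bucket="example-backup-bucket",
    Key="backups/2020-10-14/orders.csv",
    Body=b"order_id,amount\n1001,49.99\n",
)

# ...and retrieve it later, from anywhere with credentials and connectivity.
obj = s3.get_object(Bucket="example-backup-bucket", Key="backups/2020-10-14/orders.csv")
data = obj["Body"].read()
print(len(data), "bytes restored")
```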

 

The Future of IT Service Management https://www.datamation.com/applications/the-future-of-it-service-management/ Thu, 08 Oct 2020 08:00:00 +0000

By Myles Suer

IT Service Management (ITSM) has been around for a long time. When I joined Peregrine Systems in the mid-2000s, I learned – to my amazement – that the embedded database for Peregrine’s Service Manager was nonrelational. Peregrine was founded in 1981, a few years before IBM brought DB2 to market. ITSM took a clear step forward in 1989, when the British government standardized many ITSM concepts and processes with the introduction of ITIL. ITIL itself has been updated roughly every seven to eight years.

This author, for example, was a reviewer for ITIL Version 3. Version 4, introduced just last year, aims to make it easier for organizations to integrate ITSM with DevOps, Agile and Lean ways of working. The question is: where does ITSM go from here? That is the question I recently posed to the #CIOChat.

How Does Public Cloud Usage and DevOps Change ITSM?

  • CIO Rick Osterberg believes “the biggest win of deploying ITSM has been getting everyone onto a common language. In a cloud/DevOps world, you still need everyone to understand the difference between a P1 incident and a P4 service request.”
  • Former CIO Joanna Young agrees and suggests “the ITSM basics remain relevant. I look at them as good guidance and foundation.” With this said, Young notes that when it comes to ITSM system implementations, “having ITSM built into the delivery process encourages/forces the conversation about ongoing operational support.”
  • CIO Carrie Shumaker agrees with Young. She insists it is “still critical to understand all the components, dependencies, and relationships. Also, it is critical to have processes and communications for significant incidents. Finally, changes still happen, and still create excitement.”
  • With this said, CIO David Seidl claims, “where you do ITSM, and the specifics of how you do it may change, but I think the core concepts remain. The same sort of changes you need to make for agile you likely have to make for DevOps, and you’ll need to do it again when we make another change. This assumes everything you’re doing uses DevOps, which frequently isn’t true. Instead, a lot of the time you’re doing DevOps as a container in an ITIL-esque organization, or some other hybrid/componentized model.”

DevOps Transforms How ITSM Looks

  • CIOs seem to believe DevOps transforms what ITSM looks like. Seidl claims that “while the ticketing side of ITSM may still exist, the process it feeds and how and where capability is deployed will have a lighter governance structure.”
  • For CIO Jason James, “we have moved from server sprawl to VM sprawl and now to cloud sprawl. ITSM can effectively be used to tie back departmental requests and approvals to help understand cloud consumption when the first confusing bill comes in from the vendors. As a result, ITSM today helps support your internal auditing.”
  • Former CIO Isaac Sacolick takes a different point of view. He contends that “help desk has largely been about handling end user computing.” However, he believes, “as digital transformation has made technology/data business critical, the importance of the service desk has only increased. And with COVID, ITSM has become mission critical.” Having said this, Sacolick suggests that change management must now be a part of transformation management. Additionally, the Configuration Management Database (CMDB), which has always been a mess, is “almost untenable when factoring in cloud elastic compute, serverless, and SaaS.” Given this, Sacolick says, “ITSM should support users, help with applications, and compensate for poorly implemented/selected technologies.”
  • Splunk’s Andi Mann said in response, “we used to pretend opening a ticket would start to solve a problem. I’m not sure that was ever really true, but certainly isn’t anymore. DevOps solves problems by swarming them with data and with people. Today, the Service Desk has become a system of record.”
  • With this said, Forrester’s ITSM/DevOps Analyst, Charlie Betz, suggests the following:

1. Desktop support (edge/end-user compute) will always be a thing.

2. Information management is a big problem for large-scale digital, but a monolithic CMDB has never been the answer. There are subsets of the problem where a CMDB is useful.

3. Change management, as a mostly automated process, needs to stay.

4. The Change Advisory Board is not essential and can be done away with if legitimate coordination is handled elsewhere. For example, it can be a standing item in a Scrum of Scrums agenda.

Historically, ITIL broke ITSM into Discrete Processes. Is a More Systematic View Needed?

  • Mann compared ITSM, at this point, to 20th-century time-and-motion studies. He said, “I feel like ITSM was created for a world of monoliths because it was built upon a Tayloristic data center. It worked for that. But I feel it doesn’t work as well for a world of cloud, DevOps, microservices, analytics, remote, edge, without a mass of duct tape and baling wire.” Young, too, finds “Taylorism and the Hawthorne Effect outdated notions but still solidly entrenched within many IT organizations.”
  • Given this, CIO Jason James says “a modernization of ITSM is required. There needs to be more automation to allow for greater self-servicing, including cloud provisioning when it is needed.” Additionally, “all of the workflows have to be designed to support responsiveness and eliminate unnecessary approvals and delays. The need for an AI-driven ITSM is there, but we still have a way to go to make it real.”
  • Seidl says in response he has “always looked at ITIL as useful language, not necessarily completely distinct silos. The concept versus the reality of implementation means that those are blurred, and the language is useful as a handle to make sure we’re somewhere close when acting on it.”
  • Osterberg takes fundamentally the same position, when he says, “it’s still all about language. It doesn’t matter what your backend is, you still have interruptions, deeper problems, changes, and transactional requests that all intersect and overlap. You need to have a shared language to talk about all of it.”

ITIL Overcomplicated Things?

  • Sacolick believes “ITIL overcomplicated things with too much jargon and process. The tools today, especially integrated DevOps and Agile along with AIOps, greatly simplify incident support. Agile isn’t about scaling up. It’s about culture and mindset. Organizations need to be making smart decisions about where agile teams can self-organize versus what standards are in place.” For this reason, he says that “ITSM should focus upon the big rocks in a cloud world. And not send just a chatbot to the rescue. Self-service things properly.”
  • Given this, James suggests that “ITSM must be highly automated and mobile-ready. Today, many of the functions of ITSM can be done with bots. These include password resets, equipment provisioning, and basic Q&A. These all can be automated so the Service Desk team can focus on more complex issues. Many in-place ITSM solutions still lack these capabilities. Chatbots, if effectively implemented, can be much less painful than opening a ticket, waiting on hold, or talking to someone. The few times I have interacted with Amazon in the last year have been via their bots, and each time my issues were corrected quickly.”

In terms of what functions continue to belong in ITSM, I received varying responses:

  • Young said “ITSM needs to have master data management and integrated processes across problem, change, service request, asset, supplier, and configuration.” However, Pitt narrows the list to “incident, problem, change, and request.” Taking a more integrative view of things, CIO Carrie Shumaker says “incidents become problems and lead to changes; changes cause incidents, and round and round we go. I’ve considered the unifying factor to be the service, or at times, the CMDB element.”
  • With respect to change management, Betz says “the problem is we equate change management with the change advisory (sometimes approval, yuck) board. The CAB is an inefficient, cadenced, synchronous, face to face dedicated communication channel. There are better approaches for solving the coordination problem.”
  • With this said, CISO Michael Oberlaender suggests that adding “an end-to-end concept that focuses on the value creation/chain is definitely needed. We do too many things in ITSM that are not adding enough value.”

Is There an Expansive Role within ITSM For a Catalog of Consumable Digital Business Services?

  • Seidl says “for some organizations, sure. The trick remains to pick and choose components and models that work well for your organization, your goals, and your capabilities. I’ve seen more organizations fail to roll out ITSM and more succeed figuring out how to work.”
  • CIO Martin Davis believes it depends on how you define ITSM. Should its scope be limited to helping people with IT-related problems and services, or is it wider than that? Where is the boundary? However, with the business becoming IT and IT becoming the business, “the catalog is a great approach if you have a lot of discipline and strong vision. With this said, it can go very wrong if you’re disjointed and the kind of company that makes exceptions for everything. You need to be a process-driven company to make this a success.”
  • Young has a similar perspective and says “potentially. However, many organizations are not even close to being able to tackle or afford this. For them, DevOps and agile are still nascent or wish-list items. The question is what vendors are doing to provide bite-sized pathways, particularly for small- to mid-cap organizations.”
  • Likewise, James sees “the potential for ITSM to evolve and interact with more systems and cloud services to provide more services all the while providing greater insights and responsiveness.” An open question remains. Does the service catalog become the catalog of APIs and consumable business services described in Jeanne Ross’s Designed for Digital?
  • Sacolick suggests that “IT has been trying to develop a catalog of consumable digital business services through many technology generations from before SOAP through Microservices. I believe that it won’t happen at scale until there are easy buttons.” A vendor opportunity?

Parting Words

Clearly, many aspects of IT operations are experiencing change as more and more workloads move to the public cloud. ITIL and ITSM are no different. CIOs see the opportunity for ITIL and ITSM to do more, and over the next five years, ITSM will experience change.

  • According to Dion Hinchcliffe of Constellation Research, ITSM must and will evolve to be well reconciled with DevOps. This means being inclusive of all IT, including SaaS, public cloud, and shadow IT. According to Dion, ITSM will become a proactive support and management network while enabling choice and competition. Doing this will create a highly usable approach, including, potentially, an expansive notion of the service catalog.

ABOUT THE AUTHOR:

Myles Suer is the Head of Global Enterprise Marketing at Boomi. He is also the facilitator of the #CIOChat and the #1 influencer of CIOs according to LeadTails. He is a top 100 digital influencer. Among other career highlights, he has led a data and analytics organization.

Finding a Career Path in AI https://www.datamation.com/artificial-intelligence/finding-a-career-path-in-ai/ Mon, 05 Oct 2020 08:00:00 +0000

By Dr. Feiyu Xu

Over the past decade, artificial intelligence (AI) has captured the imaginations of consumers and enterprises. AI is seen as the key to unlocking fully automated business processes and smart data. Given the significant role that AI will play in shaping intelligent enterprises of the future, it is no surprise that AI is attracting an increasing number of students.

For graduates looking to get into intelligent technologies, the good news is that business leaders in 2020 are increasingly embracing digital transformation. As the need for enterprises to gain actionable insights faster becomes more important than ever, businesses have their eyes set on AI to implement pilot transformation projects.

If you are looking for a job in AI, below are some insights to help navigate this seemingly nebulous field.

Identifying the Right Role

AI is still a nascent field that lacks rules and de facto standards; as such, there isn’t much conformity when it comes to job titles or scope of work. Deciding which role is the most appropriate can be challenging. However, there are keywords to look out for to determine the core skills and expectations. The most frequent job titles refer to data, machine learning (ML), or AI generally, such as data scientist, AI expert, AI research scientist, AI data analyst, AI application engineer, ML engineer, ML scientist, and data annotation expert. Other job postings call for experts in a subset of AI, such as natural language processing expert, computer vision expert, AI games engineer, AI UX designer, or multimodal UX engineer.

Skills Required for Each Job

The most common jobs in AI are data scientist and AI expert, though these titles shed little light on what skills are required to excel. At some companies, data scientists are the people who curate, prepare, and clean data that is then leveraged by the ML experts for modifying and improving AI algorithms and models. In other cases, data scientists are tasked with solving business problems by drawing from data and using the appropriate tools from traditional data analytics or advanced machine learning to build models that can then help organizations overcome obstacles and hurdles.

For AI expert roles, applicants are typically expected to have an acute understanding of machine learning tools that can be stood up to glean important insights from available data. In these jobs, AI experts are very similar to data scientists, although they may also be expected to construct new AI applications.

Job descriptions that emphasize machine learning usually add ML tools into the mix for developing new solutions by analyzing structured or even unstructured data. In these roles, the candidate is usually not required to have experience in the architecture and coding of software products, but a strong command of an advanced scripting language is a prerequisite.

Finding the Right Path

There are two considerations when thinking about carving a career in AI. First, candidates must decide what type of work environment they prefer. It is important to keep in mind that a team-first environment that encourages personal growth should outweigh opportunities for rapid upward mobility. The first few years are essential to gaining vocational skills. Aspiring AI experts all begin their career with an entry-level job, an integral step as graduates usually start on teams with experienced professionals that provide valuable insights and skills.

The second component in building a successful AI career is grounding the job to a central purpose. AI has a multitude of use cases today, especially when organizations are able to achieve AI at scale. Making sure a role has an ethical through-line will allow candidates to reap commercial benefits while also making significant societal contributions. AI at scale comes with fundamental ethical and social responsibilities, and AI professionals must successfully traverse all of the regulatory and ethical structures that surround the technology. Done correctly, new graduates can find themselves on a path to designing new materials or predicting weather patterns and many other jobs that provide an essential societal function.

About the Author:

Dr. Feiyu Xu, Global Head of Artificial Intelligence, SAP

CIOs Discuss the Promise of AI and Data Science https://www.datamation.com/artificial-intelligence/cios-discuss-the-promise-of-ai-and-data-science/ Fri, 25 Sep 2020 08:00:00 +0000

By Myles Suer

A few years ago, I asked CIOs about data science and it turned into a yawner of a discussion. However, in the last few years as chief data officers have made their mark at more and more enterprises, CIOs have needed to build their data chops. Given this, it was time to assess where CIOs are today.

To do this, I ran a #CIOChat on AI and data science. From the discussion, it was clear CIOs are spending more time considering the “I” part of their titles. As a result, they understand not only the business potential of driving analytical organizations, but also the gaps that need to be closed to deliver it.

Where do You See the Biggest Promise in Your Industry?

CIO Wayne Sadin started the conversation by saying “we need to make our employees smarter by helping them quickly zero in on making better decisions, especially when the data is uncertain or incomplete.” He went on to say that “because AI is good at pattern-matching and remembering lots of facts, it seems reasonable that it can help most organizations.” To succeed, Sadin wants “software that can make inferences, detect patterns, and identify anomalies.” Personally, Sadin sees “AI as augmented intelligence that can make employees and clients (appear) smarter.”

For these reasons, former CIO Tim McBreen claims that “pretty much every industry” needs AI and ML to perform thinking tasks while freeing up staff to derive new, value-based information from current data as well as data collected through AI or ML. “This will be huge in distribution, logistics, transportation, etc. A number of companies have already made it part of their DNA. Where we need to see it grow is in financial and insurance services beyond traditional warranty related services.”

Meanwhile, CIO Jason James claims “we will see advancements in patient monitoring in healthcare from advancements in AI. Areas such as wound monitoring, blood sugar predictions, and infection prevention are already seeing solutions brought to market.”

CIO Milos Topic claims that “AI will initially see success in repetitive, high volume requests that would save everyone’s time and enable people to focus on more creative and innovative things.” For higher education, CIO Carrie Shumaker favors AI for “answering high volume questions such as dates, deadlines, FAQs, locations, etc.”

She sees value coming through real-time, accurate answers, even off hours, with an interface that doesn’t judge you for not knowing the answer already. CIO Paige Francis also sees potential for AI “around customer service, rapid responses, repeat transactions/processes, and equitable remote hands-on experiences and visits.”

Concluding this conversation, former CIO Tim Crawford suggested, “we have long-since passed the point where managing data through traditional means is possible. Automation and intelligence are key. If you look at Amazon, machine learning is part of their core…not an add-on to consider. It’s part of their DNA to business operations and has been for a while.”

What Types of Problems are Best Suited for AI/ML/Data Science?

Crawford believes that “AI/ML/Data Science is broadly applicable across the enterprise. With the amount of data increasing, we can expect to see these technologies more widely used. Cybersecurity is a huge opportunity for IT.” McBreen agrees and takes it a step further, saying it “might be a savior in cybersecurity. People can’t keep up with the way it runs today.”

CIO Steve Athanas also sees this opportunity for AI/ML/data science when he says it is “for the problems where humans add too much bias or cannot process data impartially. The thing that really interests me about AI in cybersecurity is the real-time nature of correlating tons of data that no human could ever do. Think about crowdsourcing security data from trillions of transactions in real-time and applying that in your infrastructure. The thing that helps is that unless you give the algorithm the information it cannot see people. AI doesn’t see color, mannerisms, attire, etc. and doesn’t make decisions based on factors that aren’t germane to the situation.”

McBreen, in contrast to Athanas, says, “we will have to watch for human bias also in the developers of the AI or ML engine. Either one can get way off track based upon bias and provide long range answers that are way off base. You need reasonability checks for both ML and AI.” Meanwhile, James notes that, if you listen to vendors, their solutions will solve all problems: shoes not fitting? AI problem. Deliveries running slow? ML will fix it. Perhaps the question is not which problems these technologies are best suited for, but which leaders are willing to make the necessary personnel and data changes.

CIO Milos Topic likes these tools, however, for “larger volume, things that don’t scale as easily with one-on-one type of support.” Sadin gets more specific and discusses problems with “large datasets and pattern recognition: helping people find the zebras (“when you hear hoofbeats, think horses not zebras” was part of medical education because a new doctor couldn’t know enough about everything).”

As an example, former CIO Mike Kail sees the opportunity to use these approaches for financial behavior and transaction monitoring. Shumaker says “here there is too much data and too many data streams for a human to quickly and accurately assess. I think AI/ML often overpromises, though – it can be dirty, biased and infer causation where none exists.”

Fawad Butt, Chief Data Officer at United Healthcare & Optum, says that in his experience, “while AI is being thrown around as the panacea, it does have some useful applications. In pattern recognition, simulating complex environments, recommendations etc. AI is for real. To be clear, the algorithms aren’t new, but we have data and compute today.”

What are the Biggest Things You Need to Put in Place to Establish Proficiency at AI/ML/Data Science?

Sadin believes “AI isn’t good with bad data.” He also suggests that “data cleansing represents a good problem domain for AI.” Sadin goes on to say he agrees with the importance of getting to clean, consistent data: “To make AI/ML help us, we need to work with dirty data and identify, isolate, correct data issues. Intelligence should mean the ability to reason in the real world of inadequate/incomplete/inconsistent data.”

CIO Adam Martin agrees and says “dirty data inside of organizations is a huge problem for sure.” For this reason, Sadin suggests that “the elephant in the room is dirty data. That dirty data is our own fault. Much of that data comes from applications that we developed without sufficient consideration for how its digital exhaust might need to be used in the future. Another problem that we created was technical debt! Much of what an organization classes as dirty data comes from data entering from outside the organization. Additionally, there is a lack of interoperability – semantic even more than syntactic.”

CIO Jason James agrees with Sadin when he suggests, “data is like oil in the sense that it is unusable unless it’s refined. Much like oil, it is expensive to store when it is not used. Dirty data is extremely common across all industries. Do you want the data equivalent of kerosene or diesel fuel? Both come from refinement, but you have to know what you want from data.” Sadin agrees and says “data, being like uranium, has a half-life.” Sadin suggests that “the ability to identify the half-life for a class of data is an important skill.” James agrees and claims “there is a data lifecycle. Data lives, it’s used, it ages out, and eventually, it dies. Retaining data forever can also come with risk in cases of breach or data exfiltration.” As Tom Davenport put it several years ago, “you can’t be really good at analytics without really good data” (Analytics at Work, Tom Davenport, page 23).

Getting to Data Sufficiency

In terms of getting to data sufficiency, former CIO Tim Crawford says, “the biggest challenge isn’t in understanding the technology. It is in understanding your business. Understanding your business provides the necessary context for understanding your data.” For this reason, Butt says, “do the basic stuff first. AI isn’t something a company typically does, it is something a company typically enables via data management, data governance, and other friction reducing approaches.”

This means that organizations need to think about their data processes. CIO Carrie Shumaker, for example, says, “create data definitions and clean data. Document your desired questions/outcomes. Clarify your privacy policies. Note that none of this is really technology.” McBreen suggests, “besides governance, you need to hire extremely good talent that truly understands all aspects of data science. Too many times, I see people that are barely good enough at data warehouse or reporting being thrown into this area.”

This means, for James, that “most organizations need to bring in new resources with experience to bear. This includes the ability to spend the time and investment in upskilling current staff. There is a limited talent pool for those with proficiencies in these technologies in most industries.”

What Problems are You Having Integrating Your AI/ML/Data Science teams?

CIO Steve Athanas says he “isn’t really close to this yet, but we are working jointly on a few projects with the high-level goals of getting better alignment between the teams. The biggest challenge now is aligning the shared goals of security and stability with apps and data consumption.” Topic says his organization doesn’t have teams dedicated to these initiatives yet, but he has a clear understanding that expectations and communication are always of great importance.

Continuing with this thought, James says, “many organizations haven’t even gotten to that level yet. They are still in the phase of determining what issues they are trying to solve and which solutions best fit.” To get after this, CIO Paige Francis says the problem is training everyone to learn and embrace these tools: “For that, we need less specialized language and jargon. Talk about how the sausage is made in the kitchen please, not the dining room.”

Parting words – Upcoming Webinar 

CIOs are clearly getting after data. There is work to do and partnerships to build to make data valuable, and data needs to be refined, just like oil, to be useful. With the right top leadership, it is possible to build a data-oriented organization. For many, this will be a big step forward. To get more perspective on data and analytics, please join us for our panel of experts on October 2nd – including myself, Tom Davenport, Marco Iansiti, and Dion Hinchcliffe – “Data Analytics for Competitive Advantage.” Register here.

ABOUT THE AUTHOR:

Myles Suer is the Head of Global Enterprise Marketing at Boomi. He is also the facilitator of the #CIOChat and the #1 influencer of CIOs according to LeadTails. He is a top 100 digital influencer. Among other career highlights, he has led a data and analytics organization.

6 Tips for Selecting the Best Cloud for SAP https://www.datamation.com/cloud/6-tips-for-selecting-the-best-cloud-for-sap/ Mon, 14 Sep 2020 08:00:00 +0000

I have worked on SAP engagements for enterprise customers for almost 30 years. While SAP has continued to develop application features and content during these years, the hosting of SAP has generally followed the same classic selection process as any other application: What’s the service level? What’s the cost? Who has the happiest customers?

However, with the advent of public cloud (IaaS), those tried and tested criteria no longer give customers an accurate evaluation of each option. As such, below I’ll look at some of the key criteria SAP customers need to include in their evaluation of hosting options.

Cost


Of course, cost comes first in most scenarios. Nothing happens in an enterprise without a good business case. However, at first glance, negotiated costs can be deceiving. Enterprise agreements, short-term discounts, migration funding and the like can all muddy the waters when it comes to getting a clear picture of the pricing you are signing up for. To best predict what future costs will look like, it’s important to understand the hyperscaler’s attitude toward cost and then extrapolate from its pricing history.

Additionally, hyperscaler infrastructure brings the great benefit of metered charging, where you only pay for what you use, resulting in variable costs. While this is a very good thing in general, it can cause headaches for procurement and necessitate new processes for IT to properly manage these variable costs. When selecting a provider, you need to understand which hyperscaler and partner can best help you see and control ongoing metered costs.
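To make the variable-cost point concrete, here is a small, purely illustrative calculation; the hourly rate and usage patterns below are assumptions, not actual AWS pricing:

```python
# Illustrative only: the rate and hours are placeholders, not real pricing.
hourly_rate = 0.20          # $/hour for a mid-size instance (assumed)
hours_always_on = 24 * 30   # an SAP sandbox left running 24/7 for a month
hours_business = 12 * 22    # the same sandbox run 12 hours/day on ~22 business days

print(f"Always-on:      ${hourly_rate * hours_always_on:,.2f} per month")
print(f"Business hours: ${hourly_rate * hours_business:,.2f} per month")
```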

Reliability

Nowadays, we expect public cloud to be more resilient than on-premises infrastructure. And while this is generally true, not all clouds are equal – especially for applications such as SAP. You will need to evaluate the amount of downtime each hyperscaler has experienced over the last 12-18 months to get a sense of how they compare. SLAs are one thing – historic performance is a much better guide.

Publicly published statistics on hyperscaler downtime show that AWS fares far better than Azure and better than or similar to Google Cloud Platform. SAP, as we know, is very sensitive to downtime – especially unplanned downtime. Choosing the most stable platform is an important part of the selection criteria for all your systems, but particularly for SAP, given its criticality to the business.

Speed of Innovation

The best innovation is happening in the cloud these days and, as everything is or will be in the cloud eventually, innovation and speed to innovation need to be an integral part of your IT road map for the next 10 years at least. Right now, AWS is the leader in getting new innovations and new ideas to market quickly. Azure categorizes itself as a “fast follower,” which is an important but safer position in the market. Google, while very good at what it does around data and other categories, does not display the same customer obsession and innovation focus in its cloud capabilities as its competitors.

Why does this matter? When looking at innovation, particularly the speed of innovation, you also need to consider the technology adoption cycle: the timeframe from when a new technology is introduced to when it is ultimately retired. When the adoption cycles among hyperscalers differ by one to two years, this becomes a critical differentiator. Some would say that AWS is already one to two years ahead of its competitors, meaning that a technology AWS introduces may have been released, run its cycle and been retired by the time it reaches other cloud providers. Selecting the most innovative platform is critical for any long-term strategic decision.

Performance

AWS has always led the way on the most performant technology, in both storage and compute. More recently, AWS has moved to launching its new instance types on the Nitro hypervisor, which takes the hypervisor load off the host and gives workloads access to virtually all of the underlying compute resources. Nitro is, in effect, a dedicated add-on component alongside every VM rather than an overhead on it. This allows for unparalleled performance.

Additionally, AWS is innovating in its own chips and chip design, rolling out its Graviton-based instance families, which have already shown themselves to be not just cheaper but higher performing than instances built on other providers' chips. This gives every indication that AWS will continue to lead the way on performance. When running SAP, one of the biggest complaints end users typically have is a lack of performance. Consistent performance, and performance when you need it, is one of the biggest benefits IT departments can give their customers, so choosing the most performant platform for your systems is table stakes.

Ecosystem

Another benefit of public cloud is its open APIs: publicly available application programming interfaces that developers in offices (and garages and living rooms) all over the world can build against. Hyperscalers and their partner ecosystems thereby add a significant amount of additional innovation that customers can access directly. As a result, we consistently see brand-new use cases for BI, speech, chatbots and other technologies that integrate very simply with public cloud.

This reinforces the point that public cloud is the platform best suited for future innovation. It also suggests that the amount of innovation is directly related to the number of partners in a hyperscaler's ecosystem. AWS has, by far, the most, and that matters if you want access to these third-party capabilities: partners will usually enable them for AWS before any other hyperscaler. The more customers there are on a platform, the more partners develop for it, which in turn creates more traction for new customers. This "network effect" is something AWS has cultivated for 10 years, and it is very difficult for others to catch up on.

Automation

Ultimately, automation is the most important secret ingredient of them all. It not only lets you do things automatically, remotely and quickly, but also with higher quality. Quality builds are essential for installed software systems like SAP.

When SAP is run on-premises in the classic way, most teams spend their time trying to "keep the lights on," maintaining and fixing things manually. The downside is that people make mistakes. Manual steps are inherently risky, and you can end up with situations where Dev has a different kernel patch version than COS, which has a different version than Production. Suddenly, you get unexpected defects when workloads hit production. The way to avoid this is automation. Automation removes manual errors and ensures a repeatable, reliable process for both the build and the maintenance of the SAP landscape. That higher quality reduces the noise in the environment and reduces the work and cost of maintaining the system; a toy illustration of the kind of consistency check automation makes trivial follows below.
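As an illustration only (the tier names and patch levels below are invented), a few lines of Python show the kind of consistency check an automated pipeline would run continuously across a landscape; real SAP automation tooling obviously does far more than this.

```
# Hypothetical sketch: flag SAP kernel patch drift across landscape tiers.
# Tier names and patch levels are made up; in practice they would be collected
# automatically by the build/monitoring tooling rather than hard-coded.
from collections import Counter

landscape = {
    "DEV": "7.53 PL 900",
    "COS": "7.53 PL 812",
    "PRD": "7.53 PL 900",
}

# Treat the most common patch level as the expected baseline.
expected, _ = Counter(landscape.values()).most_common(1)[0]

drift = {tier: patch for tier, patch in landscape.items() if patch != expected}
if drift:
    for tier, patch in drift.items():
        print(f"WARNING: {tier} is on kernel {patch}, expected {expected}")
else:
    print("All tiers are on the same kernel patch level.")
```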

Another plus of automation is the agility it enables. Suddenly, you can do things faster: when users want a system refresh, a restore from backup, or a system patched, these tasks can now be completed much more quickly. With automation, you can also surface these tasks in a portal that lets end users or project team members self-serve on the maintenance of the landscape. This agility keeps the project team satisfied because they can try out new ideas quickly. That is, of course, the fundamental premise of innovation: the ability to try something quickly, fail fast or, if it works, promote it quickly into production. If you want to innovate, you need to be agile, and if you want to be agile you must automate.


About the author:

Eamonn O'Neill is the co-founder of Lemongrass and EVP for the Americas region. With over 27 years' experience in the SAP arena, O'Neill brings a strong mix of technical and business leadership to the team. After founding Lemongrass in 2008, O'Neill pioneered the SAP on AWS line of business, which quickly became the only service Lemongrass offered.

]]>
Optimizing Network Operations During Times of Change https://www.datamation.com/data-center/optimizing-network-operations-during-times-of-change/ Thu, 06 Aug 2020 08:00:00 +0000 http://datamation.com/2020/08/06/optimizing-network-operations-during-times-of-change/

As the impact of COVID-19 has shifted many aspects of our personal and professional lives, one area of change is how we use the internet. With many people staying home, mobile usage is at an all-time high, upload rates have increased, and video conferencing has surged.

As individuals and companies have shifted their internet consumption habits, telecommunications companies have had to adapt rapidly to these changes.

With many countries depending on this digital infrastructure to keep their economies going, reliance on network performance has also grown significantly.

With these changing usage and behavioral patterns, many network incidents that could be reliably predicted in previous years are now much harder to prevent. Alongside these challenges, many telecom operators are under increased pressure to grow revenue and profitability.

In this article, we’ll review a few of the specific challenges faced by telecom companies during these times of change. We’ll then look at how AI and machine learning are being used to monitor and improve network performance.

The Challenges of Managing Network Performance and Usage Patterns During Times of Change

Before the global pandemic, broadband consumption patterns around the world were quite predictable — the majority of people were at work or school during the day, so residential usage would drop, and then increase again in the evening.

Now, with parents and children both at home for most of the day, everything from social media to video conferencing usage has skyrocketed. Without the predictable quiet periods in which network incidents could be resolved, many network providers have been scrambling to keep up with demand. As a result, minimizing network downtime and continuously forecasting network traffic to match changing usage patterns have become priorities, especially as countries that have emerged from the first wave of lockdowns now face a potential resurgence and a second period of quarantine.

Shifts in Uplink Traffic Leading to Network Issues

One common theme prior to the global pandemic was that most people's internet consumption consisted primarily of downlink usage: downloading web pages, streaming video, and so on.

Now, with more and more people working from home, upload rates have drastically increased. Whether it's remote learning, video conferencing, or uploading to social media, networks simply weren't designed to handle this much uplink traffic.

This means that while these companies need to monitor the network for faults, they simultaneously need to forecast future demand and re-engineer capacity to match it.

Similarly, now that many telco employees work from home, they also need to manage the network from a distance. Because their existing tools weren't built for this, telcos need to upgrade their tooling so that issues can be detected autonomously.

In short, they need to utilize their resources more efficiently and take advantage of autonomous network monitoring. This is where AI and machine learning come into the picture.

AI & Machine Learning for Network Monitoring

Many operators are now monitoring and optimizing their networks with the use of big data, machine learning, and artificial intelligence.

Although we're currently in the early days of AI adoption in the telecommunications industry, the complexity and data-richness of communications networks make the potential for disruption substantial.

To take advantage of these emergent technologies, two core applications of AI and ML for communications networks should be considered:

●  Anomaly Detection: As mentioned, with behavioral and data-consumption patterns shifting dramatically, detecting anomalies with static thresholds simply isn't viable. Instead, a branch of machine learning called unsupervised learning can be used to learn each individual metric's normal behavior on its own. As this normal behavior shifts, the anomaly thresholds automatically shift with it, resulting in greater granularity and fewer false positives (see the sketch after this list).

●   Demand Forecasting: With the increased reliance on network performance, accurate demand forecasting has become more important than ever. As with anomaly detection, AI and machine learning can take in 100% of the data to forecast user demand so the appropriate network capacity can be provisioned in time.
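To make the dynamic-threshold idea concrete, here is a minimal, hypothetical sketch using synthetic throughput data (pandas and NumPy assumed). The thresholds are recomputed over a rolling window, so they move as "normal" behavior moves; production systems use far richer unsupervised models, and this only illustrates the principle.

```
# Minimal sketch of dynamic anomaly thresholds on a single, synthetic metric.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
index = pd.date_range("2020-06-01", periods=2016, freq="5min")  # one week of 5-minute samples
throughput = pd.Series(
    100 + 10 * np.sin(np.linspace(0, 12 * np.pi, 2016)) + rng.normal(0, 3, 2016),
    index=index,
)
throughput.iloc[1500] -= 60  # inject a sudden downstream throughput drop

rolling = throughput.rolling("6h")              # baseline learned from recent behavior
mean, std = rolling.mean(), rolling.std()
lower, upper = mean - 3 * std, mean + 3 * std   # thresholds shift with the baseline

anomalies = throughput[(throughput < lower) | (throughput > upper)]
print(anomalies)  # flags the injected drop without a hand-tuned static threshold
```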

Now that we’ve discussed the application of AI for communications networks, let’s review a real-world use case and see how telcos are already taking advantage of their data.

Use Case: AI for Fixed Broadband Access Networks

Many telcos have transformed themselves into fixed and mobile service providers, which can include a complex mix of technologies, such as:

●      Fiber to the premises, node, or curb

●      Digital Subscriber Line (DSL)

●      Hybrid Fibre Coaxial (HFC)

●      WiFi

●      Satellite broadband

Each of these technologies will experience subscriber uplink and downlink incidents such as throughput drops, packet loss, and degradation in many other KPIs, which means each needs to be monitored in real time. In the example below, an anomaly was detected in downstream throughput in an HFC network.

Through the correlation engine that accompanies each anomaly, the AI-based monitoring solution was able to link the incident to the following related events (a simplified sketch of this kind of time-window correlation follows the list):

●      A drop in uplink throughput

●      A spike in upstream Codeword Errors (CER)

●      A drop in upstream Signal-to-Noise Ratio (SNR)
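A production correlation engine uses network topology, learned relationships and anomaly scores, but the hypothetical sketch below shows the simplest version of the idea: grouping anomalies whose time windows overlap the primary incident. All metric names and timestamps are invented.

```
# Hypothetical sketch: group anomalies whose time windows overlap a primary incident.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class Anomaly:
    metric: str
    start: datetime
    end: datetime


def correlated(primary: Anomaly, others: List[Anomaly],
               slack: timedelta = timedelta(minutes=10)) -> List[Anomaly]:
    """Return anomalies whose windows overlap the primary incident, allowing some slack."""
    return [
        a for a in others
        if a.start <= primary.end + slack and a.end >= primary.start - slack
    ]


t0 = datetime(2020, 7, 1, 20, 0)
primary = Anomaly("downstream throughput drop", t0, t0 + timedelta(minutes=30))
candidates = [
    Anomaly("uplink throughput drop", t0 + timedelta(minutes=2), t0 + timedelta(minutes=25)),
    Anomaly("upstream codeword errors spike", t0 + timedelta(minutes=1), t0 + timedelta(minutes=28)),
    Anomaly("upstream SNR drop", t0, t0 + timedelta(minutes=30)),
    Anomaly("unrelated DSL packet loss", t0 - timedelta(hours=6), t0 - timedelta(hours=5)),
]

for a in correlated(primary, candidates):
    print(f"correlated with primary incident: {a.metric}")
```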


Based on these incidents, the telco was able to be proactive about the anomalies and could notify customers in the region about the service degradation. What's more, by having the exact correlated anomalies that caused the incidents, the technical team was able to resolve the issue much faster than in previous cases.

Summary: AI for Telecom

As the global economy has become increasingly complex and connected, the reliance on network performance has never been greater. From shifting consumption patterns to surges in uplink traffic — telecom operators have had to adapt their networks rapidly, and in many cases remotely.

For these reasons, many telcos are starting to embrace AI and machine learning for network monitoring. In particular, two of the main applications of AI for communications networks are anomaly detection and demand forecasting.

Although we’re still in the early days of AI adoption, these global changes make it more apparent than ever that these emergent technologies present an opportunity to improve performance, increase efficiency, and stay ahead of the competition.

Author Bio:

Vikram Pulakhandam is Anodot's Solutions Director for AIPAC, leading pre-sales technical engagements across the region. Prior to joining Anodot, he led the Big Data Engineering team for an Australian telco operator. He has more than 20 years of mobile telco experience with a variety of vendors across the globe.

]]>
AI Your Staff Can Believe In, Now and After COVID-19 https://www.datamation.com/artificial-intelligence/ai-your-staff-can-believe-in-now-and-after-covid-19/ Fri, 31 Jul 2020 08:00:00 +0000 http://datamation.com/2020/07/31/ai-your-staff-can-believe-in-now-and-after-covid-19/

Seventy-three percent of executives are piloting or adopting AI in one or more business units, according to a survey conducted late last year in conjunction with the Accenture Technology Vision 2020, our annual guide to the technology trends that we believe will have the greatest impact on businesses over the next three years.

Yet, until recently, AI was mostly being used to enhance automation and execution; its value was primarily viewed in terms of cost reduction and efficiency. Now, however, leaders have recognized that by combining the almost limitless capacity of intelligent machines with human originality, flair and oversight, businesses can unlock new products, services, operational models and much more.

COVID-19 as a Change-Agent

Over the past several months, organizations in every sector have scrambled to meet the challenges of the COVID-19 pandemic, and during this time, the value of human-machine collaboration has never been clearer.

In response to the crisis, AI applications have been used to augment and assist human workers across a range of new, short-term use cases. AI-powered chatbots are assisting health workers as they screen and triage patients, algorithms are helping healthcare suppliers reconfigure their supply chains, and AI is even helping in the race to find a vaccine. For example, Insilico Medicine, a Hong Kong-based biotech company, has repurposed its AI platform to help accelerate the development of a COVID-19 drug. The company is now using machine learning to expedite the drug discovery process. 

It’s not only in the medical field that AI-machine collaboration is proving its worth. Many businesses are struggling to manage with a reduced workforce and the need to comply with social distancing rules. AI is helping business leaders dream up new solutions to these challenges and enabling companies to become much more flexible in the process.

Acceptance Accelerating Adoption

As more workers are exposed to AI tools and learn how to work effectively with them, any concerns they may have about the technology will subside, driving further adoption. This is important because, as outlined in a global study in 2019, employee adoption is one of the main barriers to scaling AI in enterprises.

The pandemic may provide the impetus needed to push past this barrier. Over the past few months, AI tools have helped keep people healthy and informed at work. Virtual healthcare assistants and AI-powered thermal cameras for fever detection, for example, are ensuring that people can return to the workplace as safely as possible. Other tools are helping to keep essential businesses running.

Innowatts, a startup that uses AI to manage surging electricity demands, is a case in point. The pandemic has brought significant volatility to energy demand as businesses temporarily shut down operations and employees were sent home. Innowatts helps organizations navigate this volatility through AI-enabled short-term forecasting. The intelligence derived from its AI helps companies make the timely adjustments needed to maintain their operations.

Bringing the Future Forward

The Technology Vision’s pre-pandemic survey of business leaders found that while 79% of respondents believed that collaboration between humans and machines will be critical to innovation in the future, only 37% reported having inclusive design or human-centric design principles in place. If we ran this survey today, I would expect the second number to be much higher.

Workers, governments and the public are seeing AI in the best possible light. Businesses have therefore never had a better opportunity to deploy AI tools. As they do so, they must be sure to design their tools in a human-centric way to ensure that workers remain on board.

Luckily, this has never been easier. Thanks to advances in technologies such as natural language processing and computer vision, AI tools can be completely intuitive for humans to use – as easy, in fact, as working with a human co-worker. Explainable AI, where the decision-making process of machines is laid bare for all to see, will be another important element in ensuring that the goodwill towards machines won during the pandemic is not lost.

If organizations can get this next stage of the AI journey right and deploy the tools at scale in a way that enables true human-machine collaboration, then the sky is the limit. Once and for all, enterprises can sweep away the constraints that have traditionally held AI back and open up whole new possibilities for their companies and workers.

 ABOUT THE AUTHOR:

Michael Biltz leads Accenture's Technology Vision R&D group and the enterprise's annual technology visioning process. Through the Technology Vision, Michael defines Accenture's perspective on the future of technology, looking beyond the current conversations about social, cloud, mobility and big data to focus on how technology will impact the way we work and live. The Technology Vision helps Accenture's clients filter the changes in the technology marketplace and understand which of them will impact their businesses over the next 3-5 years.


]]>