Explainable AI might overcome the distrust that enterprise network engineers have for AI/ML management tools that could streamline network operations.

IT organizations that apply artificial intelligence and machine learning (AI/ML) technology to network management are finding that AI/ML can make mistakes, but most organizations believe that AI-driven network management will improve their network operations. To realize these benefits, network managers must find a way to trust these AI solutions despite their foibles. Explainable AI tools could hold the key.

A survey finds network engineers are skeptical

In an Enterprise Management Associates (EMA) survey of 250 IT professionals who use AI/ML technology for network management, 96% said those solutions have produced false or mistaken insights and recommendations. Nearly 65% described these mistakes as somewhat to very rare, according to the recent EMA report “AI-Driven Networks: Leveling Up Network Management.”

Overall, 44% of respondents said they have strong trust in their AI-driven network-management tools, and another 42% slightly trust these tools. But members of network-engineering teams reported more skepticism than other groups, such as IT tool engineers, cloud engineers, and members of CIO suites, suggesting that the people with the deepest networking expertise were the least convinced. In fact, 20% of respondents said that cultural resistance and distrust from the network team was one of the biggest roadblocks to successful use of AI-driven networking. Respondents who work within a network-engineering team were twice as likely (40%) to cite this challenge.

Given the prevalence of errors and the lukewarm acceptance from high-level networking experts, how are organizations building trust in these solutions?

What is explainable AI, and how can it help?

Explainable AI is an academic concept embraced by a growing number of providers of commercial AI solutions. It’s a subdiscipline of AI research that emphasizes the development of tools that spell out how AI/ML technology makes decisions and discovers insights. Researchers argue that explainable AI tools pave the way for human acceptance of AI technology, and they can also address concerns about ethics and compliance.

EMA’s research validated this notion. More than 50% of research participants said explainable AI tools are very important to building trust in the AI/ML technology they apply to network management. Another 41% said they were somewhat important. Majorities of participants pointed to three explainable AI tools and techniques that best help with building trust:

Visualizations of how insights were discovered (72%): Some vendors embed visual elements that guide humans through the paths AI/ML algorithms take to develop insights. These include decision trees, branching visual elements that display how the technology works with and interprets network data.

Natural language explanations (66%): These explanations can be static phrases pinned to outputs from an AI/ML tool, or they can come in the form of a chatbot or virtual assistant that provides a conversational interface. Users with varying levels of technical expertise can understand these explanations.

Probability scores (57%): Some AI/ML solutions present insights without any context about how confident they are in their own conclusions. A probability score takes a different tack, pairing each insight or recommendation with a score that tells how confident the system is in its output, as the sketch after this list illustrates. This helps the user determine whether to act on the information, take a wait-and-see approach, or ignore it altogether.
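The last two techniques are simple enough to sketch in code. The following Python example is a minimal, hypothetical illustration of pairing an AI-generated insight with a natural-language explanation and a probability score; the NetworkInsight class, the triage thresholds, and the sample data are assumptions made for illustration and do not come from the EMA report or any vendor product.

```python
# Hypothetical sketch: pairing an AI-driven network insight with a
# confidence (probability) score and a plain-language explanation.
# Class names, thresholds, and example data are illustrative only.
from dataclasses import dataclass

@dataclass
class NetworkInsight:
    summary: str        # what the AI/ML system concluded
    explanation: str    # natural-language rationale for the conclusion
    confidence: float   # the model's probability score, 0.0 to 1.0

def triage(insight: NetworkInsight) -> str:
    """Suggest an action based on how confident the system is."""
    if insight.confidence >= 0.90:
        return "act"            # high confidence: act on the recommendation
    if insight.confidence >= 0.60:
        return "wait-and-see"   # moderate confidence: monitor before acting
    return "ignore"             # low confidence: likely a false positive

insight = NetworkInsight(
    summary="Packet loss on uplink sw-core-01 likely caused by a failing optic",
    explanation=("CRC errors on port 49 rose sharply over the last hour while "
                 "traffic volume stayed flat, a pattern the model associates "
                 "with transceiver degradation."),
    confidence=0.87,
)

print(f"{insight.summary}\n  why: {insight.explanation}\n"
      f"  confidence: {insight.confidence:.0%} -> {triage(insight)}")
```

In practice, the thresholds that separate “act” from “wait-and-see” would be a policy decision for the network team; the point of the sketch is only that surfacing the score and the rationale alongside the insight gives operators something to evaluate rather than a black-box verdict.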
Respondents who reported the most overall success with AI-driven networking solutions were more likely to see value in all three of these capabilities.

There may be other ways to build trust in AI-driven networking, but explainable AI may be one of the most effective and efficient: it offers transparency into AI/ML systems that might otherwise be opaque. When evaluating AI-driven networking products, IT buyers should ask vendors how they help operators develop trust in these systems with explainable AI.