Three ways CIOs can successfully scale AI
As artificial intelligence (AI) has proven its value, many companies have jumped in headfirst, investing in data experts and technologies to get into the AI game. Yet, despite these investments, only 8% of companies we recently surveyed are engaging in practices that enable AI (technologies that can perform cognitive functions associated with human minds) and other advanced analytics at the scale required to unlock the greatest value.
Such practices include having a unified AI vision among leaders, using standard methodologies and agile development teams, tailoring talent strategies to enable AI at scale, and embedding AI in decision-making processes.
Clearly, IT's vast experience in massive technology deployments enables it to help drive these and other efforts needed to scale AI, such as workflow redesign, communication to all employees, and value tracking. But with so much to do, where should CIOs start?
We're finding that CIOs who successfully scale AI focus on three key activities: forging partnerships with business and analytics leaders, prioritizing initiatives based on a network effect, and building foundational technology and tools for adaptability.
Forging partnerships with analytics and business leaders
Close collaboration, shared accountability, and joint effort among business, IT, and analytics leaders are vital to creating enterprise-wide AI capabilities and breaking down the data, functional, and cultural silos that often plague AI initiatives. At one large European bank, the CIO facilitated the assembly of this vital coalition. As part of his work, he shared with business leaders how key capabilities, such as the ability to extract and analyze customer data in real time, would become economically feasible and business-as-usual by 2025. This vision generated excitement for AI, prompting business leaders to fill whiteboards with ideas for how they could tap into such capabilities. This clear view of the possible helped the organization create a cohesive roadmap for AI and built momentum for early use cases.
As the bank launches new use cases, the CIO dispatches IT SWAT teams to work side by side with business experts, data experts, and designers as part of agile delivery teams to preempt any technology bottlenecks that may affect deployments or end-user adoption. When issues arise, such as blocked data access, IT is on hand to get teams back on track quickly.
Prioritizing AI initiatives to gain a network effect
For many companies, the question isn't whether to pursue AI, but where to deploy it. Certainly, use case value, feasibility (that is, how difficult it is to implement), and time horizon are critical factors in use case selection.
But beyond these factors, CIOs should consider the potential network effect they can create based on their deployment choices. The CIO at a large telecom provider worked with business leaders to prioritize three closely related sales and marketing use cases: precisely segmenting customers, identifying the next-best product to buy, and predicting customer churn. By considering the totality of the data cleansing work necessary for these use cases, they could build a 360-degree customer view within one year, laying the groundwork for even more advanced use cases that would deliver hundreds of millions of dollars in additional value for the company.
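This network-effect logic can be sketched as a simple scoring heuristic: use cases that share data assets get a bonus, because cleansing those assets once benefits every initiative that depends on them. The use cases, weights, and data asset names below are illustrative assumptions, not figures from the telecom example:

```python
# Illustrative sketch: rank candidate AI use cases by value, feasibility,
# and a "network effect" bonus for shared data assets. All names and
# weights here are hypothetical.

use_cases = {
    "customer_segmentation": {"value": 8, "feasibility": 7, "data": {"crm", "transactions"}},
    "next_best_product":     {"value": 9, "feasibility": 5, "data": {"crm", "transactions", "web_logs"}},
    "churn_prediction":      {"value": 7, "feasibility": 6, "data": {"crm", "call_center"}},
    "warehouse_robotics":    {"value": 6, "feasibility": 3, "data": {"sensor_feeds"}},
}

def network_bonus(name: str) -> int:
    """Count data assets this use case shares with any other candidate."""
    own = use_cases[name]["data"]
    others = set().union(*(uc["data"] for n, uc in use_cases.items() if n != name))
    return len(own & others)

def score(name: str) -> float:
    uc = use_cases[name]
    # Weight value and feasibility equally; reward each shared data asset,
    # since cleaning it once pays off across every dependent use case.
    return uc["value"] + uc["feasibility"] + 2 * network_bonus(name)

ranked = sorted(use_cases, key=score, reverse=True)
print(ranked)
```

Under these assumed weights, the three customer-facing use cases rise to the top together despite the standalone appeal of other candidates, which is the network effect the prioritization is meant to capture.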
Building foundational technology and tools for adaptability
One of the biggest mistakes we've seen CIOs make is trying to use legacy technologies to power AI capabilities. They're simply neither flexible enough nor cost-effective for data-intensive, power-hungry AI models. While the CIOs of the European bank and the telecom company invested in different technologies, their selection criteria were very similar and included:
- Open architectures for data lakes, data management tools, and AI software, either based on open source technology or compatible with it. This enables them to support a growing number of use cases across geographies and business units and easily incorporate rising data volumes, new data types (such as voice or image data), additional capabilities (such as real-time or streaming), and different AI techniques (from machine learning to deep learning), all at a fraction of the cost of legacy systems.
- System independence, so AI systems and supporting technologies, such as security, administration, storage, and data environments, can run where needed: in a cloud, on-premises, or in a hybrid environment. This enables organizations to easily scale environments up (or down) as data processing needs change and to move AI solutions from development to production more seamlessly.
- Standard AI technologies and methodologies (for example, to reuse code), along with partnerships with third-party data brokers, IoT solution providers, and others, to ensure interoperability of AI systems and enable AI talent to deploy quickly where needed with a common set of reusable tools. Such standardization can, for example, allow AI teams to easily link a supply chain forecasting tool with a new AI-driven inventory management system for a more seamless order fulfillment process.
In each case, the CIOs implemented AI systems in parallel with their legacy systems, using a stepwise approach based on their AI roadmap to maximize ROI, rather than attempting costly bottom-up data cleansing or system integration efforts that might not deliver a return for years.
Bringing AI to scale is no small task. Ultimately, by viewing their roles as facilitators rather than IT service providers, and by focusing on these key activities, CIOs can make a significant impact on their organization's ability to unlock AI's full potential.
Tamim Saleh, Senior Partner at McKinsey
Tamim Saleh is a senior partner in McKinsey’s London office.
He is a leader in data analytics with expertise in using data modelling in strategic transformations across industries, including energy, technology, banking, and finance. As the head of McKinsey's Analytics Practice in Europe, he works to enrich the firm's capacity to employ sophisticated analytics when helping executives make informed business decisions and put in place practices that lead to stronger ongoing performance. His other areas of expertise include M&A, business and IT transformation, cloud computing, shared services, and outsourcing. Prior to joining McKinsey, he was the global leader of Big Data at BCG, and before that he was an executive partner at IBM, where he led the Strategy and Change practice area in Financial Services in the UK, Ireland, Benelux, and the Middle East and Africa.