AI Readiness: Five Areas Businesses Must Prepare for Success in Artificial Intelligence

By Jessica Groopman
Edited by Jaimy Szymanski
Includes input from 27 industry leaders
September 2018

This research report from technology research firm Kaleido Insights introduces a framework for organizational preparedness—not only of data and infrastructure, but of the people, ethical, strategic, and practical considerations needed to deploy effective and sustainable machine and deep learning programs. This research is the first to market to articulate the need for readiness beyond data and data science talent. Based on extensive research and interviews with more than 25 businesses involved in AI deployments, the report identifies and examines five fundamental areas businesses must prepare for sustainable AI. Download the full report: https://www.kaleidoinsights.com/order-reports/artificial-intelligence-ai-readiness/


Table of Contents

Executive Recommendations 3
Introduction 4
  Definition of Artificial Intelligence
  The Struggle for Readiness 5
The Five Areas of AI Readiness 6
Strategy 7
  AI Transformation is an Extension of Digital Transformation
  Strategic Approaches to AI
  Lay the Foundation for AI Governance
  Measuring AI's Success
  Key Questions to Ask
People 13
  The AI Mindset
  Identify Key Personae and Ready Each Group Accordingly
  Address AI's Limitations & Cultural Stigma
  Best Practices
  Key Questions to Ask
Data 22
  Assess Enterprise Data Strategy
  Ready the AI Data Feedback Loop
  Leverage AI for Ongoing Enterprise Learning and Knowledge Management
  Best Practices
  Key Questions to Ask
Infrastructure 31
  Assess Architecture Needs and Evaluation Criteria
  Prepare Infrastructure
  AI Software Solutions
  AI Hardware
  Best Practices
  Key Questions to Ask
Ethics 39
  Organization & Resources
  Bias In, Bias Out
  Transparency, Consent, and Data Privacy
  Best Practices
  Key Questions to Ask
Next Steps Towards Artificial Intelligence 48
About the Author 49
About Kaleido Insights 49
Research Methodology 50
Ecosystem Inputs 50
Acknowledgements 50
Endnotes 51
Executive Recommendations

Although this research weaves best practices and considerations throughout each respective area of readiness, companies interviewed surfaced a number of overarching areas of advice, based on their AI implementation experiences.

• Investment in AI = investment in people. Early successes in the space show that the sum of human + AI is greater than either alone; thus, business preparedness and investment in AI requires proportionate preparedness and investment in people.
• Start small, pilot problems, fail forward. Businesses can realize immediate incremental value with narrow pilot efforts that are focused on solving concrete problems; increase value by learning from mistakes.
• Data and AI preparedness must align with broader data and business strategy. Starting small doesn't mean thinking small; instead, every AI initiative must begin by assessing data, people, and processes to support and align with broader business objectives, impacts, and governance structures.
• Lockstep coordination between technical and business subject matter experts must support every phase of the AI lifecycle, from design, research, and deployment, to management and optimization.
• Trust and transparency must infuse enterprise AI. From evangelizing the limitations of AI to demonstrating the benefits, clear communication and user support fosters adoption and curbs fears. Systems and process documentation and explainability are critical to mitigate the risks of AI's complexity and autonomy.
• Maturity won't be defined by any single application. Advanced AI deployments will be marked by the ability to infuse both user-facing services and interactions with back-end or enterprise process and supply chain optimization.
• Machine learning must translate to enterprise learning. Business leaders must create a culture of learning and adaptability by supporting people with skills development, workflows, and infrastructure to learn from interactions and productize domain knowledge.
Introduction

The fundamental business question of "what do we do with all of this data?" is colliding with our endless fascination for technological biomimicry. Artificial Intelligence (AI) is an unavoidable concept in the digital age, if not for its pervasive application, then by virtue of the fact that it provides a technological solution to information overload and the quest for context. Recent research finds some 80 percent of businesses plan to invest in AI in the next three years.1

Even though the techniques and concepts underlying AI are more than 50 years old, the recent renaissance in commercial environments is the result of a rather sudden convergence of three key trends: colossal data generation, better algorithms, and faster processing hardware. AI is already yielding business impacts across functions, including product development, process and workflow optimization, recommendation and personalization at scale, risk mitigation, and cybersecurity, among scores of others. Enterprise interest in AI has been compounded by its promises of efficiency in the form of speed, accuracy, agility, and access to insights embedded in "dark data"—a reference to the estimated 80% of unused enterprise data.2

Meanwhile, the technology giants of the world are leading the charge, not only in the amount of data they possess to train models, the frameworks and libraries they're open-sourcing, or the firmware and hardware for computing such colossal data, but in how AI-driven services are impacting consumer expectations. Every day we hear new stories of AI used for innovation, for ill, for intrigue; new techniques emerge; and data volumes and complexities grow. Innovators and change agents see the hype of AI everywhere but struggle to know where to begin.

This Kaleido Insights report examines the need for AI readiness and introduces a framework for organizational preparedness—not only of data and infrastructure, but of the people and processes needed to deploy effective and sustainable machine and deep learning programs. What follows is the culmination of extensive primary and secondary research to surface the barriers, enablers, and best practices of leading organizations as they go about deploying AI in business processes and services.

Definition of Artificial Intelligence

As is often the case with emerging technologies, different market constituencies define terms differently. Artificial intelligence (AI) evokes a special challenge (and emotionalism) as it is almost always compared to human intelligence—a domain which also struggles to be defined. Kaleido Insights defines AI as an umbrella term for the variety of tools and methods used to mimic cognitive functions across three areas: perception/vision, speech/language, and learning/analysis. A machine's ability to "cognate" is supported by multiple approaches—machine learning, deep learning, natural language processing (NLP), computer vision, and other existing and emerging techniques—several of which can be used at the same time for a given use case. Kaleido Insights acknowledges discrete differences among techniques, but for simplicity, this report uses "AI" interchangeably for applications involving machine learning and the other techniques mentioned above.
The Struggle for Readiness

Although the race towards AI is on, the vast majority of organizations struggle to know where to begin. An estimated 85% of organizations want to deploy AI, but have not.3 The gap between aspiration and execution is wide, and not only because data is a mess; the same study found fewer than 30% of enterprises surveyed had any kind of AI strategy in place. Barriers to readiness, never mind deployment, span the following areas:

FIGURE 1: BUSINESS, MARKET, AND SOCIETAL BARRIERS TO AI ADOPTION
Source: Kaleido Insights, "AI Readiness: Five Areas Businesses Must Prepare for Success in Artificial Intelligence," September 2018.

Beyond the significant emotional, societal, and competitive challenges lie an array of barriers to practical AI deployment, some of which are mentioned in Figure 1. Now—in what are no doubt the most nascent days of commercial AI deployment—is the time for organizations to grapple with the business, economic, cultural, and technical questions AI engenders. Our research finds the most common stumbling blocks to deployment lie in data, technology, and talent—each of which varies based on the nature of the business. More often than not, infrastructure is lacking to execute on data; technology barriers around processing and integration stunt deployments; and people at all levels are resistant to change. Achieving the benefits of AI requires that companies deeply understand more than its utility; it requires preparedness across five critical areas.
The Five Essential Areas of AI Readiness

AI readiness isn't merely a state of preparation, but a willingness and facility for implementation. This is a particularly difficult endeavor in the context of AI, given that we barely understand the implications of deploying information systems that behave like biological systems.

Today, most enterprise discourse and activity around AI preparation is in the context of data preparedness. Even if data is the paramount and most practical area "to ready" in order to deploy, it must be superseded by intent and by human assessment of impact. Our research finds AI readiness results from practical assessment across five areas: strategy, people, data, infrastructure, and ethics.

FIGURE 2: THE FIVE AREAS OF AI READINESS
Source: Kaleido Insights, "AI Readiness: Five Areas Businesses Must Prepare for Success in Artificial Intelligence," September 2018.
1: Strategy

INTENDED AUDIENCE
• Executives
• Innovation Strategists
• AI Leaders

READINESS ELEMENTS
• Digital Transformation vs. AI Transformation
• Strategic Approaches
• Foundations for Governance
• Measuring Early Success

For most organizations, the trouble isn't usually how to get started with AI, but knowing where. Not only can machine learning and other AI techniques be applied across virtually every industry, business function, and workflow, the field is also subject to immense hype, over-inflation, and promise. From chatbots to facial recognition, from automated reports to predictive maintenance, from procurement management to simulations for decision-making, AI's applications are vast. But while identifying use cases may intrigue, the starting point for any sustainable strategy is never the technology. The starting point begins with identifying the problems and objectives and setting goals.

AI transformation is an extension of digital transformation

In corporate environments, objectives tend to start with high-level mission statements and parse into the specific departmental, functional, product, team, and individual objectives that support them. How can the organization improve its product experience, its employee retention, its lead generation, or its partner channels, or accelerate R&D? In this way, AI becomes an extension of, not a bolt-on to, existing digital transformation efforts. This distinction is an important one, as companies risk following the siren hype of "building an AI strategy" rather than evaluating how existing programs could function better and letting problems guide solutions. While, obviously, there would be no AI without data, many companies struggle to understand the differences and similarities between digital transformation and AI transformation.
FIGURE 3: BRIDGING DIGITAL TRANSFORMATION AND AI TRANSFORMATION
Source: Kaleido Insights, "AI Readiness: Five Essential Areas Businesses Must Prepare for Success in Artificial Intelligence," September 2018.

To date, digital transformation has been marked by the digitization of information—a sort of "phase zero" for bringing an organization's programs and processes online. Born of the age of social media, and (often) driven by marketing, digital transformation has evolved from external (customer-facing) programs to internal and cross-functional ones. AI, on the other hand, is driven entirely by data. It is born of analytics, increasingly a discipline within every department. Digital transformation has brought about a proliferation of point systems; AI transformation benefits from systems integration. In both cases, people and culture aren't just essential for adoption, they are essential for activating insights and customer success in the digital age. At the end of the day, thinking strategically about AI becomes synonymous with thinking strategically about data.

Strategic approaches to AI

Companies tend to design strategies based on their current capabilities around data, where they sit in the value chain, who is designing the strategy, and what practical limitations exist. Depending on who is driving the program—business or technical leader—starting points can vary. Business leaders often look for areas of process automation, visibility, or cost savings, while those more technically minded tend to first think about bandwidth and access to training data. Virtually every company we interviewed emphasized organizational structure as an essential enabler of success—for short-term wins and over time.

Common Points of Origin for Enterprise AI
• Bottom-up: Single spokes or business functions select a few workflows to which applying machine learning will yield immediate benefits, then assess broader applications as early initiatives grow.
• Top-down: Executive-driven efforts to transform the entire business into a fully automated hub in which all interactions inform optimization and greater automation across every function and related products or workflows.
• Cross-functional: Dedicated groups set up specifically to foster digital transformation efforts, efficiencies, and shared tools, and to activate data and emerging technologies in support of business objectives, are another point of origin for AI ideation and experimentation.
Bottom-up approaches emerge from vertical business functions and solve one problem at a time

The most common testing beds for AI projects are business functions or vertical LOBs, namely their data analyst teams, as they are most accustomed to dealing with lots of data. In some cases, these are experimental efforts driven by internal "change agents" or data-savvy employees. In other cases, these efforts are the fruits of mandates and investments from leadership to modernize or extract more data-driven insights, often pushing specific business objectives (cost reductions, higher conversions, etc.). These groups have the greatest context for where AI techniques could be applied for maximum immediate impact.

Most companies interviewed recommend a bottom-up approach to support understanding and achieve relatively faster results. This is helpful for procuring more resources to expand efforts, as most organizations are funded at a project level. Many companies also point to the value of starting small because there are plenty of development resources available at low or no cost. Image recognition, captioning, natural language processing (NLP), clustering, recommenders, and text processing are reasonable starting places for enterprises given the variety of open source frameworks, APIs, and other industry tools and libraries available to all.4

"Start with big thinking, but work on one brick at a time," recommends Toby Cappello, VP of worldwide expert delivery services at IBM, who recommends beginning with projects that pair the business perspective with the technical development perspective. "Use the business perspective to break down big ideas into smaller initiatives that have substantial business value; leverage the development perspective to tool each one and aggregate learnings from the technical side." For example, if applying AI to a help desk environment, start by offering service agents a recommendation system; use their feedback to improve the model, aggregate learnings, and build trust. Then, look at tackling a consumer-facing application for customer support. (A brief illustrative sketch of such a pilot follows the discussion of top-down approaches below.)

Top-down approaches are driven by central organizational hubs

Hubs are responsible for designing corporate strategies and resource allocation. They are more commonly seen in tech companies or in legacy enterprises aggressively investing in becoming a tech company. In these early days of AI, relatively few AI projects are mandated through top-down approaches. Instead, corporate hubs are the enablers of broader education, investment, governance, strategic metrics, and communications. While broader top-down, multi-disciplinary strategies are important for longer-term product innovation and coordination across functions, AI is unique in that tangible value can be achieved even in point solutions. "The nice thing about machine learning vs. other tech like enterprise resource planning systems or even IoT is that you can pluck off a small part of a relatively low-risk business process and immediately drive efficiencies, eliminate tedium, or improve accuracies," shares Eric Berridge, managing partner at Bluewolf, an IBM-owned cloud consultancy. While these efforts can prove value and indeed empower individual LOBs, they risk hitting a proverbial value ceiling if not integrated.
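To make the "start small with off-the-shelf tools" point concrete, here is a minimal, illustrative sketch of the kind of bottom-up pilot described above: suggesting similar resolved tickets to a help-desk agent using open source text-processing tools. The library choice (scikit-learn), the sample tickets, and the TF-IDF nearest-neighbor approach are assumptions made for this example, not techniques prescribed by the report.

```python
# Illustrative sketch only: a tiny help-desk recommender built with open
# source tools (scikit-learn). Tickets and approach are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Hypothetical corpus of previously resolved tickets (thousands in practice).
resolved_tickets = [
    "Password reset link not arriving in email",
    "License activation fails after upgrading to the latest version",
    "Cannot export report to PDF, export button greyed out",
    "Two-factor authentication codes rejected on login",
]

# Represent tickets as TF-IDF vectors and index them for similarity lookup.
vectorizer = TfidfVectorizer(stop_words="english")
ticket_vectors = vectorizer.fit_transform(resolved_tickets)
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(ticket_vectors)

def recommend(new_ticket: str):
    """Return the most similar resolved tickets with a rough similarity score."""
    query = vectorizer.transform([new_ticket])
    distances, positions = index.kneighbors(query)
    return [(resolved_tickets[p], round(1 - d, 2))
            for p, d in zip(positions[0], distances[0])]

print(recommend("user says the password reset email never shows up"))
```

Agent feedback on which suggestion actually resolved a case can then be logged as labeled data to refine the model, which is exactly the feedback loop the bottom-up approach relies on.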
Cross-functional programs are another common catalyst for enterprise AI programs

Unlike the hub or the spokes, the core role of these groups is to connect the two, specifically to incubate new concepts, drive innovations and business objectives more efficiently, reduce technology and vendor redundancy, and increase collaboration, interdepartmental workflows, and so on. As AI programs evolve (from very narrow efforts in single spokes to more enterprise-wide learning and optimization), these liaisons are critical. Over time, more advanced organizations work toward an integrated and shared platform that provides AI-related services across the organization. Facebook's FBLearner Flow, an enterprise-wide machine learning platform for Facebook's optimization, is an infamous example of what this looks like in a technology platform company.5
In other industries, it starts with people infrastructure. Capital One has a Digital Practice which supports all LOBs by providing dedicated expertise in data science, mobile, APIs, and beyond to collaborate with product leads on new solutions. Wells Fargo recently developed an Enterprise AI Solutions group, whose charter is to identify both opportunities for new AI applications and integration of existing ones across the entire company. "AI was starting to emerge all over the business and we realized that we needed a dedicated group (beyond the Innovation Group) to optimize Wells Fargo's use of AI," shares David Newman, SVP and strategic planning manager for Innovation at Wells Fargo. This group:
• Takes inventory of all AI tools used
• Identifies areas of duplication and opportunities to standardize
• Surfaces best practices for idea-sharing and faster deployments
• Develops new partnerships and integrations with its technology organization

Starting small doesn't mean thinking small: it means laying the foundation for AI governance

While starting with smaller initiatives can yield relatively quicker wins, that doesn't mean they are any less strategic. While enterprises may have some governance processes in place to support digital, even single AI initiatives introduce new considerations around workflows, onboarding, collaboration, program design, and ongoing management. The broader subject of AI governance will be covered in a future report, but for the purposes of readiness, companies we interviewed repeatedly articulated the need to conduct due diligence on both the people and the processes associated with any AI initiative, no matter how small.

From a people perspective, user journey analysis is important for ensuring humans are comfortable using AI in context, and for making sure AI is the best tool for solving the problem in question. To make AI work for the business, you have to put it in the path of existing work. The program has to be deployed in a way that doesn't just invite feedback, but implements it in a manner that shows people how and why this way is better. "It's about using the interface to surface insights employees didn't have, or had to spend time searching for before," shares Gregg Spratto, VP of Operations with Autodesk. Einat Haftel, VP of product management at Informatica, articulates why these early steps are strategic: "Lowering the learning curve and speeding up productivity with data widens access, and the more people that can become experts in data management, the more value to the organization."

Informatica includes the following in its onboarding process with customers:
• Human support: sitting down directly with users to gain feedback on what is and is not important to show, where they can easily (or not so easily) make changes, and how
• Iteration to reduce false positives: in early phases of deployment, this is essential both for model optimization (supervised learning involving users) and for improving trust
• Tailored UX/UI: interfaces and dashboards displaying or ranking confidence levels for outputs surfaced (e.g., 78% confidence these are product SKUs); a brief illustrative sketch follows below
• Making requested changes to the user interface (UI) or user experience (UX): this doesn't mean you have to change the core model

From a process perspective, assess how AI will impact existing governance structures such as triage, queues, feedback loops, employees' access to data, and so on. Assessing this early on informs where breakdowns are occurring in the process and whether automation would help or hurt. For instance, if applying machine learning to routing particularly irate customers in a service context, you wouldn't want a chatbot to route them to the end of the normal queue, effectively putting them at the back of another line.
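The confidence-display practice noted in the onboarding list above lends itself to a simple illustration. The following minimal sketch, with hypothetical labels and an assumed 0.75 review threshold (taken neither from the report nor from Informatica's product), shows one way outputs might be surfaced alongside their confidence levels, routing low-confidence cases to a person rather than asserting them:

```python
# Illustrative sketch of surfacing model confidence to end users.
# Labels, scores, and the 0.75 threshold are hypothetical assumptions.
def present_prediction(label: str, confidence: float, threshold: float = 0.75) -> str:
    """Format a model output for display, flagging low-confidence cases for review."""
    pct = round(confidence * 100)
    if confidence >= threshold:
        return f"{pct}% confidence: classified as '{label}'"
    return f"Low confidence ({pct}%): '{label}' suggested, please review"

# Hypothetical outputs from a data-classification model.
for label, confidence in [("product SKU", 0.78), ("customer ID", 0.41)]:
    print(present_prediction(label, confidence))
```

Routing low-confidence outputs to people in this way both reduces the false positives users see and generates the human feedback needed for the supervised iteration described above.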
As is the case with digital transformation efforts, top-down and ground-up efforts combined create the best results. Initiatives that take a bolted-on, very technical, or altogether new approach will face greater challenges and steeper barriers to adoption. The art here is aligning AI workflows with existing workflows and behaviors to drive trust and adoption.

Measuring AI's success must go beyond measuring financial impacts

It is important that early efforts yield tangible value, which naturally leads to the question of measurement. Although small prototypes are useful for demonstrating "the art of the possible" to decision-makers in concrete monetary impacts—e.g., cost reductions, lifts in conversion, pipeline optimization—companies we interviewed reinforced the need to begin by measuring more than ROI. "Even though we started with traditional cost metrics, we quickly realized we had to expand how we measured value," explains Gregg Spratto, VP of Operations with Autodesk. "Success should really be measured by the ability to move an initiative from R&D concept to deployment in production and scale in the real world." This means measuring the impact on people. Begin with metrics that assess value, often in the form of productivity or sentiment, to a user or function. For example:
• Time saved or spent (for agents, customers, analysts, etc.)
• Accuracy gained
• Number of successful outputs (e.g., cases resolved, conversions, transactions, referrals)
• Identification of fake, garbage, or fraudulent inputs (e.g., referrals, threats)
• Improved sentiment (e.g., shopping, sales, support, invoicing process)
• Improved Net Promoter Score (NPS)
• Feedback from customers or partners
• Impact on other existing metrics

Whether building or buying, businesses should beware of vendor promises involving metrics, warn some vendors in the space. When you hear about other companies touting "97 percent improvements," know there are two general buckets for measuring AI model performance (illustrated in the brief sketch below):
• General metrics, such as: of the millions of samples or questions asked, how many could be addressed given the ontology of the system?
• Specific metrics, such as: for a specific case, e.g., a customer request for an address change, 97 percent of the time the AI can resolve that one request.

As efforts evolve and programs mature, measurement itself must evolve too. Because AI constantly learns, "lift" can be a moving goalpost, and optimization in one area (e.g., risk reduction) can impact another area (e.g., how to shift from reactive risk analysis to proactive risk mitigation).

"You don't want to automate processes that were crappy to begin with. Good candidates for automation can sometimes be good candidates for policy change." — Gregg Spratto, VP of Operations with Autodesk
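The distinction between general and specific metrics above can be made concrete with a little arithmetic. The sketch below uses a hypothetical request log (the fields and numbers are invented for illustration, not drawn from the report) to show how the same system can report a modest overall coverage figure alongside a near-perfect resolution rate for one narrow request type:

```python
# Illustrative sketch of the two measurement "buckets": general coverage vs.
# a specific per-request-type resolution rate. The log below is hypothetical.
requests = [
    {"type": "address_change",  "in_scope": True,  "resolved_by_ai": True},
    {"type": "address_change",  "in_scope": True,  "resolved_by_ai": True},
    {"type": "billing_dispute", "in_scope": True,  "resolved_by_ai": False},
    {"type": "legal_inquiry",   "in_scope": False, "resolved_by_ai": False},
]

# General metric: of everything asked, how much could the system address at all?
coverage = sum(r["in_scope"] for r in requests) / len(requests)

# Specific metric: for one case (address changes), how often did the AI resolve it?
address_changes = [r for r in requests if r["type"] == "address_change"]
resolution_rate = sum(r["resolved_by_ai"] for r in address_changes) / len(address_changes)

print(f"Coverage across all requests: {coverage:.0%}")                # 75%
print(f"Resolution rate for address changes: {resolution_rate:.0%}")  # 100%
```

A vendor quoting a single headline figure may be reporting only the second kind of number, which is why asking which bucket a metric falls into is part of measurement readiness.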
Regardless of application, CX is true north

Just because AI is applicable across business functions does not mean businesses should de-prioritize customer experience (CX) within strategic planning efforts. Impact on CX—ease, security, agility, access, speed, meaningful personalization, improvement across the user journey—is the paramount assessment of value for any AI program, B2B or B2C. "No matter which lens you're assessing readiness through—of data, of infrastructure, of employees—the ultimate question must always be: how will this impact the end-to-end customer experience?" shares Gianni Giacomelli, Chief Innovation Officer at Genpact. Whether they are consumers, employees, or business partners, humans are the architects and consumers of AI in every application.

Strategy: Key Readiness Questions
• How will AI initiatives support existing digital transformation efforts and data strategy?
• Where will design, development, and management for AI initiatives sit in the organization?
• What are adjacent use cases to which we can apply learnings, metrics, or similar techniques?
• How does our internal culture facilitate and enable innovation?
• How will organizational structures support governance and scale of AI deployments and collaboration?
• What level of awareness do we have of our data inventory, pipeline, and integration?
• How will ongoing efforts be driven from the ground up but empowered from the top down?
This concludes the first of the five essential areas businesses must prepare for artificial intelligence. While Strategy sets the course of an organization's AI journey, it is only a plan on a page without anchoring readiness across PEOPLE, DATA, INFRASTRUCTURE, and ETHICS. Download the full research report to discover how to prepare your organization for AI across each of these areas.

PURCHASE THE FULL RESEARCH REPORT

Report purchase includes:
• 54-page research report, AI Readiness: Five Areas Businesses Must Prepare for Success in Artificial Intelligence, including real-world examples, pragmatic recommendations and best practices, frameworks to activate, and endnote resources, sourced from more than 27 research interviews
• Twelve (12) high-resolution graphics and frameworks visualizing research findings
• Two (2) downloads per purchase
• One (1) complimentary call with the lead analyst for a report overview discussion

To learn more about this report, Kaleido Insights, or our ongoing AI coverage:
• Visit the report page: https://www.kaleidoinsights.com/order-reports/artificial-intelligence-ai-readiness/
• Contact us directly with questions or package inquiries
• Find us at an upcoming event
• Look out for more AI coverage and research by subscribing to our newsletter
RESEARCH METHODOLOGY

This research was developed through extensive primary and secondary qualitative research methods. We interviewed 27 market influencers, vendors, and adopters between September 2017 and June 2018. We also conducted countless briefings and discussions with industry innovators in the artificial intelligence, big data, cloud, and related software and hardware markets. Input or mention in this document does not represent a complete endorsement of the report by the individuals or the companies listed herein.

ECOSYSTEM INPUTS
• Gregg Spratto, VP Operations at Autodesk
• Ashish Bansal, Senior Director of Data Science, Merchant Products Lead at Capital One
• David Newman, SVP and Strategic Planning Manager for Innovation at Wells Fargo
• Jeetu Patel, Chief Product Officer at Box
• Clayton Clouse, Senior Data Scientist at FedEx
• Tatiana Mejia, Group Product Marketing Leader at Adobe
• Gianni Giacomelli, Chief Innovation Officer at Genpact
• Toby Cappello, VP WW Expert Delivery Services, Watson and Cloud Platform at IBM
• Einat Haftel, VP of Product Management at Informatica
• Jeff Kavanaugh, Senior Partner at Infosys
• Helena Carre, EMEA Omnichannel Analytics Lead at Kimberly-Clark
• Julien Sauvage, Senior Director of Product Marketing, Einstein at Salesforce.com
• Kumar Srivastava, AI Fintech Expert at a stealth-mode AI start-up
• Jay Klein, Chief Technology Officer, and Sahar Dolev-Blitental, Director of Marketing, at Voyager Labs
• Brian Schwarz, VP Product Management at Pure Storage
• Ian Collins, CEO at Wysdom.ai
• Tom Kraljevic, VP Engineering at H2O.ai
• Mary Beth Ainsworth, AI and Language Strategist at SAS
• Robin Hauser, Documentary Film-maker and Director of "Bias"
• Eric Berridge, Managing Partner at Bluewolf
• Veena Gundavelli, CEO at Emagia
• Girish Mutreja, CEO and Founder at Neeve Research
• Jem Davies, VP, ARM Fellow, and GM of Machine Learning, and Rhonda Dirvin, Director of IoT and Embedded Systems, at ARM
• Srikanth Velamakanni, Group Chief Executive and Executive Vice-Chairman at Fractal Analytics
• Alan Anderson, Director of Enterprise Solutions at IPSoft
• Ted Shelton, Chief Customer Officer at Catalytic
• Martin Stoddard, Principal Director of Webscale Services at Accenture

With additional inputs from Lyft, Western Union, Discover, Visa, Jet.com, Uber, Kia Motors, Microsoft, and Oracle.

ABOUT KALEIDO INSIGHTS

Kaleido Insights is a research and advisory firm focused on the impacts of disruptive technologies on humans, organizations, and ecosystems. Our industry analysts provide business leaders with clarity amidst a fragmented technology landscape. Kaleido advisory relationships, webinars, speeches, and workshops are grounded in research rigor, impact analysis, and decades of combined expertise. Innovators are realizing that implementing each new technology isn't enough, especially as business models are disrupted, and keeping up is becoming more difficult. Our mission is to enable organizations to decipher, foresee, and act on technological disruption with agility, based on our rigorous original research, trends analysis, events, and pragmatic recommendations. If you're interested in building a relationship with our analysts, we'd love to hear from you. Please email info@kaleidoinsights.com to start a conversation, or visit www.kaleidoinsights.com to learn more about our offerings.
