The McKinsey Global Institute and UNDP’s Human Development Report Office co-hosted a roundtable on “Harnessing the Opportunities of AI and Digital Transformation to Advance Human Development” with 30+ technology, business, and social sector leaders. It was a lively discussion with as many points of well-reasoned divergence as convergence. As we started the conversation, three broad questions were posed for us:
- To what extent is AI truly new and unprecedented? Is there something distinctive about what's happening in AI that makes a difference to prospects for advancing human development?
- What does it mean for people across the life cycle – for young children, for teenagers, in the world of work, in social interactions, for older people, and for people living with disabilities?
- What kinds of policies, institutions, and incentives do we need to put in place so that these technologies augment what people can do, rather than replace what people do?
Overall, there was excitement and optimism about the impact of individual AI use cases, but the jury was out as to whether, in aggregate (or to what degree), the impact of AI on human development would be distinctive and different from that of other technologies. There were also concerns that a significant set of people would be left behind; more than half of us felt the chances of that happening were better than even.
Highlights of the discussion included:
1. “If AI is the solution, what is the problem?”
There was universal agreement that effective AI and digital innovation must start with addressing real human problems – not merely improving the efficiency of existing solutions but doing things that people really want and need. Participants highlighted novel on-the-ground AI use cases across sectors that they were excited about, both for their potential to solve human problems and because of evidence of successful scaling in many instances. Specific examples that people in the room are working on included:
- “Personalized development coaches” for underserved children in early childhood education, to set a generation on track for long-term cognitive development.
- Rationalizing drug design by using machine learning models, not only boosting the efficiency of discovery but also widening the range of health problems addressed.
- Automating construction quality inspection and using machine learning to predict which housing is most vulnerable to shocks.
- Improving speed and effectiveness of humanitarian responses by using climate and weather data.
- Skills matching in the labor market using automated extraction of data from resumes and job descriptions, enabling matches that HR professionals might miss.
- Optimal distribution of work across various actors – human agents, automated or AI-enabled agents, and full automation – to meet business needs.
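To make the skills-matching example above concrete, here is a minimal illustrative sketch of matching via automated skill extraction. The skill vocabulary, sample texts, and scoring function are all hypothetical assumptions, not drawn from the roundtable; real systems use far richer extraction (parsers, embeddings, taxonomies) than this.

```python
# Hypothetical sketch of skills matching via automated extraction:
# score a resume against a job description by overlap of extracted skill terms.
# The skill list and texts below are illustrative assumptions.

SKILLS = {"python", "sql", "logistics", "welding", "nursing", "accounting"}

def extract_skills(text):
    """Naive extraction: keep only known skill terms found in the text."""
    words = {w.strip(".,").lower() for w in text.split()}
    return words & SKILLS

def match_score(resume, job_description):
    """Jaccard similarity between the extracted skill sets (0..1)."""
    r, j = extract_skills(resume), extract_skills(job_description)
    union = r | j
    return len(r & j) / len(union) if union else 0.0

resume = "Experienced in Python and SQL, with some accounting exposure."
job = "Seeking analyst skilled in SQL and Python."
print(round(match_score(resume, job), 2))  # → 0.67
```

Even this toy version hints at why automated matching can surface candidates a keyword-scanning recruiter might miss: the score rewards overlap in a controlled vocabulary rather than exact phrasing.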
2. “Will AI’s impact be unprecedented? And is it enough on its own?”
We saw AI as something that adds exciting new capabilities to the world, but it is not without past parallels, nor is it a standalone silver bullet. Past technological breakthroughs such as electricity, vaccination, radio, and mobile telephony have had dramatic impacts, similar to what we might expect from AI. And AI has many important dependencies on “traditional” technologies and capabilities.
- Much of current AI innovation focuses on the “post-literate, connected world,” but basic connectivity remains a huge barrier to AI adoption in some regions. Digital public infrastructure (DPI) provides the population-scale “rails” to carry AI solutions to all members of society; half of all value from DPI-enabled interactions could accrue to individuals.
- Another vital dependency is organizational capability. Users of AI will need to transition from being “consumers” of solutions to “service delivery” entities that build capabilities in maintenance, networking, and repair for AI/technology deployments.
3. “Are we putting human agency and empowerment at the centre of AI?”
Even as it tries to tackle many real human problems, the question is whether AI keeps human beings in focus. The group felt AI should augment, rather than replace, human capabilities and interactions – and how it does that is still evolving.
- People are relational beings and AI gives us new language to navigate “exchanges” between individuals. This new language could reduce friction in such interactions (e.g., by making knowledge transfer easier) but it can also raise asymmetries.
- Automation and AI can provide “nudges” that help human beings make optimal choices and decisions – but rather than monitoring and policing frontline workers into doing work they cannot do, it should help them experience joy and motivation in the work they can do.
- It is, in fact, not clear what work humans can do better than machines – the border between human and AI capabilities is fuzzy and evolving (e.g., motor skills are on the “human” side of the border, but this might change). Deciding where that border is and whether to cross it is a complex societal question with implications for human agency.
4. “Are the incentives and accountability clear to those using AI?”
We agreed it was vital to get the underlying economic incentives right – but the model for doing so is not clear. Just as hunger is created not by a lack of food but by a lack of purchasing power, so too the flow of money, investment, and effort into AI will respond to purchasing power – and accountability must go hand in hand with incentives.
- The economic models for technology development have historically been created by large organizations (ranging from pay-per-use models to models driven by advertising or by access to training data). The question is how to create incentives (especially non-monetary ones) that influence ethical usage and adoption. Research and grants need to be channelled toward themes that are neglected because a market for them is not easily made.
- Innovation by large organizations may matter, particularly for some kinds of AI models where scale makes a difference. Small AI teams are at different levels of maturity and need mentoring. The model of collaboration on AI needs to evolve to take advantage of scale while not concentrating innovation in monolithic entities.
- Alongside incentives, most agreed it would be important to be clear about accountability. The consequences of AI cannot be brushed away as “unintended” – accountability for AI’s consequences shouldn’t rest on the end user alone.
5. “AI is not something that happens to us – the choices are open. Are we gathering the right data to make the right decisions?”
As AI develops, many potential risks might unfold. New philosophies and cooperation mechanisms will be needed to quantify and measure AI’s impact and make the right choices.
- If AI leads to centralized solutions (monolithic, static models), biases can become pervasive and unavoidable. For example, there is a real risk of AI-based lending discriminating against people just as humans do, which would make it a tool for amplifying human biases. Culturally diverse training data are vital: as we train algorithms, the “cultures” embedded in the data we use – reflecting how we have interacted with each other for centuries – will be reinforced in insidious ways and influence global commerce.
- AI models can be wrong. They are incredible guessing machines and incredible dreaming machines – but this is troublesome when they’re used for high-risk applications. AI also makes it easier to create "fabricated realities" and subvert human development efforts around the world. Trust and verification mechanisms are critical.
- The community should define the benchmarks of what AI should achieve, based on what progress means for individuals. Such ground-up benchmarks should be used to establish baselines, measure impact, and institute A/B testing to catch bias early.
- Systematic trials and evidence are needed to reliably test impact and guide capital allocation. Large-scale risk assessment studies are necessary – as is collaboration on impact assessment, not just on deployment, even though that makes the task so much harder.
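The lending-bias risk and the call for ground-up benchmarks with A/B-style checks can be illustrated with a minimal sketch: the widely used “80% rule” disparate-impact ratio. Everything here – the simulated data, the function names, the threshold application – is an illustrative assumption, not a method discussed at the roundtable.

```python
# Illustrative sketch: flagging possible bias in lending decisions using the
# disparate-impact ratio (the "80% rule" heuristic). Data are hypothetical.

def approval_rate(decisions):
    """Fraction of positive decisions (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one (0..1)."""
    rates = sorted([approval_rate(group_a), approval_rate(group_b)])
    return rates[0] / rates[1] if rates[1] else 1.0

# Simulated decisions from a hypothetical lending model for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # approval rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # approval rate 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 → 0.50
if ratio < 0.8:
    print("below 80% threshold: potential bias flagged for review")
```

A check like this is a screening heuristic, not a definition of fairness – which is exactly why the group argued that benchmarks and thresholds should be set from the ground up, based on what progress means for the individuals affected.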
In conclusion, while participants believed in AI’s immense potential, many in the room were not confident that AI, in its current form, would be a driving force for positive change. A majority also believed there was a higher-than-average risk that AI would exacerbate existing divisions and inequalities, especially in response to current incentives. AI signifies a new and pioneering phase in technological evolution, but there are important lessons to draw from history to prepare for the future, and a human- and community-driven approach is needed to set incentives, collaborate on AI creation and deployment, and measure outcomes.
We hope this conversation provides stimulus for further engagement and collaboration on this important topic.