How might integrating AI enhance effectiveness rather than just efficiency?
Building your talent bench
Hear from the experts
What do AI development and jazz have in common?
Keeping artificial intelligence real
Lareina Yee: You said something that will probably surprise a lot of people, which is that government regulations are moving faster than we’ve ever seen. What are the four or five questions we need to look at beyond what’s in the existing regulation?
Navrina Singh: I think it keeps coming back to some common ground principles, which look beyond regulatory frameworks to that sort of trust quotient you need to build for your enterprise. And I would say, first, that it involves a really deep understanding of where and how AI is being used within your enterprise or your organization. Taking stock of your artificial intelligence applications and creating a registry of where these systems are actually used is a great first step and a common ground principle we are finding across all our organizations.
Lareina Yee: I love that you’ve taken it from a principle to something really concrete. What are the second, third, and fourth questions?
Navrina Singh: Once you’ve taken stock of where AI is being used, the second question is, “How are you understanding and measuring its risk? What benchmarks and evaluations that align with your company values do you need to be testing your systems against?” And that alignment is really the second core piece.
The third question is, “Do you have the right people to be accountable to these evaluations and these alignments? And who is at the table?”
And then once you have that AI registry, alignment on what “good” looks like, and a set of great stakeholders, the last question is, “Are you able to, in a standardized way, scale this with the right infrastructure and tooling?” And this is where a combination of your large language model [LLM] ops tools, your MLOps [machine learning operations] tools, and your governance, risk, and compliance tools comes into play.
At the Edge podcast excerpt

Navrina Singh
Founder and CEO of Credo AI
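
Singh’s first principle, taking stock of where AI is used, can be made concrete as a simple inventory. The sketch below, in Python, is one illustrative way to structure such a registry; the field names, risk tiers, and example entries are assumptions for illustration, not the schema of any particular governance tool.

```python
from dataclasses import dataclass, field

# Illustrative risk tiers; real frameworks define their own categories.
RISK_TIERS = ("minimal", "limited", "high")

@dataclass
class AISystem:
    """One entry in an enterprise AI registry (fields are illustrative assumptions)."""
    name: str
    business_unit: str
    use_case: str
    owner: str                       # accountable stakeholder (Singh's third question)
    risk_tier: str = "minimal"       # answer to "how are you measuring its risk?"
    evaluations: list = field(default_factory=list)  # benchmarks the system is tested against

@dataclass
class AIRegistry:
    """Registry of where AI systems are actually used across the organization."""
    systems: list = field(default_factory=list)

    def register(self, system: AISystem) -> None:
        if system.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {system.risk_tier}")
        self.systems.append(system)

    def high_risk(self) -> list:
        """Systems that warrant the closest review by accountable stakeholders."""
        return [s for s in self.systems if s.risk_tier == "high"]

# Example usage with a hypothetical system
registry = AIRegistry()
registry.register(AISystem(
    name="resume-screener",
    business_unit="HR",
    use_case="candidate shortlisting",
    owner="VP, People Analytics",
    risk_tier="high",
    evaluations=["demographic parity check", "explainability review"],
))
print([s.name for s in registry.high_risk()])
```

Even a lightweight structure like this gives the registry, the risk measurement, and the accountable owner a single place to live before any scaling tools are layered on top.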
Use of benchmarks
Benchmarking can increase AI safety by assessing the fairness, accountability, transparency, and broader societal impact of companies’ AI systems.
While benchmarks have significant potential, our research shows that fewer than 40 percent of leaders use them. Of those who use ethical benchmarks, only 17 percent consider them important (exhibit).
- Do these numbers surprise or concern you?
- Is this true of your organization?
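
To make “assessing fairness” less abstract, here is a minimal sketch of one common check, the demographic parity difference between two groups. The predictions, group labels, and the 0.10 review threshold are illustrative assumptions, not a standard prescribed by this report.

```python
# Minimal fairness benchmark: demographic parity difference between two groups.

def selection_rate(predictions, groups, group):
    """Share of positive predictions for one group."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_difference(predictions, groups):
    """Absolute gap in selection rates between the two groups present in `groups`."""
    g1, g2 = sorted(set(groups))
    return abs(selection_rate(predictions, groups, g1)
               - selection_rate(predictions, groups, g2))

# Toy example: 1 = the model recommends the candidate, 0 = it does not.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")
if gap > 0.10:  # a threshold a team might set in line with its own values
    print("flag for review by accountable stakeholders")
```

Checks like this are what the benchmarks in the exhibit operationalize: a measurable gap, a threshold aligned with company values, and a named owner who acts when the threshold is crossed.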
How to lead responsibly
Brief recap of what we’ve learned
