“Things are getting a little bit scary” - Workday’s take on responsible AI
Ethics, environment, and regulation - Workday outlines the considerations behind building AI

There’s an undeniable hype and excitement around AI and its potential, with over $1 trillion invested in the technology so far - but the technology is not without its controversies.
TechRadar Pro joined Workday in Dublin to discuss the company’s “responsible AI” (RAI) practices and what that means for consumers and businesses.
“I think if you hold true to good core practices of thinking about resources and being responsible, you just start from the beginning,” explains Kathy Pham, Vice President of AI at Workday.
A dual approach
AI isn’t as simple as just helping workers with mundane tasks or acting as a personal assistant, and the Cambridge Analytica scandal taught us that algorithms can have serious and devastating real-world consequences.
This is why Workday has made responsible AI a central policy, constantly assessing its impact - from energy consumption to gender bias - through two mechanisms. The first is a “responsible AI council”.
The council brings together “all of our executives across technology, product, [our] diversity, equity, and inclusion leader, our chief legal officer, who get together and they talk about every AI use case that we think about at the high level,” Pham explains.
This board meets regularly to review, approve, and advise on new aspects of governance and development, aiming to ensure a “complete understanding of the technical, legal, and compliance details needed for governance”, with members serving as RAI ambassadors within their respective teams and as “the local go-to resource for guidance and support.”
To support this work at every stage of development, Workday’s Chief RAI Officer, Kelly Trindle, has built a ‘programme of champions’ who sit on all product teams.
This means that any ideas or prototypes are considered within the context of responsible practices, with “topics ranging from responsibility to ethics to environmental impacts” all examined even as an idea is in its infancy.
The framework the organisation uses is the National Institute of Standards and Technology (NIST) AI Risk Management Framework - a voluntary set of guidelines used to “incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”
“The Council and the champions, we've leaned hard into using the NIST Risk Management Framework and I bring that up because we helped contribute [to] the building of it in government and they work with a ton of other people too, and then we took it and used it internally,” Pham adds.
Pham points to this as an example of public-private partnerships pushing each other, and keeping a balance between innovation and regulation without compromising on safety or progress.
“I think in the history of the world, regulation has rarely stifled innovation, if we're being honest, because companies tend to move really fast,” she assures.
“You've seen companies [..] sometimes even develop stricter privacy and security, and even stricter than what government have even asked for, because they want to either be competitive or their field just requires a deeper level. So I think there's a dual purpose to it that's really powerful.”
The scary stuff
For anyone familiar with AI, there are a few elephants in the room here. The first is its environmental impact.
Data centers used to power AI are causing huge sustainability problems, as they use tremendous amounts of water and energy to run, already accounting for an estimated 4% of carbon emissions worldwide.
Workday approaches this by “planning ahead”, predicting energy usage and weighing up the benefits.
“So we think about what is the consumption power if we go with a cloud vendor for our data systems, what impacts do they have if we need certain data systems to train our models,” Pham explains.
By assessing needs from the start, Workday can make smarter decisions on what resources to use and for what purposes.
“So we find the best possible tool with the right amount of energy consumption and resources to solve it versus wasting a lot of resources to solve something that isn't there right now,” Pham adds.
Alongside this sit concerns about bias being reflected in GenAI models. These models learn from and replicate behaviours observed in their training datasets - human information and interactions - and those humans, of course, carry their own prejudices and discriminations.
This gets infinitely more complicated when you have, for example, an AI agent handling HR enquiries.
At this stage, personal or complicated enquiries are more than likely handled by human HR workers, but as AI evolves, it’s not hard to imagine some companies cutting staff costs and deploying HR AI agents, or shirking responsibility by leaving these agents to make decisions.
“I think our reference point is always humans,” says Workday Chair of Technology & Society, Professor Taha Yasseri.
“Humans are not necessarily the best benchmark,” he admits. “We are not super happy with what we do. But at least we can make sure that the machines are as bad as us, or maybe even better, but definitely not worse. So the accountability and responsibility remains with humans. I don't have any sort of out of the box philosophical thinking here.”
Yasseri admits that recent developments in the Trump administration mean that when it comes to policies on diversity and inclusion, and funding for research on its impact, things are “getting a little bit scary,” but that Workday will “find solutions because [we] want to do the research we think is important.”
Ellen has been writing for almost four years, with a focus on post-COVID policy, while studying for a BA in Politics and International Relations at the University of Cardiff, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked on Future Publishing’s MVC content team, working with merchants and retailers to upload content.