AI in organisations: Is there hope for a human-centric future?
Rudolphina: Mr. Puranam, why are you interested in organisations?
Phanish Puranam: There is nothing that we accomplish that we do alone. We do it together in groups. When I think of an organisation, I don't necessarily think of a company or a government agency. It is any group with a goal. Most innovations occur within this social structure. Organisations are our oldest general-purpose technology, and that is what makes them so exciting.
Over the last decade or so, I have been struck by the fact that organisations serve two very different purposes in society. For one, they are a social tool to get things done. But they are also the natural habitat for our species, because we evolved to live and work in groups. This duality makes organisations fragile in the sense that many innovations aimed at improving their efficiency can undermine their role as a human habitat. This tension is fascinating and at the heart of my current research.
Rudolphina: What does a human-centred organisation look like?
Phanish Puranam: Three non-monetary drivers make people feel motivated, engaged, and happy when they work in groups: competence, autonomy, and a sense of connection with others, which we call relatedness. We know from social psychology that these drivers are crucial and conserved across cultures.
Reading tip: How to build human-centric organisations
Phanish Puranam, internationally renowned expert in organisational strategy, sheds light on the role of organisations as places of human connection, meaning and shared purpose. The book "Re-Humanize: How to Build Human-Centric Organizations in the Age of Algorithms" outlines a vision for steering the digital transformation in organisations in ways that serve both their goals and their people.
Rudolphina: In your latest book "Re-Humanize: How to Build Human-Centric Organizations in the Age of Algorithms," you posit that a successful organisation must be both goal-oriented and human-centred. What challenges does AI create in achieving this balance?
Phanish Puranam: The rapid developments in AI seem to exacerbate the tension I described. The temptation lies in using algorithms solely to increase the organisation's efficiency: operating with a small number of people to accomplish impressive things. However, those few people might no longer feel comfortable in these environments. For instance, when AI is used for task division, work can become very modular. Everyone would work on their own without much interaction, whilst the algorithm coordinates between them. Some software developers, for example, are already often working like that, but it may not be for everyone.
Another issue is de-skilling: the better the algorithms get, the less skilled a worker needs to be. The problem with both situations is that they disrupt the organisation's human-centricity by undermining competence and relatedness.
Another threat is surveillance, which can be very subtle, such as tracing when you log in or when you take a coffee break. This robs you of autonomy and agency. The more predictable you are, the less agency you have. Many applications of technology can improve short-term productivity but harm the key motivational drivers of your workers.
"There is nothing that we accomplish that we do alone." – Phanish Puranam
Rudolphina: Conversely, what role could AI play in promoting a human-centred workplace?
Phanish Puranam: There are indeed ways to utilise technology to enhance autonomy and competence rather than diminish them. For example, by improving the quality of deliberation among people. Consider this commonplace situation: even though we speak the same language, communication is not perfect. We have different mental models and happen to talk past each other. Present-day AI could already detect this, step in, and clarify that we were talking about the same thing all along. By pointing out flaws in communication, technology can improve productivity while also enhancing human-centricity, as it preserves relatedness.
Similarly, software can upskill employees, making them more competent. In a field experiment on identifying cancerous tissue in histopathology scans, we demonstrated that participants who received AI assistance on an initial task performed better on subsequent tasks without support. This result points to a learning effect from interacting with AI. Of course, the benefits depend on the software's design. If it is built primarily for mental offloading, such as using ChatGPT to think for you, it de-skills the user instead. In sum, the effect that technology has on an organisation's personnel depends greatly on how we choose to use it.
Rudolphina: What are some typical mistakes that leaders have made when implementing AI in their organisations?
Phanish Puranam: The first is deciding that the tools are not good enough yet and choosing to wait. This error of omission is problematic because later you cannot easily catch up on the learning curve. It is better to start learning now.
The second mistake is using AI just for the sake of using it. Instead of hype, you need careful consideration as to where it fits in your workflow. The third is to overestimate AI capabilities, especially with the notion of "agentic AI" and the idea that it could automate an entire business process. We are not there yet.
The misconception is that an existing process can be completely automated as is. Instead, use the opportunity to re-imagine the work process and think about the optimal division of labour between people and AI, whilst preserving human competence, autonomy and relatedness.
"...the effect that technology has on an organisation's personnel depends greatly on how we choose to use it." – Phanish Puranam
Rudolphina: The University of Vienna is a massive organisation with more than 10,000 employees and about 90,000 students. How can universities navigate the challenge of implementing AI in meaningful ways?
Phanish Puranam: One way to look at this is by considering three time horizons: two years, five years, ten years. We can predict reasonably well what will happen in two years, less so in five, and at ten it is mostly guesswork. For each horizon, the crucial question is: what is the optimal division of labour between humans and AI? This applies to every activity, such as admissions, course scheduling, teaching, grading, research support and grant writing.
Once we have that picture, we work backwards: what technology and human skills are needed? How do we train people to work with AI? Take hiring or admissions to grad schools. AI could automate much of the early work, such as reading CVs and interviewing, leaving humans for the final decisions.
Then comes the bigger question: what about people whose roles are automated? Can they be redeployed into more human-skill intensive tasks, like student counselling?
Being able to shift to community-serving tasks instead of putting up with admin work can be beneficial. Also, staff could be trained to become sophisticated AI users, possibly discovering new ways to use the technology. Downscaling staff should be the very last resort, once all other options are exhausted.
Rudolphina: Sometimes, large organisations are slow to adopt innovation.
Phanish Puranam: Universities tend to commit the error of omission because they are optimised for stability. They do not want to change the rules in ways that are inequitable or lower their standards. This is a strength.
Now, I think the challenge for universities is to find pockets in which to experiment and learn new ways of working while preserving the stability of the older system. That balance is what university leadership has to strive towards to meet its goals: to educate, to conduct excellent research and to foster critical thinking.
Rudolphina: Thanks for stopping by for the interview!
The Oskar Morgenstern Medal
The Faculty of Business, Economics and Statistics of the University of Vienna awards the Oskar Morgenstern Medal every two years for distinguished academic work in the field. Former laureates include Thomas Piketty and the Nobel memorial prize laureates Roger Myerson, Robert F. Engle and Christopher A. Pissarides. The award is given in honour of the economist Oskar Morgenstern, who was a professor at the University of Vienna until 1938 and founded the research field of game theory.
Puranam's current work focuses on the different ways in which intelligent algorithms relate to organisations: as tools, as team-mates, and as templates for organising (e.g. blockchain, metaverse).
- More about Phanish Puranam
- Podcast with Phanish Puranam: Why organisations need human-centric AI to survive
- Reading tip: The Business Case for Human-Centric Organizing: Implications for Artificial Intelligence (AI) adoption
- Faculty of Business, Economics and Statistics
- The Oskar Morgenstern Medal