Nov. 26, 2025

Ethics of AI Management of Humans


AI managers are no longer science fiction.

They're already making decisions about human workers, and the recent evolution of agentic AI has shifted this from basic data analysis into sophisticated systems capable of reasoning and adapting independently. Our host, Carter Considine, breaks it down in this edition of Ethical Bytes.

A January 2025 McKinsey report shows that 92% of organizations intend to boost their AI spending within three years, with major players like Salesforce already embedding agentic AI into their platforms for direct customer management.

This transformation surfaces urgent ethical questions.

The empathy dilemma stands out first. After all, an AI manager can only execute whatever priorities its creators embed. When profit margins override worker welfare in the programming, the system optimizes accordingly, without hesitation.

Privacy threats present even greater challenges.

Effective people management by AI demands unprecedented volumes of personal information, monitoring everything from micro-expressions to vocal patterns. Roughly half of workers express concern about security vulnerabilities, and for good reason. Such data could fall into malicious hands or enable advertising that preys on people's emotional vulnerabilities.

Discrimination poses another ongoing obstacle.

AI systems can amplify existing prejudices from flawed training materials or misinterpret signals from neurodivergent workers and those with different cultural communication styles. Though properly designed AI might actually diminish human prejudice, fighting algorithmic discrimination demands continuous oversight, resources, and expertise that many companies will deprioritize.

AI managers have arrived, no question about it. Now it’s on us to hold organizations accountable for deploying them ethically.

 

Key Topics:

• AI Managers of Humans are Already Here (00:25)

• Is this Automation, or a Workplace Transformation? (01:19)

• Empathy and Responsibility in Management (03:22)

• Privacy and Cybersecurity (06:27)

• Bias and Discrimination (09:30)

• Wrap-Up and Next Steps (12:10)

 

 

More info, transcripts, and references can be found at ethical.fm

AI managers of humans are already here, and they’re getting smarter. However, there are still a lot of ethical questions to be answered about empathy in management, data privacy, and bias that could define future workplaces.

 

The rapid development of agentic AI over the last year has drastically changed how people are approaching AI in the workforce. A couple years ago, AI management was mostly data analysis and looking for trends. Now, AI is able to reason and make far more sophisticated decisions that are closer to human behavior – and it’s already being integrated across industries.

 

This creates a whole host of new ethical considerations for how AI and humans interact, which we’ll get into after some context on how AI is used in the workplace and where it might go.

 

Is this automation or a workplace transformation?

AI can automate some decision-making processes like task allocation, scheduling, and performance monitoring a lot faster and in finer detail than human managers. Offloading some of those tasks frees up human managers to focus on work that requires a more personal touch. But it might not stop there – AI may end up replacing human managers entirely in some companies, or at the very least, change what it means to be a human manager, especially with the recent development of agentic AI.

 

Agentic AI is technology that makes decisions and performs tasks without relying on human input or intervention. It adapts to inputs when making choices – a big step up from following a set of predetermined rules, like many computer programs do.
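
To make that distinction concrete, here is a minimal, purely hypothetical sketch in Python. The workers, thresholds, and scoring are invented for illustration – real agentic systems are far more complex, typically wrapping large language models in reasoning loops – but it shows the difference between applying one fixed rule and adapting a choice to observed context:

```python
# Hypothetical sketch: a fixed rule vs. an agent-style decision that adapts.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    open_tasks: int
    hours_worked_today: float

def rule_based_assign(workers):
    # Predetermined rule: always pick the worker with the fewest open tasks.
    return min(workers, key=lambda w: w.open_tasks)

def agentic_style_assign(workers, context):
    # Adapts to observed context instead of applying one fixed rule:
    # it scores options, penalizing fatigue and reacting to deadlines.
    def score(w):
        fatigue_penalty = 2.0 if w.hours_worked_today > 8 else 0.0
        rush_bonus = -1.0 if context.get("deadline_near") and w.open_tasks == 0 else 0.0
        return w.open_tasks + fatigue_penalty + rush_bonus
    return min(workers, key=score)

workers = [Worker("A", 2, 9.5), Worker("B", 3, 4.0)]
print(rule_based_assign(workers).name)   # "A" – fewest tasks, fatigue ignored
print(agentic_style_assign(workers, {"deadline_near": True}).name)  # "B" – adapts
```

The point is only that the second function’s choice shifts with the situation it observes, while the first applies the same rule no matter what.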

According to a January 2025 report by McKinsey, 92% of companies plan to increase their AI investments in the next three years – a sign that organizations expect AI to keep evolving and consider the changes worth investing in. Salesforce, a major maker of marketing software, is already working on incorporating agentic AI into its products to directly manage customer interactions.

 

Many employees are already willing to use AI in their jobs, even if their managers aren’t aware of it. According to the McKinsey study, three times as many employees are using gen AI for a third or more of their work as their bosses think, and more than 70% of employees believe AI will change 30% or more of their work in the next two years. However, simplifying work tasks with AI is a little different from having AI make managerial decisions about how you work and your performance – which is the first ethical question we’re going to address.

 

Empathy and responsibility in management

 

One of the key parts of managing employees well is understanding who they are and how they work best, and making decisions that account for the imperfection of being human. Having empathy for employees means putting yourself in their shoes and considering how your managerial decisions could make those shoes harder or easier to wear.

AI has been building the ability to mimic empathy in decision-making. In a 2023 study, AI answered 90% of questions correctly on the US Medical Licensing Examination, which includes questions about communication skills, ethics, empathy, and professionalism. An AI manager could make decisions that account for people’s feelings, and potentially remove the damage bad human managers do.

 

However, there are a couple of problems to be solved. Using AI managers opens up a lot of doors for exploitation, manipulation, and abuse of employees, because it removes a level of individuality and independence that human managers have. Replace those managers with an AI, and decisions are now being made for large numbers of people based on whatever the AI was programmed to prioritize. It removes the opportunity for individual morality in management. In other words, even though AI can make decisions with empathy in mind, it won’t do that if the people in charge of it told it to prioritize profit over health and safety. It also adds more distance between the people making those decisions and the people affected by them, making it easier for workers to be seen as a resource to optimize – which can get dehumanizing very quickly.
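
As a hedged illustration of how priorities get baked in, consider this toy sketch. Everything here – the pace options, the risk curve, the welfare weight – is invented, but it shows that “caring about workers” reduces to a number the deployers choose: set the welfare weight to zero and the optimizer happily picks the most punishing pace.

```python
# Hypothetical sketch: the designers' priorities live in the objective function.
def choose_pace(candidate_paces, profit_per_unit, injury_risk, welfare_weight):
    # Pick a work pace (units/hour) by maximizing a designer-chosen objective.
    def objective(pace):
        profit = profit_per_unit * pace
        expected_harm = injury_risk(pace)
        # welfare_weight is set by the people deploying the system, not the AI.
        return profit - welfare_weight * expected_harm
    return max(candidate_paces, key=objective)

risk = lambda pace: 0.02 * pace ** 2   # toy model: risk grows with pace
paces = [20, 30, 40]

print(choose_pace(paces, profit_per_unit=1.0, injury_risk=risk, welfare_weight=0.0))  # 40
print(choose_pace(paces, profit_per_unit=1.0, injury_risk=risk, welfare_weight=2.0))  # 20
```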

 

This has already been happening. Amazon has been using AI to oversee and manage its warehouse workers, along with other tasks like deciding which orders get packed first, routing packages through its warehouses, and more. It has drastically improved efficiency – but the push for efficiency has also repeatedly endangered warehouse workers for years, despite multiple investigations, OSHA violations, federal fines, and public backlash. A 2023 survey of Amazon warehouse workers found that over half of the people working there for at least three years had been physically injured, and more than half of the survey respondents reported symptoms of burnout. More than two thirds took unpaid time off to recover from exhaustion or pain from their job. In 2022, data submitted to OSHA by Amazon revealed that its warehouse workers were seriously injured at more than twice the rate of similar workplaces.

AI management of people could have some major advantages, and it’s not going anywhere – so the ethical dilemma becomes: how do we use it while keeping workers safe and holding the people who exploit employees accountable?

 

Privacy and cybersecurity

 

There’s another major ethical concern looming on the horizon – for AI managers to make decisions about people, they might need to collect a lot of very personal data about employees, on a scale that has never been seen before, and there are some serious privacy issues with this.

 

When people talk to each other in person, they’re often unconsciously noticing a lot of tiny details – posture, eye contact, facial expression, tone of voice, and other things about people’s bodies and how they present themselves – that give clues about how someone is feeling, their health, their beliefs, likes and dislikes, and more. If someone is tired, they might slouch a little more. If you mention something someone dislikes, they might make a face of disgust.

 

Reading people can be an important part of empathy and social connection. Everyone has a slightly different baseline, and getting to know someone means recognizing when they move away from that baseline, however it’s communicated. But it’s a mistake to think noticing someone’s body language is enough to draw conclusions about their internal world. For example, you might notice someone seems uncomfortable while talking about a topic – but they might just be uncomfortable because they need to use the bathroom. AI might not take that nuance into account, making decisions based on emotions it detects that aren’t actually related to the issue at hand. There are also major questions about what happens to the data it collects.
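
One way designers could hedge against this failure mode – sketched below with invented names and thresholds, not any real product’s behavior – is to treat inferred emotion as weak evidence and defer to a human whenever the signal is uncertain or unrelated to the topic at hand:

```python
# Hypothetical sketch: treat inferred emotion as weak evidence, not fact.
def decide_followup(detected_emotion, confidence, topic_related, threshold=0.9):
    # detected_emotion: label from some affect model, e.g. "discomfort"
    # confidence: the model's confidence in that label, 0..1
    # topic_related: whether context actually ties the emotion to the work topic
    if confidence < threshold or not topic_related:
        # The discomfort might have nothing to do with the conversation
        # (the bathroom example above) – hand off to a human instead.
        return "defer_to_human"
    return "flag_for_human_review:" + detected_emotion

print(decide_followup("discomfort", confidence=0.95, topic_related=False))
# -> defer_to_human
```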

 

According to the 2025 McKinsey report, about half of workplace employees are concerned about privacy and cybersecurity vulnerabilities from AI in the workplace. There has been a rapid increase in the number and scale of cybersecurity incidents over the last five years, making people’s personal data accessible to bad actors. It’s clearly a growing problem, and AI managers could mean a whole new level of data leaking onto the black market.

In addition to that, it’s already commonplace for companies to collect a lot of personal data about people for hyper-targeted advertisements, including demographic data, interests, income, social connections, purchases, location, hobbies, and more. If marketing gained access to the level of personal information AI managers might be using, it could unleash a targeted-advertising hell – imagine ads targeted not just on all the historical data collected about you, but on how you’re feeling at the moment.

 

Advertisers targeting people who are tired, desperate, or lonely are more likely to be exploitative, pushing people deeper into negative spirals. Alternatively, the same signals could be used to advertise supportive services from nonprofits, community organizations, and others. Whether that type of intervention would be appropriate or a privacy violation is something we’ll have to figure out.

 

Bias and discrimination

 

AI managers making decisions about people might be biased – whether from learning from a biased dataset, an incomplete dataset, or simply because of the way AI works: through association, not causation. Associating statistical likelihoods with race, for example, might lead the AI to make racist decisions instead of understanding the impact of discrimination on a marginalized group.

 

Bias has to be continuously monitored and counteracted in AI – it creeps back in, even when the AI has been trained to account for it. It currently requires people to actively look for it and think of ways to mitigate its effects in AI tools, which requires funding, time, and expertise. If the people in charge of programming an AI manager do not think bias is worth investing resources in, the AI will end up making discriminatory decisions about the workers it oversees, worsening already severe economic and social disparities.
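
What does “actively looking for bias” look like in practice? Here is a minimal, hypothetical audit in the spirit of the four-fifths (disparate-impact) rule of thumb – the decision log and group labels are invented, and a real audit would need far more statistical care:

```python
# Hypothetical sketch: a recurring audit of favorable-decision rates by group,
# in the spirit of the "four-fifths" disparate-impact rule.
from collections import defaultdict

def selection_rates(decisions):
    # decisions: list of (group_label, got_favorable_outcome) pairs
    favorable, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        favorable[group] += int(ok)
    return {g: favorable[g] / total[g] for g in total}

def impact_ratio_alert(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Flag any group whose rate falls below threshold * the best-treated group's.
    return [g for g, r in rates.items() if r < threshold * best]

log = ([("group_a", True)] * 80 + [("group_a", False)] * 20
       + [("group_b", True)] * 50 + [("group_b", False)] * 50)

print(impact_ratio_alert(log))  # ['group_b'] – 0.50 < 0.8 * 0.80, flag for review
```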

For example, if an AI manager is trained to recognize facial expressions without accounting for bias, it may read people who are neurodivergent, people with different cultural expressions, and people with some disabilities as confused or upset when they are not. This is still a problem with human managers too, but building a personal connection with someone helps to overcome it – a solution that doesn’t work for an AI manager.

 

Alternatively, a trained AI manager might make decisions that are less biased than some human managers, who might not always be aware of or address their internal biases. Much like training AI, combating internal biases is something that has to be revisited, and some people may not have access to the resources and time to do that if they’re straining to keep food on the table or don’t know how to find good information. It’s also worth bearing in mind that some people work very hard to spread and enforce bias, making it harder for people with limited resources to fight it.

 

Having a well-trained AI manager could mean a reduction in workplace bias, removing some economic barriers for marginalized people and reducing the impact of people who encourage misinformation and discrimination. It could also help human managers by taking on tasks suited for AI, leaving human managers with more time to pursue training and address situations that require a more personal touch.

 

Summary

 

We’ve talked about some of the ways AI managers could affect the workplace, and some of the ethical problems people are working to solve: empathy and responsibility for workplace priorities, new data privacy concerns tied to the rise in cybersecurity incidents, and how bias and discrimination in the workplace could get better or worse. There are both negative and positive possibilities, and a lot of gray areas to be explored – we’ve only covered a handful of topics! Do you think AI managers would do well in your workplace? Are there concerns you have about having an AI manager? How do we move forward ethically into the future of work? Talk to your colleagues and friends – what insights can you come up with?