Dec. 18, 2024

Beyond Bias: The Future of AI Ethics in a Post-Woke World


What does the rise and fall of "wokeness" mean for AI ethics, and how will it shape the future of AI technology?


Our host, Carter Considine, explores the historical roots of AI ethics, with a focus on bias in machine learning algorithms and the emphasis on diversity, equity, and inclusion (DEI) frameworks. DEI has dominated AI ethics, especially through concerns about racial and gender bias in AI systems, and Carter questions whether this approach will remain central as societal and economic dynamics shift.


Two main schools of thought have emerged within AI ethics: one focusing on existential risks posed by artificial general intelligence (AGI), and another concerned with algorithmic bias and its social consequences.


Today, we’re at a turning point of sorts in the evolving landscape of AI. We could call it a "Reformation," one in which wokeness, once revolutionary, is now seen as increasingly outdated. As DEI-driven frameworks become less relevant, AI ethics will likely transition toward a more individualized, business-centric model that prioritizes technical solutions over abstract principles.


Looking ahead, moral quandaries around AI will probably move away from ideological frameworks toward a more practical, value-driven methodology. For users, this means a great deal more personalization, giving us more control over how AI systems behave, and making transparency a central concern. Companies will be under pressure to demonstrate real-world value, aligning AI practices with measurable outcomes and business goals.


As the technology evolves, we’ll see an emphasis on technical competence and individual autonomy while discarding the reliance on broad, one-size-fits-all ethical standards. Ultimately, the survival of AI ethics will depend on its ability to adapt to real-world needs, shifting from theory to actionable, transparent, and user-focused practices.


Key Topics:

  • The History of AI Ethics (00:00)
  • AI Safety and AI Ethics (05:51)
  • What’s Next? The “Third Way” (08:10)
  • Technical Solutions with Deep Philosophical Understanding (10:03)
  • Conclusion: The Survival of AI Ethics (12:52)


More info can be found at ethical.fm

The ‘woke’ movement has been described as a new Reformation: a wave of religious fanaticism sweeping away the old world and replacing it with a new vision of the good. But with inflation rising, DEI spending dropping, and, the final knife, the rise of the Trump administration, the world has experienced a true vibe shift: woke ideology isn't as universally accepted as was once believed.


Some argue that ‘wokeness’ is itself the mainstream religion that needs to be replaced, rather than the reformer, meaning whatever comes after wokeness will be the true Reformation. "‘Wokeness’ is, in fact, all that remains of a moral, material, and spiritual worldview that was once revolutionary but has grown calcified, decadent, and increasingly unfit for purpose," writes Mary Harrington, a self-declared reactionary feminist. "And in response, a new Reformation really is coming: one that will sweep away the socially destructive and alienating anti-theology of ‘wokeness’, with the clarifying light of both scientific and also spiritual truth."


Since its inception, the core of AI ethics has been defined by the DEI agenda: in particular, the convictions that racial and sexual bias against minorities is among the worst ethical violations an algorithm can commit, and that governance is the best way to ensure compliant AI. If ‘woke’ is not the new Reformation, what, then, will it mean to create a "good" algorithm? What will incentivizing the building of "good" models look like? The field of AI ethics will transform to keep itself alive; this episode explores the history of AI ethics and the challenges it has faced in adoption, as well as predictions on how the field will adapt in order to survive.

The History of AI Ethics 

The re-emergence of AI through Machine Learning

Philosophical discussion of the ethics of autonomous machines largely began in 1954, when Martin Heidegger published The Question Concerning Technology, inspired in part by the use of autonomous missiles in WWII. However, the field as we know it today emerged much later, shortly after the second summer of AI began in 2012.


Andrew Ng, a computer scientist then working at Google (and later the founder of DeepLearning.AI), trained a neural network across 16,000 CPU cores that learned to recognize cats simply by watching YouTube videos, without ever being explicitly told what a "cat" is. In the same year, the ImageNet image-recognition challenge was blown away by a deep-learning algorithm that took first place by a wide margin.

The Birth of Modern AI Ethics as Bias

A few years later, in 2015, an incident occurred in which Google Photos labeled photos of two Black people as gorillas. This was the first striking example of an "unethical" machine learning algorithm, and it proved a fundamental truth: AI models can have ethical values built into them. Technology is not neutral. However, instead of considering the other ways machines can produce ethical and unethical behavior, bias became the mainstream face of unethical AI.


More examples of unethical algorithms emerged, still centered on bias. Amazon, for instance, began building a recruiting tool in 2014 that preferred male candidates over women for software engineering positions, reflecting the male dominance of the tech industry captured in the historical data used to train the algorithm; the tool was reportedly scrapped in 2018.
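To make the mechanism concrete, here is a minimal, hypothetical sketch (synthetic data and invented numbers, not Amazon's actual system) of how a model trained on skewed historical hiring outcomes reproduces that skew:

```python
# Hypothetical sketch: a classifier trained on skewed historical hiring
# data learns to prefer one group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)                 # true qualification signal
is_male = rng.integers(0, 2, size=n)       # 1 = male, 0 = female
# Historical decisions favored men regardless of skill:
hired = (skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, is_male]), hired)

# Two equally skilled candidates who differ only in gender:
p_male, p_female = model.predict_proba([[0.5, 1], [0.5, 0]])[:, 1]
print(f"P(hire | male)   = {p_male:.2f}")   # high
print(f"P(hire | female) = {p_female:.2f}")  # markedly lower
```

Note that simply dropping the gender column would not necessarily fix this: a model can reconstruct the attribute from correlated features, which is reportedly how Amazon's tool came to penalize résumés containing the word "women's."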

Corporations: Ethical Frameworks

When algorithms were discovered to be biased, corporations already committed to AI needed a quick fix. To tackle the problem, the mainstream relied on an existing, functioning mechanism: ethical frameworks, a governance tool used by compliance officers to help companies avoid lawsuits and fines. Corporations have implemented such frameworks since the 1980s, especially for anything DEI-related.


In practice, Responsible or Ethical AI frameworks are a checklist that someone on the development team, usually a product manager, works through during development. Microsoft shared its Responsible AI Standard in 2022, hoping to share its learnings, get feedback, and open a discussion about building better AI systems. These checklists are finite: they must be established before a product is built and constantly updated afterward. AI, however, is probabilistic; it is developed without anyone explicitly programming how it should behave. Frameworks and AI are essentially incompatible: the deterministic nature of frameworks will eventually become a roadblock to AI development, no matter how quickly the compliance rules are updated.
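To illustrate the tension, a framework checklist is, in code terms, a static and deterministic gate. The sketch below is purely illustrative; the item names are invented for this example, not taken from Microsoft's standard:

```python
# Illustrative sketch of an ethical-AI framework as a release gate.
# The checklist is finite and fixed before the product ships, which is
# exactly what makes it brittle for probabilistic systems.
RESPONSIBLE_AI_CHECKLIST = {
    "training_data_documented": True,
    "bias_audit_completed": True,
    "human_oversight_defined": True,
    "all_failure_modes_enumerated": False,  # not exhaustively possible for AI
}

def release_gate(checklist: dict[str, bool]) -> bool:
    """Deterministic compliance check: every box must be ticked."""
    missing = [item for item, done in checklist.items() if not done]
    if missing:
        print("Release blocked. Incomplete items:", ", ".join(missing))
        return False
    return True

release_gate(RESPONSIBLE_AI_CHECKLIST)
```

The gate answers yes or no against a fixed list, while a model's behavior is a distribution over outputs that no finite list of boxes can fully anticipate.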

AI Safety and AI Ethics

During this time, academia saw the emergence of two rival branches of AI ethics, both trying to tackle the most important ethical questions related to AI: AI safety, which focuses on existential risk and AGI, and AI ethics, which focuses on concrete issues like bias. AI safety originated at the University of California, Berkeley, where Nick Bostrom’s Superintelligence took the campus by storm. The field has deep affiliations with Effective Altruism (EA): EA’s career advice center, 80,000 Hours, lists "AI safety technical research" and "shaping future governance of AI" as the top two recommended careers for EAs to go into, and the billionaire EA class funds initiatives to stop an AGI apocalypse. EAs claim that AGI is likely inevitable and that their goal is to make it beneficial to humanity. For some transparency, the author of the NYT article is Timnit Gebru, one of the leading figures of the rival branch of AI ethics, who claims that bias against minorities is the most important threat AI poses to humanity.


The other branch, AI ethics, fears not an end-of-the-world existential catastrophe but a dystopia in which humanity’s worst instincts, particularly about race and sex, are embedded in and amplified by machines: in other words, algorithmic white oppression. Proponents of these concerns include Rumman Chowdhury, Meredith Broussard, Cathy O’Neil, Safiya Umoja Noble, and Joy Buolamwini. This branch sometimes takes up other liberal values as well, and it has mainly found popularity on the East Coast, among academics from traditional elite schools like Cornell Tech, NYU, and Fordham.


(As a side note, neither branch claims to lead in AI security. The likely reason is that AI security is less interdisciplinary and more narrowly technical, and so it remains strictly within the domain of computer science.)

What’s Next? The “Third Way”

As DEI declines and inflation rises, more pressure will be put on both AI safety and AI ethics to focus on value for businesses. AI ethics as a whole will need to target real, specific problems rather than ones predicted for the far future, and it will no longer be able to rely on companies paying lip service to empty liberal values.


The Protestant Reformation focused on scientific and spiritual truth, with the light of reason shining through individuals. In a similar way, the next version of AI ethics will focus on individual autonomy, technical competence, and values connected to specific business issues.

Focus on the individual and dissent

Under the previous umbrella of AI safety and ethics, the great disadvantage of the solutions on offer was the lack of individual choice. Only one value system reigned, and, as we have seen with the decline of ‘wokeness’, the inability of AI systems to handle multiple viewpoints not only denies a fundamental truth about the world, that everyone holds a different set of ethical values, but also forces users into a single viewpoint. There are countless stories of users having to jailbreak LLMs that refuse basic prompts, reports of ideological bias within models, and public frustration at the mistakes those biases cause.


The new paradigm of AI ethics will emphasize personalizing interactions by giving individuals the autonomy to decide how they want models to behave. Although certain demographics hold similar values, there is no one-size-fits-all set of ethical values, which will push AI companies toward ultra-personalization.
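One plausible shape for this ultra-personalization (our own assumption, not any vendor's actual API) is letting each user declare their value settings and compiling them into the instructions a model receives:

```python
# Hypothetical sketch: per-user ethical preferences compiled into system
# instructions, instead of one hard-coded value system for everyone.
from dataclasses import dataclass

@dataclass
class UserValues:
    refuse_topics: list[str]   # what this user wants the model to decline
    tone: str                  # e.g., "blunt" or "diplomatic"
    explain_refusals: bool     # expose the model's rationale or not

def build_system_prompt(v: UserValues) -> str:
    lines = [
        f"Adopt a {v.tone} tone.",
        "Decline requests about: " + (", ".join(v.refuse_topics) or "nothing"),
    ]
    if v.explain_refusals:
        lines.append("Explain the reasoning behind any refusal.")
    return "\n".join(lines)

# Two users, two value systems, one underlying model:
print(build_system_prompt(UserValues(["gambling"], "blunt", True)))
print(build_system_prompt(UserValues([], "diplomatic", False)))
```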

Emphasis on technical solutions with deep philosophical understanding

Responsible AI has become a source of income for many people with humanities backgrounds looking for a way to apply their expertise to a lucrative market. Having transitioned from non-technical fields, many academics and Responsible AI practitioners have never written a line of code.


Normative evaluations of computer science are difficult without an understanding of how machines work. And with inflation rising, AI ethics must justify itself to companies in terms of ROI (return on investment). Business results that depend on an individual making a meaningful impact through technology are very difficult to ensure if that individual does not understand how the technology works. Practitioners need a theory of how their ideas might actually alter the behavior of a product. Until recently, AI ethics relied too heavily on abstract principles and frameworks; the market will force it to become more focused. The field has already been moving toward real implementation, but the shift will now be much faster and more brutal: practitioners who cannot provide value will likely be laid off.
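As a concrete (and deliberately simplified) illustration of what implementation-focused work looks like, one common formalization of fairness, demographic parity, turns a principle into a number a team can track:

```python
# Minimal sketch: demographic parity gap, one way to turn "fairness"
# from an abstract principle into a measurable quantity.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between groups 1 and 0."""
    return abs(decisions[group == 1].mean() - decisions[group == 0].mean())

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # e.g., hire / don't hire
group     = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # protected attribute
print(demographic_parity_gap(decisions, group))  # 0.5
```

Whether demographic parity is even the right metric for a given product is itself a philosophical question, which is precisely why technical competence and philosophical depth need each other.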

No more lip service to empty liberal values

AI ethics has always been centered around principles: responsibility, transparency, fairness, accountability, etc. But as the field becomes more focused, the definition of these values will become more precise, especially in implementation. 

One good example is the value of transparency. The more transparent companies are with their AI models and training data, the more exposed they become to losing their intellectual-property moat. Yet there is growing pressure to reveal even training datasets to the public, especially from big players like the Mozilla Foundation: "When it comes to building trustworthy AI products, better is possible. We need to know the totality of how AI is trained so we understand its risks and limitations – and, most importantly, what needs to be improved to make it trustworthy and helpful for everyone on the internet." As the new paradigm of AI ethics arises, companies will begin to see value in describing to users specifically how they define values and train models with them. We will gain more clarity on what exactly "transparent" means in AI: the datasets, the code, the mode of communication between users and companies, and so on.
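One way that precision could materialize is transparency as a machine-readable artifact rather than a slogan; the minimal "model card" below is a hypothetical sketch, with every field name and value invented for illustration:

```python
# Hypothetical sketch: transparency as a concrete, machine-readable
# artifact rather than an abstract principle. All values are invented.
import json

model_card = {
    "model": "example-assistant-v1",
    "training_data": {
        "sources": ["licensed text corpora", "opt-in user feedback"],
        "cutoff": "2024-06",
    },
    "known_limitations": ["underrepresents non-English dialects"],
    "user_adjustable_values": ["tone", "refusal topics"],
}

print(json.dumps(model_card, indent=2))
```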

Conclusion 

In the coming years, AI ethics will shift from abstract principles and frameworks to a more dynamic, individualized, and implementation-driven approach. The next evolution will prioritize empowering users with autonomy, pairing technical precision with philosophical depth, and delivering measurable business value. As ideological consensus fades, ultra-personalization and transparency will replace one-size-fits-all solutions.


Ultimately, the survival of AI ethics depends on its ability to adapt to real-world demands, shedding ideologies in favor of practical, user-centered, and innovative practices. The era of lip service is over—what comes next must be actionable, accountable, and agile.