
The debate on AI ignores marginalised communities who stand to benefit most. Instead, AI can empower them, fight injustice, and level the playing field.

By offering personalised, responsive, and supportive tools, AI can help individuals with disabilities achieve greater independence and inclusion. Photo: Elizabeth Woolner, Unsplash (Unsplash license)


The debate around AI’s impact often resembles a dystopian movie — job losses, privacy invasions, and superintelligent machines taking over. But amid these anxieties, a crucial voice is missing: that of marginalised communities.

These communities, facing systemic oppression and limited access to resources, are rarely part of the conversation. They are the ones who would stand to benefit most from responsible AI development.

AI as a leveller

AI can empower the marginalised by offering tools and opportunities that bypass traditional barriers. Marginalisation due to religion, caste, class, gender and sexuality cannot be solved through techno-solutionism — the belief that technology alone can fix a wide range of social, political and economic problems.

Yet, one of the most significant beneficiaries of AI technology could be the marginalised, especially those who lack cultural capital, privilege, access and resources.

The advantage of AI is its ability to democratise access to information and resources. For instance, AI-powered translation tools can help non-native speakers communicate effectively, breaking down language barriers that often act as markers of privilege.

AI-driven educational platforms can also offer personalised learning experiences, catering to the needs of first-generation learners and those from disadvantaged backgrounds. AI technology can empower these individuals by democratising access to information, mentorship, and essential services, challenging the status quo of privilege and inclusion.

Empowering people with disabilities

For people with disabilities, AI can be a game-changer. AI-driven applications can assist people with autism in developing social skills and recognising emotional cues, thereby enhancing their ability to interact with others.

Adaptive platforms can provide customised educational content for learners with dyslexia and other learning disorders, ensuring they receive the support they need to succeed. By offering personalised, responsive, and supportive tools, AI can help individuals with disabilities achieve greater independence and inclusion.

Challenging injustice

Marginalised communities frequently face systemic injustices, such as discrimination and exclusion from economic opportunities. AI can help challenge these injustices by providing equal opportunities and access to resources.

AI can assist small-scale entrepreneurs from marginalised backgrounds by offering insights into market trends and optimising supply chains, enabling them to compete with larger businesses.

AI-driven job matching platforms can also help individuals from disadvantaged backgrounds find job opportunities that match their skills, reducing the impact of discrimination in the job market.

The responsibility of AI should lie elsewhere


The primary responsibility for ensuring AI is ethical and beneficial should lie with those who have the power to shape the technology, such as policymakers, technologists and corporations. They have the capacity to create and enforce regulations to ensure AI is developed and used responsibly, without exacerbating existing inequalities.

The responsible evolution of AI revolves around three main areas: development, deployment and existential risk.

Responsible AI development should address ethical concerns related to training data, energy consumption and the exploitation of workers, particularly those in African countries involved in the supervised training of models.

The responsible deployment of AI systems is essential, especially regarding their use for surveillance, warfare and privacy violations.

The existential risks posed by superintelligent systems range from catastrophic scenarios, like the overthrow of human society, to more immediate issues, such as significant job losses.

While these concerns drive the need for robust AI policies and frameworks, some issues remain speculative. For instance, there’s no consensus on what constitutes superintelligence, with much of the discourse influenced by science fiction.

The threat of job losses, particularly in white-collar sectors, is a more urgent and tangible concern. Marginalised people, however, are often already burdened with the struggles of navigating systemic injustices and fighting for their rights.

Adding the responsibility of AI to their existing challenges would only serve to further oppress them. Instead, society should focus on leveraging AI to uplift and empower these communities, using the technology as a tool for social justice and equity.

Shafiullah Anis is a lecturer in marketing at the School of Business, Monash University Malaysia. His research focuses on consumption, social inequalities, and AI technology, broadly lying within the domains of consumer culture theory.

Juliana A. French is the head of the department of marketing and a senior lecturer at Monash University Malaysia. She studies the complexities surrounding the issues of race and religion in Malaysia that are demonstrated in the everyday consumption behaviour of Malaysian women.

Originally published under Creative Commons by 360info™.
