Inclusive Generative AI: Bridging Gaps and Empowering Equity in Business and Society
Motivation
Generative Artificial Intelligence (GenAI) is transforming business by improving efficiency, data collection, content generation, and customer support, allowing companies to focus on more strategic activities. While the adoption of GenAI is growing rapidly, its use remains uneven, particularly among small businesses and in lower-income regions.
This special issue invites research on how GenAI can foster inclusion and reduce inequalities by enhancing the capabilities of disadvantaged groups, such as people with lower incomes, fewer skills, or limited access to technology. It also seeks studies on potential risks, such as bias or alienation, and how these can be mitigated to achieve positive societal outcomes. Submissions exploring various sectors, particularly those combining multiple studies or mixed methods, are encouraged.
Topics of interest include, but are not limited to:
Overcoming barriers to adoption and accessibility
- What social, economic, psychological or trust-related barriers prevent underrepresented groups from adopting GenAI, and what strategies could promote equitable access and usage?
- What organizational, financial, cultural, and regulatory factors hinder small businesses from adopting GenAI, and how can equitable access be promoted and implemented?
- How does GenAI impact global inequalities in technology access, and what role do socio-economic, regulatory, or corporate initiatives play in reducing these disparities?
- How can GenAI help bridge the digital divide in low-income countries by addressing infrastructure, education, or access challenges?
Empowerment and inclusion
- How can GenAI transform education for disadvantaged communities by providing personalized learning, reducing costs, and overcoming language barriers while avoiding bias in content delivery?
- How can GenAI tools empower underprivileged workers by improving skill development and job opportunities while mitigating risks of job displacement and exploitation?
- How can GenAI enhance accessibility for individuals with disabilities in education, employment, and daily life while addressing risks of over-reliance or exclusion?
- How can businesses leverage GenAI to design inclusive customer experiences that respect diverse needs and avoid alienating specific demographics through biased algorithms?
Ethical considerations and fairness
- What strategies can mitigate bias in GenAI systems, including gender and cultural biases, to promote equitable experiences across diverse user groups and markets?
- How does GenAI shape perceptions of fairness and well-being among users, considering emotional impact, consumer trust, and potential misuse in marketing and personalization?
- How can GenAI promote fairness in user experiences and reduce alienation, especially in underserved communities or emotionally sensitive interactions, such as AI companions?
- How can organizations implement robust corporate digital responsibility (CDR) policies, structures, and cultures to ensure the ethical, unbiased, fair, and privacy-protecting use of GenAI in their organizations and wider ecosystems?
- How should governments and regulatory bodies address the ethical challenges of GenAI, balancing innovation, societal well-being, and protection against discrimination or manipulative practices?
Submission window: October 1st - December 31st, 2025.
Authors are encouraged to present their prospective contributions at AIRSI2025 The Metaverse Conference (9-11 June) for prior feedback and discussion, which may enhance the quality of submissions to this Special Issue.
Guest Editors:
- Daniel Belanche, University of Zaragoza (Spain)
- Eleonora Pantano, University of Bristol (UK)
- Jochen Wirtz, National University of Singapore (NUS) (Singapore)
Managing Guest Editor:
- Carlos Flavián, University of Zaragoza (Spain)
More info: https://www.sciencedirect.com/journal/technology-in-society/about/call-for-papers