Beyond the obvious AI (adoption) conversation

TEXT | Asko Uuras
Permalink http://urn.fi/URN:NBN:fi-fe20251215119666
Company employees holding a meeting; in the background, a flip chart that reads AI.

In this text, I ponder the current state of artificial intelligence (AI) and future research and development initiatives, drawing on the AI2Business project’s results. Organizations are currently bogged down in experimentation, and locally, in the Ostrobothnia region, the situation does not differ from what global reports describe. When we look more closely at the reasons for adopting AI, we can see that getting employees to work with it is a challenge. Recent research suggests that factors affecting AI adoption, such as fear of job loss, must be addressed before AI can be incorporated into workflows. While current legislation serves as a guardrail, new democratic voting mechanisms could be implemented to address individuals’ concerns about AI adoption.

The current state of AI adoption

Currently, AI adoption is slow, and moving from one maturity level to another requires addressing individual employees’ concerns within the organization. A recent McKinsey report (Singla et al., 2025) states that “for most of the organizations, AI use remains in pilot phases”. Similarly, Accenture (Vohra et al., 2025) reports that 63% of companies are AI experimenters.

The same observations have been made in local Ostrobothnian research initiatives. For example, the AI2Business project’s findings align with other reported maturity levels. Locally, Ostrobothnian companies can be categorized into three distinct personas: AI curious, AI explorers, and AI strategists (Peltonen & Pekkala, 2024). The main difference between the personas was the extent to which they had incorporated AI into their business processes. The AI curious had only individual advocates within the organization who had tested AI capabilities, whereas AI strategist organizations had “embraced AI as an integral part of their business strategy” (2024, p. 21). The key feature of an organization moving toward the AI strategist persona is decision-making and initiative at various levels of the organization.

According to Khanfar et al. (2025), individual and social factors, such as fear of job loss, trust, perceived safety, and the impact on the community, affect the adoption of artificial intelligence. In their systematic literature review of 90 studies, they note that AI adoption has fallen short of expectations and that the identified factors may hinder it. They also call for future research on how to ensure rational AI adoption decisions (Khanfar et al., 2025). These factors may explain why individual decision-makers are risk-averse toward AI adoption. There is therefore a clear need for transparent, collective conversations within organizations about AI and its use. Otherwise, AI adoption could slow further or halt altogether.

Addressing the concerns

One way to address individual employees’ concerns about AI, such as fear of job loss or loss of influence, is to strengthen constructive dialogue within the organization and to implement voting-based delegation mechanisms. Voting as a delegation mechanism has recently been studied by artificial intelligence researchers such as Colley et al. (2020) and Kahng et al. (2021). By voting-based delegation, I refer to formal procedures through which employees or their representatives collectively decide which tasks, workflows, or decision rights are delegated to AI systems. Such mechanisms would not only give individuals a greater opportunity to influence organizational decision-making but could also make AI adoption more transparent and legitimate.

In practice, this could mean majority voting on which parts of a workflow may be supported or automated by AI, or on which AI capabilities and tools are selected from the market. Employees could thus participate in defining the boundaries and conditions under which AI is used. Importantly, these internal voting procedures would operate within the limits set by regulation.

Democratic delegation mechanisms complementing regulatory frameworks

Existing AI regulations already define unacceptable or high-risk uses, and these legal constraints would function as guardrails: even a majority vote could not authorize the deployment of AI systems that violate regulatory requirements. In this way, democratic delegation mechanisms could complement regulatory frameworks by enabling employee participation and shared responsibility in AI-related decisions, while ensuring that collectively made choices remain within safe and lawful boundaries.
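To make the mechanism concrete, here is a minimal sketch in Python of majority voting bounded by a regulatory guardrail. The task names, the prohibited-use list, and the simple-majority rule are illustrative assumptions of mine, not part of the cited regulation or research.

```python
# Hypothetical sketch: voting-based delegation with a regulatory guardrail.
# Task names, risk labels, and the simple-majority threshold are illustrative
# assumptions, not drawn from the EU AI Act or the cited studies.

# Uses the regulation's spirit as a guardrail: prohibited uses can never be
# approved, regardless of the vote.
PROHIBITED = {"emotion_recognition_at_work", "social_scoring"}

def delegate_to_ai(task: str, ballots: list[bool]) -> bool:
    """Return True if the task may be delegated to an AI system.

    A task is delegated only if (1) it is not a prohibited use under
    regulation (the guardrail) and (2) a strict majority of employee
    ballots approves the delegation.
    """
    if task in PROHIBITED:
        return False  # a majority vote cannot override the legal guardrail
    yes_votes = sum(ballots)  # True counts as 1
    return yes_votes > len(ballots) / 2

# Example: employees vote on two candidate workflow steps.
print(delegate_to_ai("invoice_triage", [True, True, False]))              # prints True
print(delegate_to_ai("emotion_recognition_at_work", [True, True, True]))  # prints False
```

The design point is simply that the guardrail check precedes the vote count: collective choice operates only inside the space the regulation leaves open.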

However, in the big picture, EU regulation frames AI as a tool and emphasizes human-in-the-loop oversight (European Parliament and Council, 2024; European Parliament, 2024). I want to set my pondering apart from the EU regulation itself (European Parliament and Council, 2024), which provides a sufficient framework for identifying unacceptable-risk use cases in AI development. The use cases I am referring to are social scoring, exploitation of vulnerabilities, manipulative techniques, real-time remote biometric identification by law enforcement, biometric categorization, individual predictive policing, emotion recognition in the workplace and educational institutions, and untargeted scraping. These, and other harmful uses of artificial intelligence, should not be conducted. But this does not mean that exploring use cases that pose no unacceptable risks would not be important and beneficial to organizations and to human well-being in general.

A way forward

I understand that democratic decision-making, especially voting among employees, might sound radical to some. Humans, as a collective, also make mistakes and could jeopardize the organization through poor decisions. This is a fair and important point of conversation within organizations.

However, democratic theories have been extensively tested in societal contexts, and several major political philosophers argue that, at least in theory, democratic decision-making provides substantial benefits. For example, Habermas and Rawls argue that democratic conversation would produce just and reasonable decisions, and Rousseau, Mill, Bentham, Riker, and Popper would go even further by claiming that democratic decision-making may deliver “the best outcome” (Setälä, 2003, p. 16). As Eidlin (1996, p. 139) writes, describing Karl Popper’s (1945) idea of scientific progress, “if there are no mistakes, there is no learning”; I would extend this to all progress made in organizations.

Of course, caution is needed when delegating to AI, since, as we know, democratic voting results may not be infallible (Setälä, 2003). The conversation about the discussed delegation is essential to establish transparency through respectful, rational dialogue between autonomous human decision-makers (Habermas, 1996; Rawls, 1988; Setälä, 2003). Although centuries have passed, I believe that there is still reason to test Rousseau’s idea: could the democratic decision-making process actually deliver the best outcomes in organizations regarding artificial intelligence?

Project information

AI2Business – Sustainable business from artificial intelligence is a group project co-funded by the European Union. The project is implemented by VAMK Vaasa University of Applied Sciences and the University of Vaasa. Duration of the project: 08/2023-12/2025

References
  • Colley, R., Grandi, U., & Novaro, A. (2020). Smart voting. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI 2020) (pp. 1734–1740). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/240

  • Eidlin, F. (1996). Karl Popper, 1902–1994: Radical fallibilism, political theory, and democracy. Critical Review, 10(1). https://doi.org/10.1080/08913819608443413

  • European Parliament. (2024). EU AI Act: first regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. Accessed 5.12.2025.

  • European Parliament and Council. (2024). Regulation (EU) 2024/1689 of 13 June 2024 (Artificial Intelligence Act). Official Journal of the European Union. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng. Accessed 5.12.2025.

  • Habermas, J. (1996). Between Facts and Norms. MIT Press, Cambridge, MA.

  • Kahng, A., Mackenzie, S., & Procaccia, A. (2021). Liquid democracy: An algorithmic perspective. Journal of Artificial Intelligence Research, 70, 1223-1252. https://doi.org/10.1613/jair.1.12261

  • Khanfar, A. A., Kiani Mavi, R., Iranmanesh, M., & Gengatharen, D. (2025). Factors influencing the adoption of artificial intelligence systems: A systematic literature review. Management Decision. https://doi.org/10.1108/MD-05-2023-0838

  • Peltonen, S., & Pekkala, J. (2024). AI train is leaving, all aboard! Muovaaja, 1/2024, 18–21. https://www.vamk.fi/wp-content/uploads/2024/05/Muovaaja-2024_1_valmius-1.pdf

  • Popper, K., Gombrich, E. H., & Havel, V. (2012). The open society and its enemies. Routledge. (Original work published 1945)

  • Rawls, J. (1988). Oikeudenmukaisuusteoria (T. Pursiainen, Trans.). WSOY, Helsinki.

  • Setälä, M. (2003). Demokratian arvo: Teoriat, käytännöt ja mahdollisuudet. Gaudeamus, Helsinki.

  • Singla, A., Sukharevsky, A., Yee, L., Chui, M., Hall, B., & Balakrishnan, T. (2025). The state of AI in 2025: Agents, innovation, and transformation. McKinsey & Company, 5 November 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai#/. Accessed 5.12.2025.

  • Vohra, S., Vasal, A., Roussiere, P., Tanguturi, P., & Guan, L. (2025). The art of AI maturity. Accenture. https://www.accenture.com/fi-en/insights/artificial-intelligence/ai-maturity-and-transformation. Accessed 5.12.2025.
