A recent study conducted by researchers in the UK and Romania highlights a surprising aspect of technology adoption: for a controversial and rapidly evolving technology such as generative artificial intelligence (GenAI), user acceptance is driven less by performance and more by familiarity and the perception that the technology is already widely used.
The research, published in the journal Technological Forecasting & Social Change, challenges traditional models of technology acceptance that emphasize rational, utility-based factors. It was conducted by Raluca Bunduchi, Dan-Andrei Sitar-Tăut, and Daniel Mican of the University of Edinburgh and Babeș-Bolyai University in Cluj-Napoca.
GenAI, defined as artificial intelligence that generates text, images, or other original media from received inputs (called prompts), has seen explosive growth in recent years. According to McKinsey reports cited in the study, adoption within companies increased from one-third to half of the organizations surveyed between 2023 and early 2024. Yet despite its benefits in terms of utility and productivity, GenAI raises significant concerns for some users, ranging from ethical and regulatory issues to a lack of trust at the individual level.
The lead author of the study, Raluca Bunduchi, and her team argue that traditional models, such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT), are not suitable for exploring the role of institutional factors. Instead, they have developed a “legitimacy-based model” to examine how users evaluate GenAI in its institutional context. This model focuses on three dimensions of legitimacy:
- Pragmatic legitimacy: The technology satisfies users’ interests by being easy to use or improving work performance.
- Moral legitimacy: The technology aligns with broader social norms, values, and regulations.
- Cognitive legitimacy: The technology is familiar and considered “self-evident” by users.
The researchers surveyed 483 computer science students—a relevant sample, as GenAI plays an important role in their education. Their findings offer new insight into what drives technology acceptance.
The power of perception and imitation
The study showed that a technology's maturity, in particular the uncertainty and variation that mark its early stages, influences how users judge its legitimacy. Early in a technology's life cycle, when uncertainty is high and many competing products exist, users often have incomplete information. In these ambiguous conditions, they tend to give in to mimetic pressures, or "herd behavior," copying the choices of others.
This imitation, the study shows, leads users to perceive a new technology as having greater cognitive and moral legitimacy. “Our findings indicate that the emphasis on widespread acceptance of a controversial new technology in the social context of users plays a crucial role in propelling the acceptance of the technology in these early stages,” the researchers explain.
Pragmatic and cognitive legitimacy, but not moral
Although the study confirmed that uncertainty and variation in technology positively influence pragmatic and cognitive legitimacy, a key finding was the lack of significance of moral legitimacy in explaining user behavior. The research showed that pragmatic and cognitive evaluations positively influenced a user’s intention to use GenAI, while moral legitimacy had no significant impact.
This conclusion contradicts other research on algorithm-based decisions, where issues such as fairness and privacy are essential. The authors suggest that this may be due to the specific context of GenAI for content generation, as opposed to decision-making. It may also be that, for a technology in such an early stage, users find it difficult to make consistent moral judgments due to the ambiguity of its functions.
The strongest relationship identified in the study is between cognitive legitimacy and the intention to use the technology. This suggests that for controversial technologies such as GenAI, a user's beliefs about the technology's compatibility with existing mental models and cultural values are more influential than its performance benefits.
Practical implications for developers and organizations
The study offers several practical insights. For developers and organizations seeking to promote GenAI adoption, the findings suggest a different strategy: rather than emphasizing only innovative features and usefulness, they should stress how GenAI aligns with existing practices and signal that it is already in widespread use.
Although it should be interpreted with caution, the lack of influence of moral legitimacy on users' intention to use GenAI suggests that developers and regulators have a key role to play in raising user awareness of ethical and regulatory norms. The complexity of these issues may make it difficult for users to assess moral legitimacy clearly on their own.
This research provides valuable guidance for understanding how controversial technologies can be successfully introduced and managed, proposing a shift in perspective from mere utility to a broader institutional and social context.
The results of this study are promoted by UBB Core, the Career Guidance Center for Researchers at Babeș-Bolyai University in Cluj-Napoca, Romania.