Key findings on public attitudes towards AI
To cite this research
Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. DOI 10.26188/28822919
Public attitudes towards AI
Public adoption of AI is high, but AI literacy lags behind adoption

Self-reported AI training and literacy across economic groups reveal that advanced economies are trailing emerging economies.
Two-thirds of people (66%) report intentionally using AI on a regular basis for personal, work, or study purposes.
However, AI literacy remains limited: about half of respondents say they do not understand AI or know when and how it is used. This knowledge gap reflects the fact that only two in five people report any AI-related training or education.
Despite low rates of knowledge and training, three in five say they can use AI effectively. This likely reflects the easily accessible interfaces of many AI systems (e.g. using natural language) and low barriers to use.

Self-reported AI training and literacy across countries.
AI adoption and literacy are higher in emerging economies. Four in five (80%) people in emerging economies intentionally use AI regularly, compared to only three in five (58%) in advanced economies. Similarly, three-quarters of people in emerging economies believe they can use AI effectively, compared to only half in advanced economies, and half report AI training or education, compared to only a third in advanced economies.
The pattern of findings suggests AI literacy is a cross-cutting enabler: it is associated with greater use, trust, acceptance, and critical engagement with AI, as well as more realised benefits from AI.
Trust in AI cannot be taken for granted: many people are wary about trusting AI systems, particularly in advanced economies

While most people accept and use AI, less than half (46%) are willing to trust AI systems, for example by relying on their output or by sharing information to enable them to work. This low level of trust holds across a range of AI applications, including generative AI systems like ChatGPT.
People are more trusting of the technical ability of AI systems and are more sceptical of the safety and security of using AI systems and their impact on people and society. This pattern holds across all 47 countries surveyed.
"only two in five (39%) people in advanced economies trust AI systems, compared to three in five (57%) in emerging economies"

People’s ambivalence about AI is also reflected in their mixed emotions: the majority are optimistic (68%) but also worried (61%) about AI.
There are stark differences in trust and sentiment toward AI across countries and between advanced and emerging economies: only two in five (39%) people in advanced economies trust AI systems, compared to three in five (57%) in emerging economies. Worry is the dominant emotion in many advanced economies, whereas optimism is dominant in emerging economies.
People are experiencing a range of benefits from the use of AI in society, but also negative outcomes

People’s ambivalence toward AI stems from the mix of benefits, risks and negative outcomes that people are experiencing from AI use in society.
Three in four (73%) report experiencing a range of benefits from the use of AI systems, including improved efficiency and effectiveness, enhanced accessibility to information and services, greater precision and personalization, improved decision-making and outcomes, greater innovation and creativity, reduced costs and better use of resources.
However, four in five (79%) are concerned about negative outcomes from AI use, including the loss of human interaction and connection, cybersecurity risks, loss of privacy or intellectual property, misinformation and disinformation, manipulation or harmful use, inaccurate outcomes, deskilling and dependency, job loss, and disadvantage due to unequal access to AI. Two in five (43%) have personally experienced or witnessed many of these negative outcomes, highlighting that these are not just perceived risks.

Respondents across countries share similar views and experiences regarding AI risks and negative outcomes, highlighting these as areas of universal concern. In addition, 64% worry that elections are being manipulated by AI content and bots.
In advanced economies, opinion is divided on whether the benefits of AI outweigh the risks: 38% believe the benefits outweigh the risks, 37% believe the risks outweigh the benefits, and 25% believe the benefits and risks are balanced.
In emerging economies, half believe the benefits outweigh the risks, and more people expect and report experiencing benefits.
The public expect AI regulation at both the national and international level, yet the current regulatory landscape is falling short of public expectations

The majority endorse multiple forms of AI regulation.
There is a strong public mandate for AI regulation to mitigate risks and negative outcomes from AI: 70% of people believe AI regulation is required, including the majority in almost all countries surveyed.
However, the current regulatory landscape is falling short of public expectations: only 43% believe that the existing laws and regulation governing AI systems in their country are sufficient.
The majority of people expect a multipronged regulatory approach at the national and international level, with active involvement from both government and industry. International law and regulation are widely endorsed, with support from a clear majority in all countries. People also expect national government regulation and co-regulation with industry, which are typically preferred over self-regulation by industry or an independent AI regulator.

Public expectations of who should regulate AI across countries.
"the current regulatory landscape is falling short of public expectations: only 43% believe that the existing laws and regulation governing AI systems in their country are sufficient"
However, a majority (50%+) in almost all countries endorse all forms of regulation, in line with the broad reach, uptake and impact of AI across multiple sectors and levels of society.
There is also a clear mandate for stronger regulation of AI-generated misinformation: 87% want laws to combat AI-generated misinformation and expect social media and media companies to implement stronger fact-checking processes and methods that enable people to detect AI-generated content.
People in emerging economies report greater trust, acceptance and adoption of AI, higher levels of AI literacy, and more realised benefits from AI
A notable finding is the stark contrast in use, trust, and attitudes toward AI between people in advanced and emerging economies.
People in emerging economies report higher adoption and use of AI both at work and for personal purposes, are more trusting and accepting of AI, and feel more positive about its use. They self-report higher levels of AI training and literacy, more realised benefits from AI including at work, and view AI benefits as outweighing the risks. They are also more confident in industry to develop and use AI in the public interest, and more likely to view AI regulation and safeguards as adequate, compared to people in advanced economies. These differences hold even when controlling for the effects of age and education.
In particular, six countries with emerging economies strongly and consistently show this pattern—China, India, Nigeria, Egypt, Saudi Arabia and the UAE. Of the advanced economies, Israel, Norway, Singapore, Switzerland and Latvia have comparatively high levels of AI adoption, trust, acceptance and positive attitudes toward AI.
This pattern may be due to the greater relative benefits and opportunities AI affords people in emerging economies, which may encourage a growth mindset, motivating trust and use of AI technology as a means to accelerate economic progress, prosperity, and quality of life. This mindset may also motivate investment in AI training and literacy as a foundation for realising and augmenting these benefits.
Looking ahead, the nations that accelerate in responsible adoption may be uniquely positioned to gain long-term competitive and strategic advantage as AI becomes a more prominent driver of productivity, innovation, and progress.
Younger people, higher-income earners, and the university-educated are more likely to use, trust and accept AI, and have higher levels of AI training and literacy
Younger generations, those with higher incomes, and people with a university education are more likely to use AI regularly, to have AI training, and to self-report a better understanding of AI and a greater ability to use it effectively, compared to those who are older (particularly those aged 55 years or older), those on middle and low incomes, and those without a university education. People with AI training and higher-income earners also report greater benefits from AI use.
This suggests that older people, those with lower incomes and no university education may be at risk of being left behind due to limited AI literacy and ability to use and benefit from AI and the opportunities it offers.