Trust, attitudes and use of artificial intelligence - Key Findings

AI at work and in education

The age of working with AI is here, delivering performance benefits but also mixed impacts

Figure: Intentional use of AI at work

Nearly three in five employees (58%) intentionally use AI at work on a regular basis, with a third using it weekly or daily. Employee AI use is higher in emerging economies than in advanced economies (72% vs. 49%).

General-purpose generative AI tools, particularly public platforms such as ChatGPT, dominate workplace use: 70% of employees use free public tools, while just 42% use employer-provided AI tools. Three-quarters of employees report that their organizations use AI.

Most employees report positive impacts of AI integration at work, notably increased efficiency (67%), information access (61%), innovation (59%), and quality of decision making (58%). Almost half (46%) also report AI use has increased revenue-generating activity.

Alongside these benefits, AI adoption is having mixed impacts: between a quarter and two in five employees report increased workload, stress, and pressure (26%), privacy and compliance risks (35%), and time spent on mundane tasks (39%). Conversely, some employees report that AI adoption has reduced stress (36%), workload (40%), mundane tasks (36%), and compliance risks (19%).

Human collaboration is also affected, with half of employees choosing to use AI tools instead of collaborating with peers or supervisors. This raises questions about how human connection and collaboration will be maintained as work becomes increasingly AI-augmented.

Many employees are using AI in complacent, inappropriate, and non-transparent ways that increase risks

Figure: Inappropriate and complacent use of AI at work

While AI adoption is delivering benefits, many employees are using AI in complacent and inappropriate ways, heightening security, compliance, and reputational risks and raising quality concerns.

Almost half of employees admit to having used AI in ways that contravene organizational policies, including uploading sensitive company information into public AI tools. A similar proportion (47%) say they have used AI in ways that could be considered inappropriate, and even more (63%) have seen other employees using AI inappropriately.

Two-thirds (66%) report they have used AI output in their work without evaluating it, and over half (56%) have made mistakes in their work due to AI.

Non-transparent use makes these risks more challenging to manage: over half of employees say they have avoided revealing when they use AI to complete work and have presented AI-generated content as their own.

AI training and governance are not keeping pace with employee adoption

A lack of training, guidance, and governance appears to be fueling this complacent use. Despite widespread adoption of generative AI tools, only two in five employees report that their organization has a policy or guidance on generative AI use. Further, only half of employees in advanced economies report that their organization offers training in responsible AI, has policies and practices in place to govern responsible use, or has a strategy or culture that supports AI.

Pressure to use AI may also exacerbate these issues, as half of employees report feeling they will be left behind if they don’t use AI.

These findings highlight an urgent need for organizations to invest in responsible AI training and AI literacy, and to provide clear policies and guidance that promote greater accountability, transparency, and critical engagement with AI tools among employees.

Younger, AI-trained, and higher-income employees are more likely to use and trust AI at work

Figure: Demographic differences in use and trust of AI at work

Younger employees (aged under 35), higher-income earners, and those with AI training are more likely to use AI for work purposes and to trust AI in the workplace.

Higher-income earners and those with AI training are also more likely to report experiencing positive impacts from AI at work compared to middle- and low-income earners and employees without AI training.

For example, employees with AI training are more likely than those without to report increased efficiency due to AI (76% vs. 56%) and increased revenue-generating activity (55% vs. 34%).

Most students are using AI and reporting benefits, but inappropriate use and overreliance are widespread and challenge critical skill development

Figure: Impacts of AI use in education

The findings for students (predominantly tertiary students) provide insight into how AI is affecting education and training; they largely mirror those for employees but are more pronounced.

Most students (83%) regularly use AI in their studies, with half using it weekly or more. The large majority use free, publicly available generative AI tools.

Many students recognize the benefits of AI integration in education, including increased efficiency, improved quality of work, idea generation, personalized learning, and reduced workload and stress.

However, AI’s influence on social dynamics, critical thinking, and assessment is mixed. Between a quarter and a third of students report reduced critical thinking and less communication, interaction, and collaboration with instructors and peers due to AI.

Moreover, inappropriate and complacent AI use is prevalent among students, with a majority acknowledging they have engaged in practices that contravene rules and guidelines or demonstrated overreliance on AI. Over three-quarters report that they cannot complete their work without the help of AI and have relied on AI to perform tasks rather than acquiring the skills themselves. Four in five admit to reducing effort in their studies and assessments due to their reliance on AI.

Contributing to this problem, institutional support for responsible AI use in education appears to be lagging: only half of students report that their education provider has policies to guide responsible use of AI in learning and assessment, or offers training and resources to support AI understanding and responsible use.

AI adoption has increased markedly since 2022, but trust in AI has declined and worry has increased

Our research program provided a unique opportunity to compare data from the current survey with our previous survey data collected from 17 countries in late 2022, just prior to the release of ChatGPT.

As expected, adoption of AI in the workplace increased dramatically in all 17 countries: employee-reported organizational use of AI increased from 34% to 71%, and employees’ total use of AI at work increased from 54% to 67%. The largest increases occurred in Australia, Canada, the USA, and the UK.

However, this growth in adoption is coupled with a trend toward lower trust in AI, likely because greater use and exposure have raised awareness of the capabilities and benefits of these tools as well as their limitations and potential negative impacts, prompting more considered trust.

More people report feeling worried about AI and concerned about the risks, and fewer view the benefits of AI as outweighing the risks. Excitement also waned over this period in several countries.

With this increase in concern, the importance of perceived organizational assurance mechanisms as a basis for trust increased in all countries, suggesting a greater need for reassurance that AI is being used in a trustworthy and responsible way.

Attitudes toward the regulation of AI remained stable, and there was no overall change in the perceived adequacy of regulations and laws.

These trends suggest that the hype around AI may be giving way to a more realistic and measured assessment of its capabilities, limitations, benefits, and risks, along with a heightened need for reassurance around the trustworthy deployment of AI and the proactive mitigation of AI risks.