Artificial Intelligence

Can Tech Companies Be Trusted With AI Governance?

Public-facing AI tools, including text-based applications like ChatGPT and text-to-image models like Stable Diffusion, Midjourney or DALL-E 2, have quickly become the newest digital frontier for regulatory, legal and online privacy issues. Malicious actors are already committing criminal offenses and spreading mis- and disinformation with the help of generative AI, while national governments struggle to keep pace and companies shift blame onto individual users. As a survey conducted by KPMG Australia and the University of Queensland shows, the general public already distrusts government institutions to oversee the implementation of AI.

Surveying over 17,000 people across 17 countries, the study found that only one third of respondents had high or complete confidence in governments to regulate and govern AI tools and systems. Survey participants were similarly skeptical of tech companies and existing regulatory agencies as governing bodies for AI. Instead, research institutions, universities and defense forces were seen as most capable in this regard.

Although the people surveyed showed skepticism toward national governments, supranational bodies like the United Nations were viewed more positively. The European Commission is currently the only body in this category to have drafted a law aimed at curbing the risks of AI and ensuring the protection of individuals' rights. The so-called AI Act was proposed in April 2021 and has yet to be adopted. The proposed bill sorts AI applications into different risk categories. For example, AI designed to manipulate public opinion or exploit children or other vulnerable groups would become illegal in the EU. High-risk applications, such as biometric data software, would be subject to strict legal boundaries. Experts have criticized the policy draft for its apparent loopholes and vague definitions.

In the U.S., President Joe Biden unveiled a blueprint called the AI Bill of Rights in October 2022. The document outlines five guiding principles for the development and implementation of AI. Despite its name, the AI Bill of Rights is non-regulatory and non-binding. At the time of writing, there are no federal laws or binding policies specifically regulating AI in the United States.

Chart description: This chart shows the share of respondents most confident in the following institutions to regulate or govern artificial intelligence.
