
Japan Considers Softer AI Regulations than the EU AI Act as the Two Seek Collaboration

Japan is considering a more lenient approach to regulating artificial intelligence (AI) than the European Union's AI Act.


Japan is considering a more lenient approach to regulating artificial intelligence (AI) than the European Union, which has put forward a strict framework for the emerging technology known as the AI Act, a source close to the matter told Reuters.

The anonymous source said Japan is inclined to adopt a flexible policy that would allow businesses to harness AI for various purposes, as long as they adhere to ethical standards and ensure accountability and transparency.

Japan is also concerned about imposing excessive burdens on companies and hampering innovation, especially as it vies with China and the United States in the global AI race, the source added.

AI is a broad term that encompasses technologies that can perform tasks that normally require human intelligence, such as recognizing faces, understanding speech and making decisions. AI has the potential to transform various sectors, such as health care, education and manufacturing, but also poses risks to privacy, security and human dignity.

The EU has proposed a draft law that would ban some uses of AI, such as facial recognition for mass surveillance, and impose fines of up to 6% of global turnover for violations. The law, known as the AI Act, is expected to take several years to be adopted, but aims to make the bloc a leader in trustworthy and human-centric AI.

Japan is expected to finalize its own AI guidelines by the end of this year, after consulting with experts and stakeholders, the source said. The guidelines will likely cover areas such as data protection, human rights, safety and security.

Japan has been promoting AI as a key driver of its economic growth and social welfare. It has launched several initiatives, such as the AI Network Society Plan and the Moonshot Research and Development Program, to foster AI research and development and address social challenges.

Japan and Europe to Become AI Collaborators

Despite their differing approaches to AI regulation, the European Union and Japan are looking to partner on artificial intelligence (AI) and chips as part of a broader effort to reduce their reliance on China.

EU Commissioner Thierry Breton said on Monday that AI would be “very high” on his agenda during a meeting with Japanese officials. The EU is looking to “de-risk” from China, and part of that strategy involves deepening ties with allies around technology.

Japan is also interested in working with the EU on AI. The country’s government has said that it wants to become a “leading player” in the global AI market. A partnership between the EU and Japan could help both sides to develop and deploy AI technologies more quickly and efficiently. It could also help to ensure that these technologies are used in a responsible and ethical way.

The two sides are also looking to cooperate on the development of chips. Chips are essential for many AI applications, and the EU and Japan are both concerned about their reliance on China for chip production.

Europe’s AI Act and What it Means

The AI Act, which is under negotiation by the EU Parliament and Council, would set strict rules for high-risk AI systems, such as those that use facial recognition or social scoring. The law also affects generative AI systems, which can make new content such as text, images, or music. The Act aims to regulate systems that pose an “unacceptable level of risk”, such as tools that predict crime or assign social scores.

It also introduces new restrictions on “high-risk AI”, including systems that could influence voters or harm people’s health. Furthermore, the legislation establishes new rules for generative AI, requiring that content produced by AI systems be disclosed as such. Examples of chatbots using large language models with generative AI include OpenAI‘s ChatGPT, Google‘s Bard, and Microsoft‘s Bing Chat.

Additionally, it requires models to disclose summaries of copyrighted data used for training. This could be a major challenge for systems that create humanlike speech by collecting text from the web, much of which comes from copyrighted sources.

Last Updated on November 8, 2024 12:32 pm CET

Source: Reuters
Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
