
Japanese AI Regulation— No Assumptions? or Do Nothing?

Last Updated on November 5, 2023 by Editorial Team

Author(s): The Regulatory Technologist

Originally published on Towards AI.

Japan has big plans for a digital migration, but is it prepared for them?

Hit the clap button if you enjoy this (tip: you can hold it down to give up to 50 claps!)

Bias is one of the recurring considerations across regulatory jurisdictions with respect to any model.

Generative AI has pushed this idea back into the mainstream. In my experience, the conversation moves quickly to the impact on individuals who use, or are subject to decisions made by, products leveraging AI. It’s a sensible impact to focus on, with the “trust factor” at a premium.

The challenges of bias, however, extend far beyond user-level applications. Bias starts in the foundations of the data on which models are trained, and it reaches into the very language we use day to day to capture data and knowledge. Overlooking this comes with real costs.

This week I thought back to a campaign from a large bank in 2009.

Bank set to launch private bank rebrand


www.ft.com

In essence, the campaign targeted high net worth clients with the message “No Assumptions”. In the Western world, the campaign made sense: “No assumptions about how your account will be handled!” When the slogan was translated literally for emerging markets, however, it came out as “Do Nothing”.

Why did this come to mind? Generative AI models face similar challenges, at a scale rarely seen in other applications.

Here's why Japan is gearing up to build its own ChatGPT

Japanese researchers hope that a precise, effective, and made-in-Japan AI chatbot could help to accelerate science and…

interestingengineering.com

Researchers in Japan have announced their intention to develop their own large language models (LLMs). Specifically, they identified ChatGPT as inefficient from a Japanese user’s perspective, noting that it “cannot grasp the intricacies of Japanese culture and language”, having been trained primarily on English-language data.

An interesting development.

A positive catch for researchers, but it does raise the question of what Japanese regulatory thinking around AI governance looks like as a whole.

Thus far in my posts I have spoken almost exclusively about areas where the “hard approach” to AI regulation is being taken. This includes public registration of AI models, explicit risk assessments of models prior to use within the market, and explicit bans on AI in certain use cases (particularly security and personally identifiable images and/or data).

An Empowering Process Against Ambitious AI Regulation: The 3 Step Solution from Oxford Research

Are you an AI Product Owner, Project Manager, Data Scientist? Oxford Research has identified YOU as a key player in…

pub.towardsai.net

Japan, on the other hand, has largely advocated a soft, or “agile”, regulatory approach. Scouring the interweb, I can find two explicit law changes made in response to generative AI specifically.

1) The relaxation of copyright laws to enable model developers to innovate without fear of repercussions. Fans of Game of Thrones know this is a real issue, with author George R.R. Martin (and others) taking legal action against OpenAI (LINK)

2) The amendment of the “High Pressure Gas Safety Act” to allow the use of technology, including AI and drones, to conduct safety inspections at gas plants without shutting them down for extended periods.

The second was initially jarring to read. Japan is further poised, however, to systematically assess and rewrite its regulatory framework with a fundamentally anti-analogue approach. To quote:

“Analog regulations are rules provided in laws or ordinances that require human involvement, such as confirmation using a person’s eyes, face-to-face procedures, or a permanent presence of human personnel at a certain site. Such regulations are regarded as preventing society’s digitization.”

Government set to abolish 99% of 'analog' rules for path to digital | The Asahi Shimbun: Breaking…

The government decided on Oct. 27 to abolish around 99 percent of analog regulations requiring a human touch at a…

www.asahi.com

With such a broad mandate to go digital, you would imagine that serious thought about governance over data, models, and technology would be at the forefront. Yet, when it comes to AI specifically, there remains a hesitancy to put pen to paper.

At the core of Japanese policymakers’ thinking on AI are the ideas of Social Principles and Agile Governance.

Social Principles

1) Human-centric

2) Education/literacy

3) Privacy protection

4) Ensuring security

5) Fair competition

6) Fairness, accountability, and transparency

7) Innovation

These are deemed the core principles to which “developers and operators engaged in AI R&D and social implementation should pay attention”, adding that “to promote an appropriate and proactive social implementation of AI, it is important to establish basic principles that each stakeholder should keep in mind.”

Agile Governance

An approach to governing the development and operation of AI models in a way that conforms to the social principles above. The thinking is laid out across the following phases, each with example questions and considerations (a rough sketch of how a team might track these follows the list):

– Planning and Design Phase: Context

– Planning and Design Phase: Overall Design

– Development Phase: Data

– Development Phase: Model / System

– Operating and Monitoring Phase
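
For teams that want to operationalise this kind of agile governance, the phases translate naturally into a living review checklist. Below is a minimal, purely hypothetical sketch in Python: the phase names mirror the framework above, but the checklist questions and the open_items helper are my own illustrative assumptions, not the official guidance text.

```python
# Hypothetical illustration only: map the agile-governance lifecycle phases to an
# internal review checklist. Phase names follow the article; the questions and the
# structure are assumptions for illustration, not official Japanese guidance.

AGILE_GOVERNANCE_CHECKLIST = {
    "Planning and Design: Context": [
        "Who are the affected users and stakeholders?",
        "Which of the seven Social Principles does this use case touch?",
    ],
    "Planning and Design: Overall Design": [
        "Is human oversight built into the decision flow?",
    ],
    "Development: Data": [
        "Is the training data representative of the target language and context?",
        "Have known sources of bias been documented?",
    ],
    "Development: Model / System": [
        "Are accuracy, fairness, and security tests defined before release?",
    ],
    "Operation and Monitoring": [
        "How are incidents, drift, and complaints fed back into redesign?",
    ],
}


def open_items(answers: dict) -> dict:
    """Return the checklist questions not yet answered, grouped by phase."""
    return {
        phase: [q for q in questions if q not in answers.get(phase, set())]
        for phase, questions in AGILE_GOVERNANCE_CHECKLIST.items()
    }


# Example: a team that has only addressed the context questions so far.
answered = {
    "Planning and Design: Context": {
        "Who are the affected users and stakeholders?",
        "Which of the seven Social Principles does this use case touch?",
    }
}
print(open_items(answered))
```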

Many of the sample questions and challenges to developers overlap with the EU AI Act, which is set to take Europe in a very prescriptive direction in tackling the challenges of AI. It’s encouraging to see alignment in the building blocks of risk management.

The major difference is the degree of trust, and the amount of time, afforded to Japanese entities. There is an assumption, to a degree, that firms will voluntarily adhere to the principles and governance guidance outside of a handful of very industry-specific rules.

The decision-making on this appears to take two forms: one is an eagerness to let innovation harness as much value as possible; the other is a hesitancy to “lock in” hard law until the full scope of implementation is better understood.

No Assumptions? Or Do Nothing?

Thanks for reading! If you enjoyed this, leave a clap, and I will delve further into the impact of lawmaking on risk, models, and data.

If you have concerns on how this might impact your business, or want to hear more, get in touch!


Published via Towards AI
