
AIhub coffee corner: AI regulation


The AIhub coffee corner captures the thoughts of AI experts over a short conversation. Three years ago, our trustees met to discuss AI and regulation. Since then, there have been significant advances in both technology and policy, so we felt it was time to revisit the subject. [Note: this discussion took place before the US Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence was made public. You can read more about that here.]

This time, Michael Littman (Brown University), Carles Sierra (CSIC), Sarit Kraus (Bar-Ilan University), and Sabine Hauert (University of Bristol) are joining the discussion.

Sabine Hauert: A few months ago, AI regulation was a very hot topic, and interest has clearly not diminished since. So, is regulation of AI necessary?

Carles Sierra: The consensus in Europe is that artificial intelligence has to be regulated. This effort has been ongoing for around four years, during which time a committee of high-level experts produced several proposals on the kinds of applications that governments should be able to restrict or outright ban. That was incorporated into the AI Act, which was approved in June. In my opinion, governments must take responsibility for some applications of AI, and these applications should be regulated. Large companies would most likely prefer that not to happen, but I believe that safeguarding our social values, privacy, and principles of justice is important enough to warrant making these applications safe. To guarantee these principles for society, regulation is necessary, and public administrations should play a significant role.

Sabine: Considering the advances in large language models (LLMs), do you still believe that the AI Act serves its intended purpose?

Carles: The issue is that it takes five or six years to fully implement a law, and technology is naturally advancing swiftly, which makes it quite challenging to keep up. When ChatGPT and similar models took off, the AI Act was nearing its final readings, so certain last-minute amendments were made to the bill. Some provisions are still relevant today. In education, for example, the government should closely oversee and monitor any use of AI assistants to assign grades or place students in schools. I believe it's a good idea to identify these sectors and the potentially hazardous applications within them. The law will likely need to be amended with some LLM-specific provisions, but generally speaking, such changes were always anticipated, because the law was designed to allow the list of high-risk applications to be expanded.

Sabine: Yes, it's a good place to start, at least. In your opinion, what role do scientists play in the regulatory process?

Michael Littman: It really is crucial, in my opinion. In my capacity at the National Science Foundation (NSF), we seek input from researchers during the policy-making process to check whether proposed regulations are reasonable and would actually address the issues at hand. Something I frequently see is legislators attempting to prohibit a certain use of a technology by blocking the technology itself. It's often possible to restrict the harmful use without giving up the technology's beneficial features. Saying "there can be no collection of this kind of data, or no handling of this kind of data" is not the same as using truly sophisticated concepts from encryption and other fields to safeguard privacy. Having scientists in the room to offer these kinds of insights is fantastic.

Sarit Kraus: Legislating for AI is quite difficult. Even with ordinary software, regulation is extremely difficult to enforce; with AI, the difficulty increases. Machine learning in particular is a "black box": you never really know what data was used. You can see some of the outcomes, but not the results for every case. Even if we are able to create regulations, I'm not sure how we will enforce them.

Carles: One element of the legislation passed in Europe is the establishment of monitoring agencies: each country must set up an agency to keep an eye on the use of artificial intelligence. Spain has already established such an agency, which is preparing to begin operations at the beginning of next year. These agencies will scrutinise companies over the applications they intend to bring to market.

Sabine: So they will be authorised, or accredited, to carry out that task.

Carles: Yes, that's how the AI Act will operate.

Sarit: AI is software; it's a technology. Are there other subfields of computer science where regulations apply? Why is additional regulation necessary for AI specifically?

Carles: First, there was the General Data Protection Regulation (GDPR). This governs all software in order to safeguard user privacy, so that sets a precedent. It's true that there aren't many other examples, though.

Sarit: And was it beneficial?

Carles: Well, we may be sceptical about that, because right now a lot of applications in Europe appear with data-usage warnings and people essentially just accept whatever is presented. However, there are some rules, in a sense. There is accountability: businesses must provide details when questioned about their practices. They must be transparent. Legally speaking, I suppose it's preferable, because citizens can file complaints.

Sabine: Certification and licensing are concerns for me. How will the smaller businesses accomplish this? Unless it's executed well, with a simple licensing procedure, many small businesses will find it prohibitive to enter the market. I envision the major players excelling in that area, either by finding ways to license their technology or by identifying the gaps.

Sarit: What worries me more is that large corporations are producing a lot of the more powerful AI. Less dominance from them would be preferable; that would have a greater effect than regulation.

Carles: I believe some exceptions were added for open-source and open-science approaches in the most recent draft of the regulation, giving open source a little more legal leeway. I haven't looked at the specifics, but I agree with that. The large corporations declared, "We don't need any regulation; we will regulate ourselves." They pledged to uphold ethical standards in their applications and suggested forming a committee. Many saw that as just part of the game: they want the US government to stay out of their way and let them carry on as usual.

Sabine: How are they going to regulate themselves? I don't understand how that's even an option.

Carles: A year ago, a Bill of Rights was being proposed in the US Congress. I'm not sure what happened with that.

Michael: To be clear, that came from the White House's science wing, the Office of Science and Technology Policy; Congress is a different branch of the US government. They ultimately dubbed it the "Blueprint for an AI Bill of Rights", since it wasn't actually conferring any rights. It continues to be a hot topic of conversation. The other topic of discussion is the National Institute of Standards and Technology's (NIST) Risk Management Framework. It is essentially a summary of the considerations you should weigh when deploying AI systems in the real world, and of the steps you should take to monitor and adjust them as needed. Neither of these has become national legislation, although there are those who advocate for that; right now they are more like guidelines. Additionally, the White House persuaded a number of tech companies to sign a series of voluntary pledges that are now serving as a model for international talks on artificial intelligence, so a set of international norms along those lines may be released. It's all being discussed pretty actively.

Sabine: Because the technology moves quickly, sandboxes have come up in conversation. For instance, it would be helpful to have a safe space where you could test robots with very few restrictions, to determine what laws you would need in order to deploy them. Instead of creating laws that take years to finalise, it would be a good idea to have fast-moving regulations that can be trialled, amended case by case, approved, and then learned from. You stay nimble within the framework. I'm not aware of sandbox laws elsewhere, though.

Carles: When you consider the direction of regulation, it is typically verticals that are regulated: you regulate health, for instance, or logistics. It is rare to regulate a technology, and as Sarit pointed out, software is not generally regulated. However, there will now be regulation for a specific type of software [AI].

Michael: I find it helpful to compare artificial intelligence to metal. Both have a wide range of applications, some of which are risky and others of which are quite helpful. Metal, for instance, can be used to make both medical stretchers and weapons. Yet we don't have horizontal laws for metal, even though it behaves similarly in each of these situations.

Carles: It's really difficult. I believe the European Parliament combines the vertical regulations with the horizontal ones on technology. What is high risk in health? What is high risk in education? It doesn't make sense to regulate only horizontally. The verticals, such as health, education, and transportation, are somewhat clearer for businesses to manage. Businesses that create products used to grade students, for instance, know that their work will be closely examined.

Sabine: It's fascinating. So all the startups that hopped on the AI bandwagon to get funding may now jump off it to dodge regulation, and go by a slightly different name.

Sarit: That's a really interesting question. I mean, does anyone regulate software that is developed to grade exams but isn't artificial intelligence (AI)? As you pointed out, it's quite challenging to identify the transition from software that does certain jobs to AI software.

Carles: Exactly. My assumption is that businesses developing software for uses that the AI Act deems high risk will need to request certification from the national agencies. We'll see what businesses begin to do once the agencies are established.

Sabine: It does seem like a good thing, doesn't it? It makes sense to have some kind of regulation, since we want to develop these AI systems responsibly. Along the way, questions have also been asked about who these regulations are for and how much public support they have. Is it to prevent false information, to safeguard individuals, businesses, governments, and democracies, or all of the above?

Carles: Well, the idea is to safeguard both democracy and people. European parliaments have technical assessment departments: groups of people who produce reports for lawmakers that are as impartial as possible. Since most politicians are not scientists or engineers, they generally know very little about technology. I was in a meeting with the European Parliament where we talked about health, education, labour, and AI and democracy (the lawmakers were concerned about how generative AI could harm democracy). The political class is somewhat alarmed by the potentially enormous social impact of AI, particularly generative AI.

Sabine: That's good to hear, because it sounds grounded. It seems rooted in the sectors, as though they are taking into account the practical, immediate applications.

Carles: The Danish parliament has maintained a department dedicated to technological assessment for forty years, which is just incredible. In Germany, one has existed for roughly thirty years. Hearing the experts and politicians discuss these issues was interesting.

Sabine: In many EU initiatives there is now a serious effort to consider how the work connects to the AI Act. I think that will most likely be covered in any AI practitioner's training. Throughout, there is much more focus on responsible innovation.

Carles: New degrees in AI are being offered all around Europe. There are courses on AI ethics as well, and I believe courses specifically on AI and law will be available in the future. Becoming an AI engineer requires you to understand the legal ramifications of your actions. That, I believe, is part of an engineer's training.
