
The AI ‘Black Box’ Conundrum – Analysis

By Prateek Tripathi

The application of AI has expanded at a rampant pace of late, proving that the technology's utility is seemingly endless. With increasing utility, however, comes the inevitable responsibility of regulation. Governments and policymakers around the world have been scrambling to put regulatory mechanisms and frameworks in place, and given the disruptive and potentially dangerous nature of the technology, this sense of urgency is understandable. Before we can discuss regulation, however, a more important question needs to be answered: How does AI work?

The "black box" problem

While AI has been around for a long time, it mostly lurked in the background. The technology took centre stage with the advent of generative AI models, ChatGPT in particular, which is what really set things in motion. This in turn spawned Microsoft's Bing Chat, Google's Bard, and various other so-called "chatbots." All these generative AI systems are based on Large Language Models (LLMs), which fall under the category of Machine Learning (ML).

Figure 1: Building, Training, and Deploying ML. Source: Wong et al (2021).

ML proceeds in three steps. First, an algorithm lays out a set of procedures. Second, the algorithm learns to identify patterns by going through vast amounts of "training data." Once the algorithm has sifted through sufficient data, the trained ML model, such as ChatGPT, can finally be deployed.
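To make the three steps concrete, the sketch below shows them in Python. It is a minimal illustration only: the article names no library, so the use of scikit-learn and a toy classifier are assumptions, and real LLMs are built at vastly larger scale.

```python
# Minimal sketch of the three-step ML process: algorithm, training, deployment.
# Illustrative toy example -- not how an LLM like ChatGPT is actually built.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Step 1: the algorithm -- a set of procedures for fitting a model.
algorithm = LogisticRegression()

# Step 2: training -- the algorithm identifies patterns in training data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = algorithm.fit(X_train, y_train)

# Step 3: deployment -- the trained model makes predictions on new inputs.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```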
If the process seems familiar, it is because deep learning was essentially inspired by the theory of human intelligence. Just as part of human intelligence relies on learning by example and then extrapolating to new experiences, AI learns in the same manner. But just as we cannot recall which exact instance inspired our understanding of a specific concept, AI cannot tell us which particular piece of data or input produced a specific decision. AI therefore works essentially as a "black box": we feed in the input and get a certain output, but we cannot examine the system's code or the logic that produced the output. As a result, in many cases the precise reasons why LLMs behave the way they do, as well as the mechanisms that underpin their behaviour, are not known, even to their own creators.

LLMs are inherently expensive endeavours, requiring the processing of substantial amounts of data. This is essentially why industry has overtaken academia in creating machine learning models over the past decade. The difference is that while academia is much more open to releasing the source code for its models, the same is not true of corporate entities. The code for applications like OpenAI's ChatGPT is not public as of yet, and there is little chance that it will be in the future.

Any of the three components of an ML system can be hidden, or in a black box. The algorithm is often publicly known, which makes putting it in a black box less effective. So, to protect their intellectual property, AI developers often put the model in a black box. Another approach software developers take is to obscure the data used to train the model; in other words, to put the training data in a black box.

Consequences of the black box approach

The black box approach leads to a multitude of problems. Potential flaws in the datasets used to train AI models are obscured, which in turn leads to a lack of accountability. For example, suppose an ML model determines that a person does not qualify for a bank loan. If the algorithm being used is inside a black box, there is no way for the person to find out why they were rejected, and hence they are essentially incapable of rectifying the problem.
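The loan scenario can be sketched in code. Everything here is hypothetical (the feature names, the random data, and the `loan_decision` interface are inventions for illustration); the point is that a deployed black box exposes only a verdict, never the reasoning.

```python
# Hypothetical sketch of the loan example: the applicant sees only the
# final decision, never the model, its training data, or its reasons.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical training data: columns stand for income, credit score,
# and debt ratio; labels are 1 = approved, 0 = rejected.
X_train = rng.normal(size=(500, 3))
y_train = rng.integers(0, 2, size=500)

# The bank's model lives behind an opaque service boundary.
_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def loan_decision(applicant: list[float]) -> str:
    """The only interface the applicant sees: a bare verdict."""
    verdict = _model.predict([applicant])[0]
    return "approved" if verdict == 1 else "rejected"

print(loan_decision([0.2, -1.3, 0.8]))
# Prints "approved" or "rejected" -- with no way to ask why,
# and hence no way for the applicant to rectify anything.
```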
The black box approach also makes ML models inherently unpredictable and difficult to fix when unwanted outcomes are obtained. This can have potentially lethal consequences, particularly in the military domain. There have already been instances of this. For example, the US Air Force reportedly conducted a simulated test in which an AI-powered drone was ordered to destroy an enemy's air defence systems and ended up attacking anyone who interfered with that order.

The fundamental problem with AI regulation

Historically, the core issue with regulating emerging technologies, such as the internet, has been their unpredictability: there was simply no way of knowing how society would use them. What we did have, however, was a firm understanding of how they worked. The problem with AI is twofold. Not only is there the aforementioned uncertainty over how the technology will evolve; it is compounded by the lack of a fundamental understanding of how the technology works. With ChatGPT, for instance, society has essentially been following a black box approach, since we are oblivious to its internal workings.

The quintessential requirement for regulating any technology is a good understanding of how it actually functions. When it comes to AI, and generative AI in particular, this is a fundamental problem. The recent upheaval at OpenAI amid ethical concerns over the rapid advancement of the technology further corroborates the point. Regulation cannot be effective if the object in question is not fully understood. The uncertainty and mystery surrounding AI stem in large part from widespread ignorance of its actual workings, and this needs to be rectified as soon as possible for any future regulation to be effective.

While the EU AI Act passed earlier this year does state the need for transparency and accountability, particularly in the case of high-risk AI systems, it does not make clear who will be directly responsible for implementing these obligations, or to what extent. It is also vague in its provisions relating to training data, which could be exploited by Big Tech corporations like OpenAI.

The pre-requisite for AI regulation: Opening the black box

Until recently, ML models were used for low-stakes applications like online advertising and web searches, and their inner workings were of little consequence. With the recent boom in generative AI, however, they have pervaded almost every aspect of our lives, making it vital to open the hood and look inside the black box.

Interpretable models offer a more transparent and possibly more ethical alternative to black box models. These are also known as "glass box" models. An AI glass box is a system whose algorithm, training data, and model are all available for anyone to see. Furthermore, the field of "explainable AI" (XAI) is working to develop algorithms which, though not necessarily glass box, can at least be better understood by humans. XAI techniques like LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (Shapley Additive Explanations) are among the tools being used to enhance the interpretability of AI systems.
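As a rough sketch of how such tools are applied in practice, the snippet below uses the open-source `shap` package to attribute an individual prediction to its input features. The model and data are illustrative stand-ins, not anything from the article.

```python
# Rough sketch of post-hoc explanation with SHAP. The model and data
# are illustrative stand-ins; SHAP itself is the `shap` package.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP assigns each input feature a contribution to a single prediction,
# turning a bare verdict into an itemised explanation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain the first sample
print(shap_values)  # per-feature contributions toward each class
```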

There is a widespread belief that the most accurate ML and deep learning models must be inherently opaque and complicated. This assumption has been proven false on several occasions, such as at the Explainable Machine Learning Challenge held in 2018, and interpretable, glass box models have been shown to be as effective as black box models in several different cases. And while the EU AI Act is a step in the right direction, and regulatory scrutiny of Big Tech companies has been increasing, more effort is required. It is difficult and highly impractical for any AI regulation to be effective if the technology in question is hidden behind black boxes, be it the training data or the algorithm itself.

About the author: Prateek Tripathi is a Research Assistant at the Observer Research Foundation.

Source: This article was published by Observer Research Foundation.