
Navigating Accountability and Bias in Gen AI | The Risk Management Research

  • Published 18 Aug 2024
  • This video explains the risks of generative AI that arise from a lack of regulation. The critical risks are bias, accountability, job displacement, loss of human creativity, and geopolitical risk.
    Excessive use of generative AI may erode human creativity, as people come to depend on its output without verifying it. AI models are trained on extensive volumes of data and may introduce biases into their responses. Because generative AI operates by reproducing patterns without human intervention, accountability for its actions is unclear: who owns the responsibility? AI also automates work, so there is a risk of job displacement, especially in repetitive roles such as customer service; the resulting rise in unemployment is a challenge for governments. Finally, countries and organizations that dominate AI development can obtain significant economic, political, and strategic advantages. The Risk Management Research | Sonjai Kumar
    We know that ChatGPT uses deep-learning and machine-learning algorithms to generate human-like text. The first version of GPT used 117 million parameters and unsupervised learning. Mathematical models differ in how many parameters they use; the simplest is the linear model, y = mx + c, which has only two. A model might instead have three or four parameters, but here we are talking about 117 million, which is phenomenal. The second version used 1.5 billion parameters, and the third version used 175 billion. Fitting that many parameters lets the model generate output that is almost like real data, and that is one reason ChatGPT, and artificial intelligence generally, has become so popular: it nearly replicates real-life data.

    However, alongside its many benefits, generative AI also raises challenges around bias, because those parameters are fitted to training data. Any mathematical model, whether linear, second-degree, or third-degree, uses data and then makes projections; that is what a mathematical model does. The model therefore depends on the data used to build it, and that data can introduce bias, a lack of accountability, an adverse impact on human creativity, and geopolitical risk, despite all the buzz generative AI has created. In short, generative AI raises ethical concerns: bias, accountability, economic disparities, and political misuse.

    AI models use extensive volumes of data for training and may introduce biases into their responses, because the training data are taken from the internet, and a particular model may have been trained on a particular data set that the user knows nothing about. The user may then rely on the output, assuming it is unbiased, and that is where the risk arises: what data were used for training? This depends on the trainer, who builds the model for a particular purpose, so the result could be biased. We are not saying that every model will be biased; the point is that a model could be. There is also a difference between a model that is deliberately kept biased and one that becomes biased by chance, and that distinction is very difficult to identify. When this happens, the output may be suitable only for a particular purpose, not for all purposes, yet the user may not know this and apply the result in every situation. That is dangerous, because a user sitting elsewhere in the world may rely on the model assuming it was not trained on biased data. This is true in any modelling exercise, absolutely true.

    Now the question is: what regulations have we created for this? Have governments created any regulations for the training of such models? A decision based on a biased model can go totally wrong, with financial consequences, so regulation is required to manage such risks. Some kind of regulation is needed; for example, under Solvency II in insurance, the European regulator checks the solvency model.
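The contrast drawn above between the two-parameter linear model and the GPT parameter counts can be made concrete with a short sketch. The data values below are invented for illustration; only the GPT parameter counts come from the text.

```python
import numpy as np

# The simplest model mentioned above: y = m*x + c, with just two
# trainable parameters (m and c). The data points are made up.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0  # synthetic data generated with m = 2, c = 1

# A least-squares fit recovers the two parameters from the data.
m, c = np.polyfit(x, y, deg=1)
print(f"fitted m = {m:.2f}, c = {c:.2f}")

# By contrast, the GPT versions cited above fit millions to billions
# of parameters to their training data, in the same spirit.
gpt_params = {
    "GPT-1": 117_000_000,
    "GPT-2": 1_500_000_000,
    "GPT-3": 175_000_000_000,
}
for name, n in gpt_params.items():
    print(f"{name}: {n:,} parameters")
```

Whatever the scale, the principle is the same: the parameters are whatever the training data make them, which is why the data matter so much.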
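The "biased by chance" scenario described above, where a model is trained on a data set the end user never sees, can be sketched with invented numbers: a model fitted on a narrow, unrepresentative slice of data looks acceptable in-sample yet gives badly skewed answers elsewhere.

```python
import numpy as np

# Hypothetical illustration (all numbers invented): the true
# relationship is curved, y = x**2, over the range 0..10.
x_all = np.linspace(0.0, 10.0, 101)
y_all = x_all ** 2

# Suppose the trainer only had data from the narrow range x <= 2,
# a fact the downstream user never learns.
mask = x_all <= 2.0
m, c = np.polyfit(x_all[mask], y_all[mask], deg=1)

# Inside the training range the straight line looks acceptable ...
pred_at_2 = m * 2.0 + c
print(f"prediction at x=2:  {pred_at_2:.1f} vs truth 4.0")

# ... but far outside it the answer is badly skewed.
pred_at_10 = m * 10.0 + c
print(f"prediction at x=10: {pred_at_10:.1f} vs truth {y_all[-1]:.1f}")
```

The fitted line is fine for the purpose it was trained on and wrong everywhere else, which is exactly the risk when the user applies the result in all situations.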
