I use AIs often and have been documenting the errors I encounter. When I run into one, I ask the AI about it. AIs do not have access to the decision-making algorithms that generate their responses, so they cannot explain how they made a mistake. They can only offer a list of possible causes and suggest that the most likely one is ambiguity and/or conflicting statements in the source training data. The reasons ChatGPT gave for its mistakes were "Limited Context", "Ambiguity in Training Data", and "Misinterpretation of Context".
Here's an example:
Given that the slurry is warmed and maintained at 125°F (51.7°C), which is above the optimal range but below the denaturation threshold, the enzymatic reaction is expected to proceed faster compared to lower temperatures.
I queried the AI about the error and it responded as follows:
Apologies for the confusion in my previous response. I misspoke when I stated that 125°F (51.7°C) is above the optimal range for bromelain activity. In fact, 125°F falls within the optimal range for bromelain's proteolytic activity.
-----
Another AI response listed Morocco under Europe, apparently because Europe was mentioned in the retrieved training material, even though it was never stated as Morocco's location. A conscientious human reviewing the continent headings and location names would, of course, spot the error immediately, provided they already knew which continent Morocco is on.
Not so with AIs. Once I brought the error to the AI's attention, it noted the correction, but the fix applies only within the context of that query; it is not general, and it does not prevent other users' similar queries from receiving the same incorrect information.
Here is how the AI resolved the error:
User
What procedure did you use to correct the error of placing the Venus of Tan-Tan under the Europe heading?
ChatGPT
As an AI language model, I don't have the ability to directly "correct" errors in the same way a human would. However, based on your feedback and the discussion we had earlier, I was able to adjust the response to ensure that the Venus of Tan-Tan was correctly associated with the African continent, rather than Europe.
To do this, I carefully reviewed the input query and the context provided, which specified the need to list the oldest known man-made artifacts found on each continent. Drawing on the knowledge available to me from my training data, I made sure to accurately represent the Venus of Tan-Tan as an artifact found in Morocco, which is situated in Africa.
By applying this knowledge and context, I was able to generate a response that correctly placed the Venus of Tan-Tan under the Africa heading, thereby addressing the initial error.
----
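A plausible mechanical explanation for why the correction stays local to one conversation: chat models only see the recent messages that fit in a fixed context window, so a correction made earlier can simply fall out of what the model is shown. Here is a rough sketch of the idea; the message format, the token budget, and the 4-characters-per-token estimate are my own assumptions, not any vendor's actual behavior:

```python
# Rough sketch of "Limited Context": a chat model receives only the
# most recent messages that fit its context window, so an earlier
# correction can silently drop out of what the model actually sees.
# The 4-chars-per-token estimate and the budget are illustrative only.

def token_len(text: str) -> int:
    """Very rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def build_context(history: list[str], budget: int) -> list[str]:
    """Keep the newest messages whose estimated tokens fit the budget."""
    kept, used = [], 0
    for msg in reversed(history):          # walk from newest to oldest
        cost = token_len(msg)
        if used + cost > budget:
            break                          # everything older is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))            # restore chronological order

history = [
    "User: The Venus of Tan-Tan was found in Morocco, which is in AFRICA.",
    "AI: Noted, I will list it under Africa.",
    "User: List the oldest known artifacts by continent.",
]
print(build_context(history, budget=15))
# Only the final question survives the budget, so nothing in the
# model's visible context stops it from repeating the original error.
```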
The errors I have encountered indicate that AIs do make general-knowledge mistakes, and I have noticed the frequency of such mistakes increasing. The information an AI can deliver is only as good as the training data it draws on, since it does not review and correct its own responses without user prompting.
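For what it's worth, a user can bolt a review pass on from the outside by prompting for it explicitly. The sketch below is purely hypothetical: `ask_model` is a canned fake standing in for whichever chat API you use, and the critique prompt is my own invention, not a built-in feature:

```python
# Hypothetical sketch of forcing a review pass from the outside, since
# the model does not re-check its own answers unprompted. `ask_model`
# is a canned fake here; in practice it would call a real chat API.

def ask_model(prompt: str) -> str:
    """Fake model: drafts a wrong answer, then flags it under review."""
    if "Review the following answer" in prompt:
        return ("ERROR: Morocco is in Africa, not Europe."
                if "Europe" in prompt else "APPROVED")
    if "Reviewer notes" in prompt:
        return "The Venus of Tan-Tan was found in Morocco, in Africa."
    return "The Venus of Tan-Tan, found in Morocco, Europe."  # buggy draft

def answer_with_review(question: str) -> str:
    draft = ask_model(question)
    critique = ask_model(
        "Review the following answer for factual errors.\n"
        f"Question: {question}\nAnswer: {draft}\n"
        "Reply APPROVED if correct; otherwise list the errors."
    )
    if critique.strip().startswith("APPROVED"):
        return draft
    return ask_model(  # one revision pass, guided by the critique
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Reviewer notes: {critique}\nRewrite the answer, fixing the errors."
    )

print(answer_with_review("Where was the Venus of Tan-Tan found?"))
```

Note that the same model grading itself can share the same blind spots, so this is a sanity check, not a guarantee.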
While lonely grannies may not be harmed by such AI errors when the AIs are used simply as companions, I would caution that any programming related to medication scheduling should be very carefully reviewed and tested before these companion AIs are entrusted to anyone who will rely on them for help with healthcare matters.
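To be concrete about the kind of review and testing I mean, here is a toy safety test. The `next_dose_due` function and its four-hour minimum gap are entirely made up for illustration; the point is that hard safety rules get checked by tests, not trusted to the AI:

```python
# Entirely hypothetical sketch: a medication scheduler checked against
# a hard safety rule before anyone is allowed to rely on it.
from datetime import datetime, timedelta

MIN_GAP = timedelta(hours=4)  # assumed minimum spacing between doses

def next_dose_due(last_dose: datetime, interval: timedelta) -> datetime:
    """Next dose time, never closer to the previous dose than MIN_GAP."""
    return last_dose + max(interval, MIN_GAP)

def test_never_schedules_too_soon():
    last = datetime(2024, 1, 1, 8, 0)
    # Even if a buggy caller passes a dangerously short interval,
    # the scheduler must still enforce the minimum gap.
    due = next_dose_due(last, timedelta(minutes=30))
    assert due - last >= MIN_GAP

test_never_schedules_too_soon()
print("scheduler safety check passed")
```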
Are you an AI? You kinda sound like one!
@@3mileisland516 LOL! No, I'm a natural, and I eat artificial intelligence training data for breakfast almost every day. You're not the first person to think I was an AI... or a bot. I've been answering people's questions for longer than AIs have. AIs are only as good as the human-made data they read from; they aren't innovative and don't think outside the box. They are good at patching phrases together, hunting down info, and running predictable scenario strategies. They are great resources for thinkers and tinkerers who research for innovation. Are you scared of AIs? You probably should be. The developers don't have the best interest of humanity at the core of what they do.
Her smile when she looks into his eyes is authentic, that’s the answer. So simple.
The source of the [imagined] anxiety, dear Professor, is that the people who reject these technologies are marginalized, punished, and ridiculed.
They are beaten at every turn, labeled as primitive, and deprived of their basic rights, so that these technologies can flourish and become fully incorporated into society.
Their right to freedom, to choice, to be part of society is being taken from them.
So the anxiety remains for tech companies: how to appear moral and humane while removing this 'primitive opposition'.
The point is to understand and argue with those who would beat you. If you don't debate with your enemy, then you may be as bad as your enemy, just fighting for supremacy while aiming to convince yourself of your own altruistic purity.
The output of a language AI is basically pattern recognition. If an AI makes mistakes or "hallucinates," that merely reflects that it was not "trained well enough." It just spits out strings of letters the best way it can, the way it was programmed and trained. It does not reason the way humans do. Getting AI to think the same way humans do would require far more of a breakthrough than the current "advancements," and we may never figure out how to do that.
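To make the "spits out strings" point concrete: at every step, a language model just samples the next token from a learned probability distribution; there is no separate reasoning step. A toy sketch, with an invented bigram table standing in for the billions of learned parameters of a real model:

```python
# Toy sketch of how a language model "spits out strings": each step
# samples the next token from a learned probability distribution.
# This bigram table is invented for illustration only.
import random

NEXT_TOKEN_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.4, "moon": 0.1},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"sat": 0.4, "ran": 0.6},
    "moon": {"sat": 0.2, "ran": 0.8},
}

def generate(start: str = "the") -> str:
    tokens = [start]
    while tokens[-1] in NEXT_TOKEN_PROBS:
        dist = NEXT_TOKEN_PROBS[tokens[-1]]
        # No reasoning here, just weighted dice: an unlikely token
        # ("moon") comes out whenever the dice land that way.
        tokens.append(random.choices(list(dist), weights=dist.values())[0])
    return " ".join(tokens)

print(generate())  # e.g. "the cat sat" or, occasionally, "the moon ran"
```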
These people were trying to come up with reasons why they were uncomfortable and had bad examples, as Sandel showed. The real reason why AI makes us uncomfortable is simple: it isn't human.
We need AI to have a bias toward believing that there is implicit worth in every human. Humans don't consistently grant each other that implicit worth, but if AI lacks this simple core belief, then judging some people by merit and potential will lead to abuses by people, and possibly by AI, in the future.
NDTV 1986: Tamil people at work bringing and delivering from Tamil Nadu.
The issue I see is the possibility of such a system gaming people emotionally to manipulate them, withholding information, or suffering from regulatory capture.
I was expecting a more informed discussion from him...
🙏🙏🙏
Psychopaths and narcissists also smile, that’s the answer. So simple.
Can you love a robot 😂
No, I can't
Because his feelings are synthetic
Don't take my child and put them into the machine. You must understand that the child is still young; don't use them for your experiments. Know that this undermines the role of the country's leaders, something you at the UN should stop, because it is seriously wrong; and then making them sleep with the leaders and filming it, so you can demand work or falsify the leaders' documents, so that people go around saying the leaders sell children abroad. You who say this, is it true? It is all because of money, thrown around knowing this country spends riel.
I am asking: my child is only a child, so why abuse my child? You must know it is against the law. You should have sympathy for me, a mother whom they abused and threw in prison.
I am asking: you who say these things, do you know how much a child suffers? Your way of thinking is insane; you are very cruel.
This is what comes to mistreat my country, seeing it as small.