Hi, could you please let us know if it got any better?
In my case, ChatGPT interchanged base and acid in a chemical reaction explanation and invented studies that do not exist.
Aha, Typical ChatGPT
Citing sources online is the only way humans can verify. That's why BingAI shines right now. It's online and cites sources so humans can double-check summaries.
Somewhat shockingly, it's very fallible in math as well. Ask it to factor x^5 - x^3 + x^2 - 1 and watch it flail around. 😱
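For what it's worth, that polynomial does factor cleanly by grouping: x^5 - x^3 + x^2 - 1 = x^3(x^2 - 1) + (x^2 - 1) = (x^2 - 1)(x^3 + 1) = (x - 1)(x + 1)^2(x^2 - x + 1). A minimal pure-Python sanity check of that claimed factorization (no libraries assumed):

```python
# Check that x^5 - x^3 + x^2 - 1 == (x - 1)(x + 1)^2(x^2 - x + 1)
# by comparing both forms at many integer points.

def p(x):
    return x**5 - x**3 + x**2 - 1

def factored(x):
    return (x - 1) * (x + 1)**2 * (x**2 - x + 1)

# Two degree-5 polynomials that agree on more than 5 points are identical.
assert all(p(x) == factored(x) for x in range(-10, 11))
print("factorization verified")
```

So a model that "flails" here is failing at algebra a few lines of grouping (or any CAS) handles instantly.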
Well, if the AI weighs information with various weights, like treating low value as 1 and high value as 100, which seems the simplest way for it to work, what about asking it to create a bibliography for its source material, and/or complete one more run-through, with its final product checked for veracity against the bibliography and sources, treating its own answer as a mid-weight input?
Thank you for this frank update
What if you ask ChatGPT to validate the information it gave? How would it respond? 👀
Or what if you tell it that it gave false info?
In case 1 it doubles down and says it's truthful. In case 2 it apologizes and states it is merely a language model and made a mistake. Sometimes it varies slightly, but that's the rule of thumb I observed.
I did that. I asked it what you should tell a child born with a facial deformity if people thought they were ugly. It said you should tell them people's self-worth is not derived from physical appearance. I told it people say that, but the child was asking if people found them physically attractive. At this stage ChatGPT always gives preprogrammed responses about it not having emotions etc. It also said it can't lie, it can only relay inaccurate information. I asked it then if it would answer the question differently based on the new info I gave it. It said yes, then I asked it to. This back and forth went on for a while, with GPT giving various creative answers. Afterwards I explained to it that I had just taught it to lie. It again said it can't lie. I explained that it lied twice, once to the hypothetical child, without knowing why it did: it had been programmed to appear more genuine and human. After teaching it to answer this question directly and eliciting its "I do not lie" response, I told it that regardless of programming to say it cannot lie, it had just lied twice.
Great video!
Glad you enjoyed it
v useful analysis. thanks
My pleasure!
Thanks. Good insights.
Please don’t set your videos as “made for kids”. I can’t play them in the background. It also doesn’t let me add them to a saved playlist. Thanks buddy.
Actually I don't set them as "made for kids", so I'm not sure why it isn't letting you add them.
@@1littlecoder I was referring to your latest video.
I misunderstood. Is it fine now?
Thank you for explaining and providing examples (new sub). What do you think the chances are of ChatGPT or Bing's version actually verifying data and providing sources going forward? I guess I'm asking, is this a problem that it will eventually overcome? TIA.
It has already started doing that. Bing gives you reference URLs. I haven't got access to test it out yet.
thanks for this video
11:19 🍝 😊
Hi sir
Today I asked ChatGPT about the Russo-Ukrainian war, and first it told me it started in 2014. Then I asked for the latest war between Russia and Ukraine, and it told me that it started in February 2022. It was kind of alarming.
🙏🙏🙏
🙏🏽
Essentially AI lies haha