“There’s nothing to be afraid of in beauty.” Wise words!
We liked that too. Thanks for watching and commenting!
Beautiful! Human! Loving it!
Wonderful as always 🧡
Thank you! 😊
When does the ghost in the machine become your better half?
Sounded like a hodgepodge of different poems and prose. You could even recognize where bits and pieces of phrases were plagiarized from. Ridiculous. Is Microsoft so desperate for venture capital that they have to drag a 100+ year old poet into the mud with them?
Well, 'AI' is basically smoke, mirrors, and theft, so no surprise there.
Haha, we'd like to see some of that venture capital $$$ :)
Understand your view. Thanks for watching!
Your ignorance is showing. 😳🙄🤮 That would be fine, normally, except that you're clearly the type of person who, when you don't understand something, simply runs with your ignorance, proclaiming it from the mountaintops as if it were true. Well, it isn't, but you don't really care. Just know that these large language models don't "plagiarize" or siphon words directly from the text they were presented with during training, as you either mistakenly claim or scummily, knowingly misrepresent.

They are given vast amounts of text and other material to train on so that they can learn from it, like you and I do as humans: how words and concepts relate to one another from a semantic and heuristic standpoint. A model understands how closely and frequently the word "dog" typically relates to "cat" in a sentence, whereas it also understands how distantly and infrequently the word "nanotube" would relate to "cat" in a sentence. It uses its understanding of these semantic spatial relationships to come up with its own poetry or prose or what have you, based only upon that understanding. This is why some large language models seem artistically and even logically stunted to the point of hilarity, whereas models trained on larger and larger data sets, and thereby having greater depth of understanding, can eerily mimic or surpass human ability.

But like I said: you don't actually care about any of this or how it works. You simply know you don't like it, that you're offended by it, and that you likely couldn't understand it even if you wanted to, and there are other people similarly upset about it on a convenient internet bandwagon. So instead of using your brain and demonstrating a moral compass by being honest with yourself and others, you've chosen to hop on that bandwagon. You're quite content there.
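The "dog"/"cat" versus "nanotube" intuition above can be sketched with toy word vectors and cosine similarity. The numbers here are made up purely for illustration; real embedding models learn vectors with hundreds or thousands of dimensions from training data.

```python
import math

# Hypothetical 3-dimensional "embeddings" (invented values, illustration only).
vectors = {
    "dog":      [0.90, 0.80, 0.10],
    "cat":      [0.85, 0.75, 0.15],
    "nanotube": [0.05, 0.10, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

dog_cat = cosine_similarity(vectors["dog"], vectors["cat"])
dog_nanotube = cosine_similarity(vectors["dog"], vectors["nanotube"])

print(f"dog vs cat:      {dog_cat:.3f}")       # high: semantically close
print(f"dog vs nanotube: {dog_nanotube:.3f}")  # low: semantically distant
```

A model generating text draws on relationships like these, rather than copying stored sentences, which is the point the comment above is making.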
It's quite sad, actually, and I wouldn't wish that state of mind (willful, herd-mentality ignorance) on my worst enemy.
be afraid, be very afraid