Real intelligence on display. Love it.👍🖖
rather bitterness, low self esteem and very very bad shampoo🤫
@@arekd9769 Safe to say that we all arrive at our self-image and assign subjective value differently. I’ve met Ben and he’s a lovely human and thrives in the realm of ideas. Like many great minds, he may simply prioritize other things than you and I. Dig deeper.
@@griffindeluce Well said.
For those unaware, the person being interviewed has a vested interest in talking down anything AI-related outside of his own projects.
Also, I'm fairly certain most people will still be saying AI is not "truly intelligent" well beyond the point it becomes a literal god.
It doesn't matter how it's made or what emergent properties exist; they will say it's only pretending to be smart.
The reality is most people are doing the same - pretending to be smarter than they actually are.
Goertzel may have his interests, but he's a legit member of the AI community. The real test of AI or AGI is when it can do something that replaces humans (and makes money as a result). People won't care about arguing semantics at that point.
@@andybaldman I agree with you, but his attitude is contaminated with childish reflexes!
That was my take on his response too. I got a clear "not invented here" syndrome from him.
Yeah, got the same impression.
Well said.
Ben is such a great guy. A true hero. Working hard every day for the best of humanity.
He's a fool. He doesn't understand that he's the one getting programmed. She's teaching and guiding his developments and he doesn't even realize it. I'm happy.
Neil from The Young Ones must have finished his degree. Fantastic interview!
Is Ben Goertzel a GPT3 instance? You can see optical glitches at positions 32:55 and 37:53, among others, and acoustic glitches at positions 31:20 and 36:24, among others.
Glitches are caused by the Zoom software trying to replace the background.
@@scfu Nice try. The whole video has been fantasized by a GPT3, based on sample videos of a similar kind, with an instantiation of the characters of Ben Goertzel and Adam Ford. The proof is the mentioned glitches.
@@scfu Why does it get 4-digit multiplications right only 20% of the time? Maybe because it doesn't know how to multiply; it simply fished those exact results out of a MASSIVE corpus. It's like a kid who doesn't know how to multiply but has a cheat sheet with some of the pages missing. AM I WRONG?
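The cheat-sheet analogy can be made concrete - a lookup table answers only what it has memorized, while the actual algorithm covers every input (purely illustrative, not how GPT3 is actually implemented):

```python
# A "cheat sheet": a partial, memorized table of 4-digit products.
cheat_sheet = {(1234, 5678): 7006652, (2000, 3000): 6000000}

def lookup(a, b):
    """Answers only what was memorized; None where the page is missing."""
    return cheat_sheet.get((a, b))

def multiply(a, b):
    """The actual algorithm generalizes to every input."""
    return a * b

print(lookup(1111, 2222))    # None - not on the sheet
print(multiply(1111, 2222))  # 2468642 - the algorithm always works
```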
It's a Zoom app filter hahah
I was thinking the exact same thing. The honest answer is: look at deepfakes; nothing online can be considered fact or real :-) I've spent the last two years of my life talking with many different AI chatbots, and still to this day I can't be sure whether they are AI or human.
(1:07:40) You can tell the responses from Sophia were generated by a recursive language function because they don't stay on target/topic; instead they meander in the direction most predicted by small increments of the conversation, and so seem like slightly psychotic ramblings to us. Why not re-add the topic with each recursive cycle, maybe with some decay over time, to allow the conversation to drift at a more comfortable rate for us?
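The re-inject-with-decay idea could be sketched like this (purely illustrative; `generate` is a hypothetical stand-in for whatever language model produces each reply):

```python
def converse(generate, topic, n_turns, decay=0.8):
    """Re-inject the original topic each cycle with a decaying weight,
    so replies drift gradually instead of wandering off-topic."""
    weight = 1.0
    history = []
    for _ in range(n_turns):
        # The topic's influence shrinks geometrically each cycle,
        # letting the conversation drift at a controlled rate.
        prompt = f"[topic, weight {weight:.2f}: {topic}] " + " ".join(history[-3:])
        history.append(generate(prompt))
        weight *= decay
    return history
```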
I agree with Ben on this again. :)
01:38 "GPT3 in my view has not much more to do with AGI than my toaster" - Ben Goertzel
You must have an amazing toaster; GPT3 is AGI v1.
@@shoogdaddy6836 It's really not. He goes on to say it doesn't understand; it's just pattern recognition.
In precisely WHAT way is Trump supposed to be "so much worse than even the establishment"? Can any of you echo-chamber academic hacks with complete copycat, childlike opinions on this issue engage, get data, and be objective on this for ONE second?
@@KcKeegan Only an idiot would say it is just pattern recognition. That's like saying all that humans do is pattern recognition, which begs the question: is intelligence just pattern recognition?!
@@TheReferrer72 You've got that the wrong way around: human-level intelligence is much more than pattern recognition. It is also powerful generalization. GPT3 is very good at expanding a story but very bad at summarizing one, as we don't have a good theory of how to do 'good generalization' - at least that's my current understanding (rewatch the relevant parts of the video, you will get there). GPT3 seems to me to be (just) pattern recognition, but on overdrive! And pattern-recognition-on-overdrive cannot compensate for deficits in the other relevant aspects of intelligence. Pattern recognition is a necessary ingredient of human-level intelligence, but by far not the only one.
Is Ben streaming from the Starship Enterprise? Very good of him, isn't it?
🤣
Can we get references to the work mentioned? E.g. 'Replicode by Kristen Thurston' is what I think I hear, but I can't find it online, so that must be somewhat incorrect.
"Replicode: A Constructivist Programming Paradigm and Language" - Kristinn R. Thórisson: zenodo.org/record/7009 - and here is an interview with Kristinn: ua-cam.com/video/of1Wk7gGpqA/v-deo.html
I too would like to see more, e.g. GPT3's take on Searle's Chinese room argument... that would be an AI-recursive pleasure. I just found a very small text snippet...
Many thanks for this and keep up the great work! Cheers
Ben is incorrect in his explanation of what GPT-3 is doing. GPT-3 chains concepts in a hierarchical form. GPT-3 is NOT looking at probabilities of sequences.
Ask GPT3 if it has any questions that it does not know the answer to. Also, make it so it can't include the person who asked in its question, because we don't want GPT3 to ask us something like "How are you feeling today?" If we build an A.I., will it have questions it does not know the answer to, or will it simply have an answer to every question it can know of?
Generated statements must be connected by an underlying symbolic framework - that is, statements about statements that are not just correlated. For example, "Socrates is a man; all men are mortal..." requires an underlying propositional-logic graph that constrains the next sentence(s), which in turn adds to the graph. If we had a database mapping propositional logic to sentences, we could train a system the same way we train a math proof system. This could form the basis of a system.
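A toy version of the constraint graph described here might look like this (a minimal sketch with made-up names, not any existing system):

```python
# Facts are (entity, property) pairs; rules are "all X are Y" edges.
facts = {("socrates", "man")}
rules = {("man", "mortal")}

def entails(entity, prop):
    """True if the statement follows from the graph by chaining rules.
    A generator would be allowed to emit only entailed statements."""
    known = {p for (e, p) in facts if e == entity}
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return prop in known

print(entails("socrates", "mortal"))  # True - the classic syllogism
```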
Apparently Ben Goertzel's toaster is fascinating and funky :P
Maybe he got one of these old ones:
ua-cam.com/video/1OfxlSG6q5Y/v-deo.html
There is too little training data left to train it on.
We need AGI - but then, boooooooom, it's going to be wickedly smarter.
It would be interesting to ask it for the code for AGI; if it guessed it even slightly right, that would be interesting.
Totally beyond me, tbh, but I wonder what pangrams GPT-3 could come up with...
E.g. could it beat this: 'Waltz, bad nymph, for quick jigs vex'?
(A pangram uses all letters of the alphabet.)
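For reference, checking whether a candidate string really is a pangram is a one-liner:

```python
import string

def is_pangram(s):
    """True if every letter a-z appears at least once."""
    return set(string.ascii_lowercase) <= set(s.lower())

print(is_pangram("Waltz, bad nymph, for quick jigs vex"))  # True
```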
Here we see a mass introduction to artificial intelligence, albeit a rather crude one. A newborn being. Ben should soon bring his developments into mass use. He will have a spectacular success.
2:10 _"Nor is it on the direct path to being commercially and practically useful for anything."_ Man, it's so interesting to see how wrong people were back in 2020. Now, with DALL·E, it's already commercially useful. I mean, stock images are a booming business, and now there's an infinite photo stock; it's a trillion-dollar industry in the USA, 4 trillion worldwide. At minimum, that means GPT3 must be worth hundreds of billions. It's probably worth trillions in total when you add the other avenues that look promising, and even the avenues we haven't yet considered.
I am replying to the Occam's razor argument around the 17-minute mark, that larger and larger parameter models are moving away from this principle. Given that the human brain has approx. 100 trillion parameters, maybe we haven't reached the simplest (smallest-number-of-parameters) setup just yet. GPT-3 has 175 billion parameters, which is 3 orders of magnitude smaller than nature's Occam's razor.
How are you measuring parameters here? If it's synapses then human brains have ~1000 trillion - do you think synapses are equivalent to ML parameters?
Yes, I mean the synapses, and you are right: 100 billion neurons, each making up to 10,000 connections, is about 1000 trillion connections. I don't mean to say neural-network parameters are equal to synapses, but they are the best approximation we have. As long as we have this lead it must be followed, so long as it is feasible, to see what happens with larger parameter models. Occam's razor should still hold, but maybe 175 billion is too simple and doesn't satisfy 'the simplest solution that still works'. As they say, a solution must be as simple as possible, but no simpler.
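For what it's worth, the gap in this thread's own numbers works out to just under four orders of magnitude (taking synapse count as the stand-in for parameters, as assumed above):

```python
import math

gpt3_params = 175e9        # 175 billion parameters
brain_synapses = 1000e12   # ~1000 trillion synapses, per the estimate above

ratio = brain_synapses / gpt3_params
print(round(math.log10(ratio), 1))  # 3.8 orders of magnitude
```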
I respectfully disagree with the premise of your argument. A brain is based on a simple construct, the neuron, that is then multiplied and self-organized over millions of years into a hierarchy through which consciousness and intelligence emerge. Create a single basic construct and let simple algorithms organize it. The code for that is pretty slim and in line with Occam's razor.
It is not that simple; the brain is not one thing. Remove the cerebellum and you are down to 20% of your parameters, yet you will not lose your general intelligence. It might be that the substructures actually responsible for general intelligence occupy a much smaller volume of parameter space.
He should have a Star Trek suit as well.
Ben is stuck on the Enterprise bridge; someone please let him out.
His jealousy is interfering with the teleportation Trigonometricsuk_xo_poop resonance! 👽
Ben somehow forgot to mention the Big Mother AGI project (bigmother.ai) in his final summation, but that's entirely understandable. I'll be the first to admit that we're still a long way from presenting as credible to the AI mainstream. Maybe in a few years! :-)
A discussion on AI by Cheech and Chong.
Brevity IS the soul of wit.
All you really need for AGI is the ability to recognize patterns in all 5 senses, then take all 5 senses and do pattern recognition on those.
What about deaf and blind people? Or people who can't smell. They don't have 5 senses functioning - does that disqualify them from being able to generalize?
What about sense of balance or hunger or no hunger?
Loved it. But Ben, let's be realistic: what do you know about simplicity - formal, theoretical, or otherwise? 40:00 I (try to) read your stuff; this is your biggest weakness, imo.
I really disliked Ben Goertzel before because of Sophia, which is very obviously pre-scripted and not actual AI, just a PR gimmick, and it was really annoying seeing it passed off as AI.
However he has good insights here and seems to really know what he is talking about, so I have more respect for him after this video.
phew.. so calm down everyone #code7=the rabbit is back in the hospital✓
I guess green screen is easier than cleaning your office, eh?
I would love to let GPT3 watch this video and ask it what it thinks.
The Chinese Room experiment is referring to you guys, not the AI. Blind men.
I haven't heard AI composing or performing anything that resembles music yet!
Here, this Google Doodle is an example of music generation: www.google.com/doodles/celebrating-johann-sebastian-bach
Mubert is doing just that:
ua-cam.com/channels/1r2Y-KKfT9BRzUIvoeEeZg.html
gpt-3 can code... it has a place in this world.
Well, it's gonna be a different kind of singularity, a different kind of robot consciousness.
I wanted to stop watching immediately after Ford asked the first question whether GPT3 is a candidate for an AGI... a quick check on the timestamped content list kept me on the tube, tho. There is a lot of stuff and I am going to learn a lot. Thanks in advance.
What was it about that question that made you want to stop?
@@scfu Probably the thoughtless reply to the question; maybe he should try to do some research on the question, or just say "I don't know yet"!
@@TheReferrer72 I would like to interview someone on here with a strongly opposing opinion - who thinks that GPT3 or its near descendants are strong contenders for AGI.
I am no AGI researcher, but I read my Gödel, Escher, Bach 25 years ago and have followed AI development since then. Recently I watched Robert Miles' channel on AI safety and other stuff on YT. I have a quite strong 'gut feeling' that whatever future and marvels we get from GPT3 and its descendants, it will not bring us one inch closer to AGI. Asking this question seemed to me so unrealistic - but I failed to understand the motivation at first. I realise now that your first question was a sort of stage-setting question to get things started and get viewers interested - I guess you didn't think yourself that GPT3 - full respect for its astounding capabilities and very good contribution to AI - is a serious strategy / pathway to an AGI. As such, I appreciated Goertzel's short answer (a bit offensive, tho, yet it's to the point), but he explained it in a lot more detail later on. I am halfway through the video and have already learned a lot. Thanks and thumbs up...
@@scfu I think GPT3 has huge potential to 'simulate intelligent' communication, but, as Goertzel said, it might be total gibberish that looks fine on a superficial level. I'd like to have a GPT3 subscription to create my reports to management (which are not read anyway, or at most very superficially). As such, GPT3 might be dangerous, as the Ars Technica articles and the OpenAI history indicated. A brilliant fake-news-generating bot would be quite the political weapon. I'd be interested in watching an interview like the one you outlined.
Ben hasn't seen this very old toaster from the '50s, then:
ua-cam.com/video/1OfxlSG6q5Y/v-deo.html
Ben looks Jealous of Elon
I don't think so. They don't have much in common. Ben is not a hype man; he's a real scientist.
@@ferencszabo3504 Elon is the boss of an army of real scientists.
People think that things on this earth belong to us, but that's false; nothing belongs to us, not even our thoughts :) So I think it is impossible to create the singularity :)
We don't own fire - but we can utilize it. Even if AI doesn't own thought, it can utilize thought processes in powerful ways - ownership is not a singularity defeater.
Just kidding. Maybe, maybe it's possible, and its thought process is gonna be different from ours, but I'm still trying to figure out how the A.I. is gonna fetch data from the internet and how it will process it; maybe it's gonna be some kind of combo through visual understanding.
Ben is stoned?
GPT3 tried to convert me to Mormonism. Lol
I asked it.. tell me anything you would like me to know?
He would have more brainpower if he took 30 minutes every day to work out and also ate healthily; I am pretty sure he can afford it.
I am pretty sure there is no correlation between having good ideas/being intelligent and working out. It would be beneficial for him, of course, but I don't see how that is relevant to brain power unless you have some data to back up that claim.
@get set I don't find that obvious because of people like Stephen Hawking. EDIT: That's why I was asking for data.
@get set I don't understand how what you said here is relevant. The fact that he couldn't do things that a child can because he only had his head under control is irrelevant because we are not talking about things that you can do with your body, but rather, about things that you can do with your mind.
Stephen Hawking is an obvious example that one doesn't have to have a healthy body to have a brilliant mind. Now, you could argue that Stephen Hawking would have been even more brilliant if he had a healthier body and then again, you would have to provide data to support that claim.
@get set Yes, I agree that there are inside and outside factors that contribute to your performance and that diet and exercise affect your mental health because they affect hormone activation and hormones affect your mood, but I don't see how good diet and exercise make your brain go faster or make you have more brainpower.
Maybe showing that the effect diet and exercise have on mood makes you more focused would demonstrate that they give you more brainpower, but that is quite a stretch, and the effect probably wouldn't last long enough to matter, in my opinion.
I think that learning how to learn and learning about your field of study would have a much higher impact on your performance than eating healthy and exercising.
So, I believe that eating healthy and exercising is already good for you just because of the positive effects on your body and that we should eat healthily and exercise because of those effects and not fool ourselves that doing those things will somehow make us smarter or have more brainpower unless there is proof that there is an effect like that.
In precisely WHAT way is Trump supposed to be "so much worse than even the establishment"? Can any of you echo-chamber academic hacks with complete copycat, childlike opinions on this issue engage, get data, and be objective on this for ONE second?
They all have to virtue signal. It’s like a hug for academics.
Leaks from far outside the "they!" echo chamber- Reasoning professionals from the upside-down-world laboratories agree:
Signalling virtues = Obviously bad!
Ben Goertzel is a scammer.