I had my doubts that AI could expand the Venn diagram of what we know in physics, so I had a chat with Claude about the recently discovered electrical plasma "shells" (boundaries) in the SAFIRE experiment and asked it to generate a hypothesis to explain the position of these shells, even before we know what causes them. I am now reassured that it is capable of more than sycophantic responses, although I was unable to cure it of Carl Sagan's unfortunate homily that "extraordinary claims require extraordinary evidence," a pet peeve of mine. "Extraordinary" is never defined, so we are really just left with "claims require evidence." Somebody *please* fix it.
Does the set of knowledge base documents I provide my GPT count against my context window when I perform queries? Or is it treated as a de facto part of the model itself?
Can you comment on the rumors that the Claude 3.5 Opus training failed?
Yes, do that please.
Thank you 🦋