Next time I will add in the acknowledgements: 'Apart from AI, I used a Mac with trackpad, a pencil, a pencil sharpener, an eraser, a digital calculator, glasses (because, you know, age), two pens (one for thinking, one for writing), a printer and A4 paper to print and write on (obviously), two highlighting markers (to emphasize the really important stuff), a whiteboard (for those late-night brainstorming sessions that woke up the neighbors), three erasable board markers (because even geniuses make mistakes), a spell checker (thank you, technology, for saving me from embarrassment), LaTeX (the only way to make equations look sexy), at least three different coffee machines (a true researcher needs options), my wife who used at least one of these machines to brew my coffee (and kept me from caffeine overdose), at least three different programming languages (because one is never enough for world domination), a napkin for sneezing (allergies are a scholar's worst enemy), toilet paper to deal with the natural needs one has (let's keep it real, folks), LED lighting (to illuminate the path to knowledge), solar panels to generate the energy needed for the machines I used (gotta save the planet while conquering academia), some chilled water, occasionally with CO2 (hydration is key, bubbles are optional).'
Oh crap, I just realized I forgot to keep track of all the food my body needed for getting the paper done! Should I list every apple, every noodle, every coffee bean? I hope the reviewers won't reject me for that! Maybe I should propose a new mandatory course: 'Ethical Consumption in Academic Research'. Forget plagiarism, the real crime is forgetting to cite your breakfast cereal! The mandatory ethics courses will get even more boring with this stuff than they already were (if that's even possible). (Thank you Google Gemini for making this post even better ;))
😂
I wrote a paper. Then I used Grammarly to correct mistakes. Then I fed the paper to a top-of-the-line AI detector. The result was 85% AI. If that is the state of the art in recognition, I ain't disclosing anything. Regardless of disclosure, I have to take responsibility for the results and I have to pay the publishing fee. So f off to the publishers.
I write educational articles, mainly on evolution and paleontology. My goal is to write as accessibly as possible, so that even my friend's 8-year-old can follow, while still using and explaining proper terminology, theories, rules, you get the idea.
You can feed any article into 10 different detectors and you will get 10 different percentages, literally anywhere from 0% all the way to 100%.
So I did an experiment and wrote 6 paragraphs: 2 solely by me, 2 with AI correction, 2 fully AI. One detector I used marked every sentence with a likelihood; it got 6% correct.
Most of those detectors claim 99% accuracy*.
* Based on our own testing; we can't guarantee you get the same results.
It's disgustingly stupid 😂 and it would even be a bit funny, if people didn't falsely fail tests or lose jobs over it 😅
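A quick sanity check on what a "99% accurate" claim can hide: even granting a detector a 99% true-positive rate and a 1% false-positive rate (hypothetical numbers, not measurements of any real product), what a flag actually means depends on how much AI text the detector sees in practice. A minimal Bayes-rule sketch:

```python
# Hedged illustration: the TPR/FPR/prevalence values below are hypothetical,
# not measurements of any real detector.

def precision(tpr: float, fpr: float, prevalence: float) -> float:
    """P(text really is AI | detector flags it), by Bayes' rule."""
    flagged_ai = tpr * prevalence            # true positives
    flagged_human = fpr * (1 - prevalence)   # false positives
    return flagged_ai / (flagged_ai + flagged_human)

# If only 1% of submitted texts are AI-written, a "99% accurate" detector
# is wrong about half the authors it accuses:
print(round(precision(0.99, 0.01, 0.01), 2))  # 0.5
# Only when half of all texts are AI does the flag become trustworthy:
print(round(precision(0.99, 0.01, 0.50), 3))  # 0.99
```

So a headline accuracy figure says little about how often a flagged author was falsely accused; that depends on the base rate, which the asterisked fine print conveniently omits.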
I had this same issue. I got accused of using generative AI for my work, when all I actually did was use QuillBot to clean up my phrasing and grammar. Thus, like yourself, I won't be disclosing Jack 💩!
@ScienceIsFascinating Please name the winning detector.. Originality?
@shaice I actually didn't try Originality, I think, but I shall, as I'm curious xD
The most frustrating for me was QuillBot, though not by a big margin.
The most accurate was actually creating my own GPT in ChatGPT with some knowledge and experimenting around with prompts to get it decent enough, though even then I wouldn't base a decision on that. Still, I thought that was relatively surprising.
There are several court cases in the US now challenging the legitimacy of detector platforms. The 85% score doesn't surprise me. I was at a conference on AI and academic integrity, and several universities have stopped using detection. You'll see more of this.
I use AI all the time in my job with zero phux given. But I don't publish papers for free so that a publishing outlet can steal my work and charge for it.
If you use AI all the time, is it really your work? Then why be so protective?
@gronagor Are Hans Zimmer's soundtracks really his work, although he has played no instrument on them? Of course they are his work, because he is the mastermind behind all the players of the instruments! In the same way, AxionSmurf's work is still his (assuming he's not being lazy and using AI for the mastermind part as well).
@gronagor I didn't get the feeling that he was being protective of his work in that way; I got the feeling he was protecting it so that the publishing companies cannot be predatory. It's possible that he's publishing in open-access journals?
I'm a security analyst. I use AI to resolve command structure differences, BIOS regions, things like that. PhDs usually publish in, say, Nature and get nothing for it except recognition, while the outlets charge $75 to view the paper for 24 hours. Publishing is one of the biggest corporate slave-labor outlets in the world. I only have a master's, not a PhD, and I refused a scholarship to become a PhD because I don't want one. @robertlee8519
I actually clapped at the end! Thanks for turning all that lengthy 'grandmother' information into bite-sized videos. I feel better now knowing it's fine to use AI to proofread my papers!
Well done Andy!!!!! 🎉
The publishing houses want to keep up the appearance that peer review works perfectly well, that reviewers have no conflicts of interest, and that regular peer review is better than AI reviewing. I give that rule 3 months before it is considered idiotic, and the publishing houses will still insist that it is not.
Very simple rule: use AI anywhere you could use personal assistance. At exams, no. In peer review, not really (as you don't usually ask others for help), but maybe for grammar. In other cases, wherever your mom/peer/pet could help, go on!
Amazingly helpful video and love the humour. Keep up the great work!
U make such academic videos so fun to watch.
Very crucial information shared here; thanks for such content.
Thank you for summarising that but also showing the quotes, really useful.
Such policies are restrictive and don't improve scientific research. AI is going to be everywhere in academia. Let's embrace it.
Exactly. AI is research-based, just like we moved from manual searches to e-searches. Let's stop pretending; let's embrace AI, and soon we will 😂
The FIRST one is the most important. Haha. Great man.
I am certain that someone else has pointed this out, but they are probably requiring this because they are building their own models, and they can create workarounds or filters for the papers that have used AI, to prevent the problem of AIs training on other AI-generated content.
Not sure I understand, can you say a bit more?
@tar-yy3ub When you train an AI model on written material, it gets smarter. If you train an AI model on written material that was itself generated by AI, the results stop getting better, and in many cases it doesn't just stagnate, it gets particularly bad. Sometimes the output is nonsensical or repetitive.
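The degradation described above (often called "model collapse") can be sketched with a toy experiment: repeatedly refit an empirical distribution to samples drawn from the previous generation's fit. Once a rare category fails to be sampled, its probability hits zero and it can never return, so diversity only shrinks. This is a deliberately simplified stand-in for LLM-on-LLM training, not a faithful simulation:

```python
import random
from collections import Counter

random.seed(0)

def next_generation(probs, n_samples=30):
    """Refit an empirical distribution to samples from the current one."""
    cats = list(probs)
    weights = [probs[c] for c in cats]
    counts = Counter(random.choices(cats, weights=weights, k=n_samples))
    return {c: counts[c] / n_samples for c in counts}

# Generation 0: 50 equally likely "topics" the model can write about.
dist = {i: 1 / 50 for i in range(50)}
support = [len(dist)]
for _ in range(20):
    dist = next_generation(dist)
    support.append(len(dist))

# The number of surviving topics can only go down, never up.
print(support[0], "->", support[-1])
```

Since each generation samples only from the previous generation's support, the topic count is monotonically non-increasing, which is one plausible mechanism behind publishers' worry about AI-generated papers entering future training data.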
1) Great video, I am going to cite it in an AI podcast I am doing.
2) I love your shirt. Do you have an affiliate link to get one of those?
He said once that he made his own shirts
Thanks for your help. Yes, we all were clapping at home. 😎
Dear Andrew, I recently used STORM AI, which was developed at Stanford University. Try to do a YouTube video based on it.
These rules will definitely have their boundaries pushed.
Excellent !
Greetings from Ecuador
Thank you Andy! And what about the use of AI for search purposes? For example, Elicit for literature search. Is that allowed according to these publishers' rules? Thank you very much for all your videos!
No one could ever prove that
Andy,
I want to restart a PhD and use AI on a restricted set of legal Acts and regulations to run an independent verification of the study.
I postulate it will tease out the gaps.
Thoughts? Does that meet the RULES?
Cheers.
Hey Andy... Thank you for the update! Just wondering what your thoughts are on tools like Rayyan etc. that have AI integrated but are being used for better UX in lit reviews, as opposed to using clunky Excel. Would you say we would still need to cite that, even though it's not explicitly being used to generate content?
I'd say that it is a tool to support your research, not to write up or analyse your data. So I wouldn't mention it. But if in doubt, speak to the editor of the journal that you are submitting the paper to.
@@DrAndyStapleton Thanks mate! Appreciate it!
If I use AI to create code to analyze my data, can't I publish?
I use AI to improve my writing. I'm not a native English speaker and I really like my manuscripts to be written in polished English; is that allowed? Also, I use it to generate ideas on how to write introductions when I provide the main ideas it should be based on. What about that?
I have the same question. I'm also a non-native English speaker, and I also use AI to polish my manuscript writing.
I'm not sure what you did not understand from the video. It was clear that it is allowed, but you have to give a disclaimer that you used it for that purpose.
3:00
@gronagor I wouldn't blindly give such a disclaimer. We never give a disclaimer if our ideas come from our summer trainees, master's thesis workers, or other team members in an advisory role. If GPT is strictly an advisor, then there is never a problem. If it were a problem, then the journal has a double-standards problem.
Second, universities already offer language services, where language experts can fix your manuscript's grammar. Still, we are unlikely to give them credit even if they fixed 80% of our language. The same double-standard issue applies here: if we don't give humans credit, why must we give AI credit?
I'm still a bit confused. I am a non-native English speaker. If I submit my paper and ask ChatGPT to rewrite it in an academic tone, and ChatGPT provides results with 1) different sentence structures and 2) vocabulary but retains the same meaning as my original work, is that allowed?
Thank you for sharing the information. But it would be good if you could give us some examples that are used in published papers.
Can AI like ChatGPT find a research gap? If yes, can we trust the research gap it finds?
Editing versus formulating the language is a fine line. Where is the line? If I upload each section of my original paper and ask it to clean up the language, does that need to be disclosed?
Yes, I would think so.
You could AI review your paper before it is sent for peer review.
It is interesting that publishers didn't require authors to disclose use of AFFILIATED editing services...
👏 I clapped 👏
I keep reposting my comment and it keeps getting deleted, whyyy??
Some tools are extremely useful. How about tools like Grammarly?
... and that is why it is allowed if you write it in the disclaimer.
3:54 don’t worry about it
"Rules? Where we're going, we don't need rules." ~ Back to the Future
All my good ideas that I put in my papers come from AI.
Did you say there's something wrong with Nature? What?
AI writes the paper, AI reads the paper
list issues in this topic
list possible solutions
go try them
AI writes the paper, AI reads t...
_science_
The "science" at the end sounds sarcastic, but it was made possible by science - we're now able to glance at the end of a loop.
I have written a research paper and I must publish it to graduate from my master's degree. Can someone please help me publish it? I got rejected and don't know what to do. My supervisors just don't help much… If I didn't really need help so badly, I wouldn't post my request here, so if someone can help me, please let me know. Thanks in advance 🙏🏻
What's your field of research? What's your paper about? What issues are you having/what is holding you back?
You need to publish a paper to graduate from a master's? WTF is that? 😂😂😂 Where do you study? Really strange.
@shaneuniverse4965 I'm doing artificial intelligence and machine learning; my paper is about early detection of Alzheimer's disease. I have written the paper, but what's holding me back is submitting it to a journal. I got rejected once and I don't know what to do.
@tiotsopkamouolivier3031 One of the requirements to graduate from the master's degree is to publish a paper. I study in Malaysia.
👏🏻👏🏻👏🏻👏🏻👏🏻
So it's okay to use a human research assistant but it's not okay to use an AI research assistant. It's okay to discuss your work with colleagues, but not with an AI like NotebookLM.
Rules are meant to be broken 😏
How stupid and irrational is this policy? All these years you have been using Google Search, which has been using AI-enhanced algorithms 😂 Can you discuss this, please?
Wow, I never looked at it that way... You know I'm going to bring that up next time these ancient profs lecture me about ethics.
Exactly!
If both a language model and a search engine function similarly (retrieving and processing information) but differ in their interaction (one active and one passive), then it makes sense that the active one, the language model, can be used for fast processing of large volumes of material. Since a language model generates text based on patterns and information from its training data, we can leverage it to quickly synthesise, summarise, or structure information.

The key is that while the model facilitates faster access to organised insights, the critical interpretation and original conclusions still come from the researcher. The researcher interprets and integrates these findings into the paper, ensuring the analysis reflects their unique understanding and perspective, grounded in the sourced material. The AI, or the "algorithmic language model", can be used to describe the findings, since it cannot create independent and original thoughts. However, the researcher still interprets the output based on their expertise, experiences, background, etc.

Then why do the journals care whether the "algorithmic language model" is used during this process, if the end result actually contributes new and original knowledge to the field? It makes no sense at all. It is like having a saw but cutting a tree by hand instead. This reminds me of elementary school, when everybody had a calculator but the teacher didn't allow them to use it during the tests.
@gltrjp I just don't understand who these fools are, enforcing backward policies on researchers at the forefront. It's completely illogical, and it has to be strongly and fiercely rebutted by all literate minds!
Witchcraft to stop progress of AI.
Spot on! We humans (when, e.g., afraid of the unknown) easily make rules that are based on double standards…
🕊
List the friends you bounced ideas off of and their net worth.
I want to collab and write reviews. I have all the paid AI tools. My research field is agriculture, extension, and sociology.
I would like to collab... my field is agriculture.
@uvineskithsenadheera7050 Can you send me your ResearchGate profile?
👏 👏 👏