Chapters:
0:00 Introduction
0:41 Scientific discourse analysis via Google Scholar
1:11 InfraNodus algorithm explanation
2:26 Analyzing a topic using a text network
2:40 💡Hint #1: Remove the search terms to see the surrounding context
3:27 💡Hint #2: Reveal interesting patterns and concepts in a text graph
4:20 💡Hint #3: Writing down project notes (coding)
4:54 💡Hint #4: Look at the periphery of ideas for interesting concepts
5:39 💡Hint #5: Reveal high-level ideas using GPT-4 AI
6:44 💡Hint #6: Focus on high-level ideas at the periphery to avoid generic insights
7:36 💡Hint #7: Ask GPT-4 AI a freeform question to clarify your insights
9:53 💡Hint #8: Import more specific data from Google Scholar
10:54 Filtering search results by specific search query
11:42 💡Hint #9: Zooming Out / Zooming In - Finding a new topic to focus on
11:56 Finding relations between concepts
12:25 💡Hint #10: Using GPT-4 AI model to connect ideas in a new way
13:20 The mushroom analogy - picking ideas in the forest of science
14:04 💡Hint #11: Structural gaps - connecting interesting topics
15:58 Human-in-the-loop AI workflow: feeding AI questions back to itself
18:08 💡Hint #12: Discourse connector points - how to embed your ideas into an existing discourse
20:30 Generating a summary of your research notes using the AI
21:23 Generating an article outline
21:55 Conclusion: summary of the workflow
Try it on infranodus.com
Thank you very much for the demo! I'm intrigued by the idea of finding gaps between topics, and I wonder if it's possible to check whether these gaps get filled when more texts (published later) are added to the corpus. My thought behind the question is to analyse the emergence and resolution of research questions on a timeline. Could you please give me some hints?
Sure! Every time you add a text to the same graph using the file import feature, it is added with a filter that reflects the filename. You can then turn this filter on/off to see how the text "fits" into the discourse. See 10:54 in the video for how the filter works. You can also use the Analytics > Trends panel to trace how specific topics evolved over time. Hope this helps!
This is very inspiring. Brilliant Project.
Thank you!
When testing the app, I would love to add larger files to see if it fits my research. The 3000 KB limit is way too low. Is there a way around it?
The thing is that when you visualize a large file, you get a very large graph. To make it readable, we'd have to compress and simplify the content, so you'd end up with a very generic, simplified version of it. So I'd suggest using InfraNodus with files that fit the limit. If your file is bigger, this type of visualization simply won't be useful for you. You can either split it into smaller parts to visualize as graphs, or use another tool that provides more generic answers without the graph visualization.
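For anyone who wants to try the splitting approach mentioned above, here is a minimal Python sketch of one way to cut a large text file into parts that fit an upload size limit. The 3000 KB figure and the paragraph-based splitting strategy are assumptions (adjust `MAX_BYTES` to whatever limit applies to your account); this is not part of InfraNodus itself.

```python
# Split a long text into chunks that each stay under a byte limit,
# breaking only at paragraph boundaries (blank lines) so that the
# graph of each part remains semantically coherent.

MAX_BYTES = 3000 * 1024  # assumed 3000 KB upload limit

def split_text(text: str, max_bytes: int = MAX_BYTES) -> list[str]:
    parts: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        candidate = current + ("\n\n" if current else "") + para
        if len(candidate.encode("utf-8")) <= max_bytes:
            current = candidate
        else:
            if current:
                parts.append(current)
            # assumes no single paragraph exceeds max_bytes
            current = para
    if current:
        parts.append(current)
    return parts
```

Each returned chunk can then be imported as a separate file, so the filename-based filters described earlier let you toggle the parts on and off within one graph.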
Thank you very much for your answer, @noduslabs