Google's STUNNING Notebook LM | Personalized AI to Build Your "Second Brain" | Notebook LM Tutorial
- Published 14 Jun 2024
- Learn AI With Me:
www.skool.com/natural20/about
Join my community and classroom to learn AI and get ready for the new world.
GOOGLE NOTEBOOK LM:
notebooklm.google/
00:00 getting things done
01:48 building a second brain
03:00 [rumor] OpenAI Context Connector
03:47 Notebook LM
04:48 Google's Graveyard
05:32 how to create a new notebook
05:50 adding sources
06:55 tutorial and testing
20:50 Summary
#ai #openai #llm
BUSINESS, MEDIA & SPONSORSHIPS:
Wes Roth Business @ Gmail . com
wesrothbusiness@gmail.com
Just shoot me an email to the above address.
This is a great way to learn about your favorite products being discontinued 😅
Hangouts was the ultimate betrayal.
Just a matter of time before it becomes used against you in court. Act accordingly.
You’ll get caught eventually.
Meanwhile everything you say to GPT is tied to your phone number and stored in a LLM forever. Don't act like this is any different.
Live disconnected from technology in the woods alone if you want
As it should be
As most of the other information one generates throughout their life. Good generic advice, though. Acting responsibly-not being a dick-is the strategy that elevated our ancestors from animals to humans!
Love your gentle blend of humour Wes. Very useful video too 👍
I and others have been doing this forever. There's a video by TechLead called Using ChatGPT with YOUR OWN Data. It is just doing a RAG lookup on the embedded data to get "sources", which are just the segments of the file that are most similar to the question, and then using those segments in the context to let the AI work on the relevant data. Even if Google kills this, it's just a UI with a Google logo on it; someone else will make this webapp, or you can do it yourself with your own API keys.
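That retrieve-then-prompt loop is simple enough to sketch. Below is a toy Python version: a bag-of-words count vector stands in for a real embedding model, and the segment texts, function names, and question are all made up for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy stand-in for an embedding model: a bag-of-words count vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(segments, question, k=2):
    # The "RAG lookup": rank stored segments by similarity to the question.
    q = embed(question)
    return sorted(segments, key=lambda s: cosine(embed(s), q), reverse=True)[:k]

def build_prompt(segments, question):
    # The top segments become the "sources" dropped into the LLM's context.
    context = "\n".join(f"- {s}" for s in segments)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {question}"

segments = [
    "Zima was a clear malt beverage sold in the 1990s.",
    "NotebookLM lets you upload PDFs and Google Docs as sources.",
    "Obsidian stores notes as plain markdown files.",
]
question = "What kind of beverage was Zima?"
print(build_prompt(retrieve(segments, question, k=1), question))
```

A real pipeline would swap `embed` for an actual embedding model and send the prompt to an LLM, but the shape of the lookup is the same.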
I need this
Yo Wes, heads up - this is the future of knowledge management systems for frontline staff in customer service. Just upload your knowledge base and the frontline can reference it whenever they need to. Currently, KMS products are expensive and don't really work that well... This tool would be perfect for the entire customer service industry. Watch that space explode with this.
Love your humor. You are not trying to be a comedian - just a peculiar choice of words. Well done.
Yeah put all your personal data there, only for them to lock it behind a subscription in 6 months.
I'm still hoping Obsidian implements AI like this into their app. Love the graph view.
The economic driver in this, isn't subscriptions ... these kinds of systems will be very good, and absolutely free forever. Profit is in analyzing and exploiting the knowledge embodied in the very personal kind of data people put into such a system, to get the benefits from it.
Just think about what can be learned by watching millions of people work and think in realtime, with all the receipts and connections organized and exposed forever.
Scary stuff.
@@ZappyOh Definitely.
Obsidian has a good plugin API. It just needs someone to wire LangChain or a RAG-enhanced LLM interface into it, and you're close to what this NotebookLM does.
Who said anything about personal data?
@@snow8725 "Personal data", as in your actions/paths taken to arrive at whatever endpoints you aim at ... illustrating how you work as a thinking machine.
It doesn't get more personal than that.
Shoutout to anyone watching this drinking a crisp, refreshing Zima
No Zima, but drinking a root beer I made in my new SodaStream lol
Zima is winter in Russian. )
What about the cybersecurity of the information we put into models like this? My workplace is not in tech, but I'm trying to incorporate AI where I can - the COO sent an eblast to the whole company saying we can't use models like this for pulling data if it involves feeding client data in. I imagine lots of workplaces are thinking the same thing. This will definitely slow AI replacing jobs. Thoughts?
We need same tool but with local LLM.
Won't happen.
The profit is in analyzing your patterns.
They need you logged in all the time.
@@ZappyOh That’s the thing..! Wolf brothers! Gimme the tools and stay the fuck out of my boring but private business!
Actually, several open source versions are already being developed.
@@FloodGold How is open source going to help?
You will still have to log in to someone else's system, as you haven't got the funds for compute.
@@ZappyOh yeah bro it's best to stay the fuck away from these shits
NotebookLM is currently only available in the U.S. for users 18 and up
You know one day something actually is going to be stunning...
Notebook LM is a one-way street. Wes says it himself: Google will ultimately discontinue it. It looks like a weekend project of one of their engineers. And it's also not really a second brain when you're not able to really organize your stuff but only drop a bag of documents into a RAG search. I use Logseq as a second brain. Well organized, you can find most with the built-in full-text search and personal discipline. If anything, something like this needs to be an open source tool (for privacy) where you configure multiple text, knowledge and document sources (maybe even including LLM tools that can query a database). Basically your own private search engine with RAG and tools to aggregate all your little second brains (Logseq/Obsidian, your browser bookmarks, some file system folder or WebDAV directory, your CRM database etc.). I'm pretty sure that there are multiple open source tools that go into this direction. It would be better to look at those.
I agree. With personal discipline, a very simple hierarchical notebook may be used as a knowledge base. I know that OneNote is good for that, but I use the very old WebOrganizer and love it. I don't quite understand all that excitement about an LLM finding a direct quote in a PDF. Hey, you can do it with simple text search 100 times faster and more predictably. In many cases there are keywords which can be used. And if you can't remember at all what you want to find, this is a sure sign of poor organization of materials. You won't be able to work with it effectively anyway.
You can build one yourself exactly the way you want from any source, automate it, and more with LangChain - and if you can't code, with Langflow or Flowise.
@@Dron008 OneNote, good that you mention that, such a POS too... TBH if they closed Microsoft, Google and Facebook tomorrow, the world would be a much better place.
Keep them coming Wes. 🙏
It's only available in the US for now. I can't wait.
Uploaded one minute ago... that's fresh out of the printing press 🔥
Keep being amazing Wes!
Needs a batch upload for pdfs and other docs from your computer
Most people need to work more on the extremely capable first brain they already have ;-)
Do you know about the Pirate Party movement? (Your profile pic looks like it could be a logo for them.)
Mine is defective, it keeps wanting to be shocked and stunned.
Not everyone has the same capabilities, you're risking to discriminate minorities like neurodivergent people with your statement
I assume this is supposed to sound clever, it does not.
I'm not even sure most people even have brains...
Love your content .. it’s amazing.
Amazing! Thank you.
looks promising, Wes you are on the pulse
I'm shocked how stunning this revolutionary groundbreaking breakthrough in usage of buzzwords is.
Thank you!
It is a fantastic application and I use it frequently for my math notes
I have been using this for a different purpose. Slowly, I have been importing the policies of my organization. When altercations, dilemmas or strange situations arise, it's been super helpful to reference what the related policies of the organization have to say. Game changer for me because I can also use the citations to locate where this may have come from. In short, it became a mini HR assistant. Policies are usually transparent so no harm there, and sometimes it would combine and surface other policies that I would never have thought were related. Definitely a good 👍 IMO.
Thanks, Wes. I've added it to my reference folder 😏
Your quick comment about wishing it would say that there was nothing about Zima in the context is something LLMs should do more: if there's no answer above a threshold level of confidence, say so instead of hallucinating the best thing they can come up with.
Like if you asked a human the question they'd say wtf are you talking about?
A GPT that says it doesn't know or needs more specific information or clarification if something is ambiguous is much better at conversing in a way that is co-operative and guides you toward a solution.
You can force them to do all those things exactly through your prompt: telling it to rate its own response confidence based on whatever criteria, and making it ask questions in an iterative way so as to increase its response confidence before hallucinating. There are many articles out there that show more specifically how to do this, and they're easy to find.
PS pro tip: you can sort of misuse the ChatGPT feature where you provide background information about yourself and instead put custom commands there. People usually put their job or level of expertise in certain things or whatever background info, etc., but instead put whatever functionality you want, or however you want it to act. Put the stuff there so you don't have to repeat yourself in prompts. Hope this was helpful and makes sense! I should have AI rewrite it heh
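For what it's worth, the two tips above (self-rated confidence, and standing instructions you don't repeat per prompt) can be packaged together. This is just a sketch under my own assumptions - the instruction wording and the "Confidence: n/10" convention are made up, and no API call is actually made here:

```python
import re

# Standing instructions, in the spirit of ChatGPT's custom-instructions
# slot: behavioural rules instead of biographical background.
CUSTOM_INSTRUCTIONS = (
    "Before answering, state your confidence as 'Confidence: n/10'. "
    "If it is below 7, do not answer; ask one clarifying question instead. "
    "If the provided context contains nothing relevant, say so rather than guess."
)

def build_messages(question):
    # Rules in the system slot apply to every turn without being
    # repeated in each user prompt.
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": question},
    ]

def parse_confidence(reply):
    # Recover the self-reported score so the caller can decide
    # whether to trust the answer or re-ask with more context.
    m = re.search(r"Confidence:\s*(\d+)\s*/\s*10", reply)
    return int(m.group(1)) if m else None

print(parse_confidence("Confidence: 3/10. Which paper do you mean?"))  # 3
```

The messages list is in the shape most chat-completion APIs accept, so the same scaffold works whichever model you point it at.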
So....this doesn't run locally right? So...we'd be giving all our inner thoughts and personal notes to Google to chew on?
Thanks for the video.
The little pun when you said this is getting "citing!"🎉 while getting excited about an example of an AI citing experiment made me wonder... that was so slick... was it on purpose? And if it was, I'm so happy... and if it wasn't, I'm so happy! You didn't pause or give any space to make it land like a joke, which made it great
Also I wanted to add I enjoyed this video , as I watch all of your videos every day.. and I got more out of it than this “citing” comment.. that just stuck out to me and felt like it was either for people like me or an accident which would also be for people like me who find meaning and joy in happy accidents..
I am bamboozled!
Your "Second Brain" absolutely needs to be in the app agnostic future proof format like markdown files. Sure, you can use AI with it (to search/read/write to it and more), but it must still be usable if the app you're using it with gets nuked. I'm currently using Obsidian to interface with my second brain. It's all markdown files, I can just switch to another app or make my own if Obsidian dies or if something better appears.
Wave was AWESOME.
Does it output code from a PDF, or restructure it for another purpose? Should give it a try. Sounds good
Good AI fodder. Love this presentation and your sense of humor. Google has some of the best stuff and also by far the largest graveyard of broken dreams 😮
Hey Wes, is there any chance you could release your vids in 1080p or higher? On occasion, some of your referenced graphics, charts, etc. are illegible due to the low resolution. Thanks!
Great document text search, summary, poem by google
😁
This is the productionized version of Project Tailwind from last year's I/O. It's basically a RAG system with an interface. The same kinds of problems that exist with any RAG system still exist with this one.
What do you think the most serious limitations and problems are?
@@sgttomas
Data Quality and Relevance: The RAG system is heavily dependent on the quality of the data it retrieves. If the retriever component fetches irrelevant or low-quality information, the generated output will be adversely affected.
Latency: Retrieving documents can introduce significant latency. This is particularly challenging when the RAG model is used in applications that require real-time responses.
Indexing and Updating: The corpus used for retrieval needs to be continually updated to ensure the information is current. The indexing process can be resource-intensive and complex.
Interpretability and Trust: Users might find it hard to understand how the RAG system came to a certain conclusion, making it difficult to trust the outputs without clear insight into the retrieval process.
Integration Complexity: Integrating the retriever with the generator can be complex, especially when it comes to managing the passage of context between the two components.
Scalability: The RAG approach can be resource-intensive, as it may require large-scale infrastructure to support data retrieval and processing, which might not scale well for widespread use or in systems with limited resources.
Bias and Fairness: The content retrieved by the RAG system might be biased, reflecting the biases present in the source data. This can propagate or even amplify biases in the generated responses.
Error Propagation: Errors in the retrieval process can propagate through to the generation process, leading to incorrect or nonsensical outputs.
Dependency on Corpus: The performance of a RAG system is directly tied to the breadth and quality of its corpus. A limited or overly narrow corpus can significantly restrict the system's effectiveness.
Cost: The cost of running retrieval over large datasets and the subsequent generation can be significant, particularly when querying large databases or when frequent updates are required.
@@sgttomas For piecemeal tasks like writing some personal notes, or a not-so-important email, etc., this is probably fine. But for more critical things like an executive brief or analysis that is supposed to influence high-stakes decisions, I think we still have quite a lot of things to solve:
1. Accuracy - regardless of how many times you run it (or even use the same/another LLM to check, or use it in a GAN-like approach), there is always a possibility of error. This is true for humans as well. But generally speaking (similar to autonomous vehicles), the probability of a human making a catastrophic mistake is probably lower. This is likely due to a multitude of reasons, including having some innate sense from experience, e.g. the source data mistakenly says the market size is 10M, and the LLM is more than happy to write 10M. But I know from experience (as would a reasonably smart intern) that it must be a typo and must be higher than that, or we shouldn't even be looking at this market in the first place. There's also survivorship bias - people who are not careful enough to avoid catastrophic mistakes rarely get high enough on the corporate ladder to be the final editor of a critical piece of work. (It doesn't mean they are better; it might just be a combination of luck and diligence.)
2. Steerability/Consistency - in my personal experience, drifting is still a significant problem. Say for example this is a 3-paragraph document. The second paragraph needs to talk about market potential (I'll get to why it must be the case in a moment). The first pass is okay. New source documents are updated and I refresh the generation. Suddenly I find the paragraph starts talking about competition (which is supposed to be in paragraph 3), or starts on market potential and drifts into competition before the topic is adequately addressed. The rigidity in business document structure (e.g. paragraph 2 must talk about market potential) is often due to the manager's or decision maker's personal preference. And a lot of the time, to get things done, the packaging turns out to be more important than the content (e.g. Bezos only reads docs). Until this changes, unfortunately, it often remains the most important thing.
3. Cost - when you have thousands (or hundreds of thousands) of live documents that you need to draw upon, which are constantly being refreshed, embedding generation, similarity search...etc all costs ramp up very very quickly. E.g. your document is a summary of 10 files, these 10 files each take input from another 10 files each, so on and so forth, one change in the bottom layer may require you to rerun the whole thing (this also leads to accuracy, consistency issues). And this is even worse when your source files references each other (often the case in business settings).
To address some of the above issues:
- Accuracy: there is some interesting work on scoring arxiv.org/abs/2403.18802, but still early stages (and additional cost).
- Steerability: some kind of enforceable structure is needed. E.g. I can stipulate that paragraph 2 only talks about market potential.
- Cost: improving steerability and accuracy will help with cost. But probably a bigger paradigm shift is needed.
- Expectations: I think, at least for now, a more "human in the loop" approach is needed; we are still very far from typing in a simple prompt to write a serious piece of work.
These viewpoints are not just my own but also lots of other people that have been trying to get RAG to uplift their work.
The sad thing about NotebookLM is that the underlying technology and issues have been known for >12 months now (indeed, Tailwind is more than 12 months old), but there doesn't seem to be any attempt to address them.
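One cheap way to chase the steerability point above (paragraph 2 must talk about market potential and nothing else) is to generate section by section from a fixed outline instead of one free-form prompt. A sketch, with made-up outline entries and prompt wording:

```python
# A fixed outline: (section title, rule restricting what it may cover).
OUTLINE = [
    ("Overview", "Summarize the opportunity in two sentences."),
    ("Market potential", "Discuss only market size and growth."),
    ("Competition", "Discuss only competitors and differentiation."),
]

def section_prompts(outline, sources):
    # One scoped prompt per section: each LLM call sees a single
    # heading's rule, which limits drift between topics.
    return [
        f"Sources:\n{sources}\n\n"
        f"Write the '{title}' section. {rule} "
        f"Do not cover topics reserved for other sections."
        for title, rule in outline
    ]

for p in section_prompts(OUTLINE, "(retrieved source text here)"):
    print(p.splitlines()[-1])
```

This doesn't eliminate drift, but it turns "paragraph 2 wandered off" into a localized failure you can regenerate without touching the rest of the document.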
@@d279020 amazing response! Thank you so much. I didn’t realize that RAG was still so unreliable. I’m starting to apply this to my own domain. I really appreciate the cautionary words!!!
I really think you need to be able to voice chat with whatever ai notetaker you are using. If they add that feature I will definitely use it. Currently using chat GPT 4 as my notetaker/project assistant for that reason, but it doesn’t have access to my documents and it’s annoying having to copy and paste everything.
Ultimately I think Apple has the best potential for an AI assistant due to its ecosystem. Having access to not only all your notes and documents but also your calendar, maps, web browser, camera, email, contacts and other apps is just going to be huge. Like how Siri should have been. Actually, it would be game-changing. I do wish Google keeps up in this race though, because I'm comfortable with Google Docs/Sheets. Honestly, this feature should have been built directly into Google Drive.
Can it look through personally saved YouTube videos, using their transcripts?
This could be very handy for me as an author to remind me of facts I've made up in my books: like what a minor character looks like, or where an incident took place, etc etc.
Wes, have you pushed it on data synthesis vs reference at all? So, maybe giving it 2-3 separate papers on a subject and asking it draw conclusions from the combined knowledge?
There are so many companies doing exactly this. The only innovative thing here is their clean UI. A single programmer could bootstrap something similar in a weekend or less.
Can you ask it for a word count on a document?
I don't have access to NotebookLM, but I have been doing the same things with Gemini 1.5 pro 1 million context window all this time. It's pretty useful. 😊😊
What's the most useful thing you do with it?
Here are some ways the AI assistant in Notebook LM could help manage and organize various aspects of your life:
• Knowledge Management - By uploading documents, research papers, meeting notes, etc. into Notebook LM, you essentially create a personalized knowledge base that the AI can quickly search through and synthesize information from. This allows you to easily access and connect ideas across different sources.
• Task and Project Planning - You can feed the AI details about your current projects, to-do lists, schedules, etc. and then query it for next actions, priorities, deadlines and it can provide an organized view leveraging all that contextual data.
• Writing and Analysis - The AI can assist with writing tasks like reports, emails, presentations, and papers by summarizing relevant information from your sources. It can also analyze complex topics and break them down.
• Research Assistance - When exploring a new topic, you can upload source materials and have the AI extract key insights, identify knowledge gaps, suggest additional readings, etc.
• Idea Generation - Prompt the AI with a broad topic and it can creatively combine concepts from your personalized data sources to generate new ideas and thought-starters.
• Memory Aid - Rather than trying to memorize everything, you can offload information to your Notebook LM and rely on the AI to resurface relevant details when you need them.
The key advantage is having an AI system tightly integrated with your personal information sources to serve as an extension of your own memory and analysis capabilities. This could significantly boost productivity, learning, and creativity across all domains of your life.
Unfortunately the video doesn't provide specifics on the data limits for Notebook LM. However, it does mention a couple of relevant details:
1. It states that each individual source file (PDF, text file, Google Doc, etc.) can contain up to 200,000 words.
2. In the demo, the narrator had successfully uploaded and used multiple source files simultaneously, including a 155-page PDF, a 50-page legal document, and several other PDFs/files.
So while an exact total data limit is not stated, it seems Notebook LM can handle uploading and parsing a substantial amount of source material across numerous files and documents. The 200,000 word per file limit suggests you could likely upload millions of words worth of data in total by splitting it across multiple source files.
The video emphasized that a key strength of Notebook LM is its ability to quickly find relevant information and insights even when searching across large document repositories. So having broad capacity for ingesting personal data sources appears to be an intentional part of the tool's design.
Unless Google has published separate technical specs with firm data limits, the implication seems to be that Notebook LM can practically accommodate very large personal knowledge bases made up of numerous text sources. But you'd have to split larger documents into 200,000 word chunks to abide by the per-file limit.
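If the 200,000-word per-file figure quoted above is right, splitting a large document into compliant chunks is straightforward. A sketch (the limit comes from the video's claim, so verify it against Google's current documentation; the function name is mine):

```python
def split_by_word_limit(text, max_words=200_000):
    # Chunk a document so each piece stays under a per-file word limit,
    # e.g. NotebookLM's reported 200,000 words per source.
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# Tiny limit just to show the behaviour:
chunks = split_by_word_limit("one two three four five six seven", max_words=3)
print(chunks)  # ['one two three', 'four five six', 'seven']
```

Splitting on raw word count can cut mid-sentence; for real use you'd likely snap chunk boundaries to paragraph breaks, at a small cost in chunk-size uniformity.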
Really interesting
The types of queries you are trying involve literal words - character-by-character matching of information from the sources - which is pretty simple for a search engine; the results are then rewritten by the LLM. It would probably be more interesting to use queries that require analysis and contrast of multiple documents.
I use this for the research stage of my uni essays. It helps me make connections between sources, and it’s good for reflecting whilst working. Also the way it cites the text is very very useful.
I wasn't clear -- does this program only work with Google docs? Or can you give it access to other stored or perused stuff?
You can, indeed, give it all of your personal information. 😂
Love to find an open source version, although if you use Google docs it makes sense!
So is this just a basic RAG-assisted LLM for people who aren't up for making one with, like, LangChain and OpenAI's API?
The more software that comes out to "help manage personal information" The more I recoil from technology. How can we ever feel confident putting all of our information into these products with corporations constantly collecting and selling our information?
Right. But if you don't use it, will you be able to keep up with productivity? That's the question.
Zima is indeed a beverage-it didn’t quite hallucinate that!
Imagine this, but a local version running on Windows, MacOS, Android, iOS, etc.
I must be dumb, but I could have sworn that I've seen several other AI tools where you could upload a PDF, ask a question, and it searches the file and gives an answer. So what is special about this? Or couldn't I simply make an agent, upload files, and do the same thing?
Give Google the few parts of your life it hasn't yet exploited or sold.
It should be noted that this is currently only available in the US. Also, doesn't Notion AI already have this feature?
Have you seen Obsidian, with AI chat integrated via plugins?
I have tried it, but I have a PDF that is an export from my Facebook page that is approx 2500 pages. I have broken the PDF down into 5 files of 500 pages each, and still, when loading a PDF into NotebookLM, it simply locks up. I need something like this that can read and use 100MB PDFs, or something consisting of over 2,734,519 characters according to Notepad... Please help and give me some direction. What can I use to read and examine my massive Facebook page exports?
Wes, what is your best option for a similar software organizer you can use only on your computer, offline? Or you guys browsing the comments - what do you like best?
Obsidian and AI add-ons, including local LLM support.
Yes, it looks very useful. Many years ago Google had an app that ran in my PC and allowed me to search the contents of my drives! It was great and I wish I had it today! Of course it no longer exists...
Feels like I got hit with a Stone Cold Stunner!
What happened to the Rabbit R1? It got really quiet on that front
My channel is a victim of an unknown YouTube glitch. I checked online; no threads found for the same problem. I asked customer service; they gave me an illogical explanation. How can I escalate the issue?
What is the image that flashes on screen around 11:17? I can't pause on it
"Stunned and Shocked"
Oh my, I am simply stunned,
Utterly shocked, I say!
They kept on repeating this news,
Much to my vast dismay.
Who would have ever guessed it?
I'm flabbergasted to my core.
This revelation rocks my world,
Shakes me to the floor.
Gosh, golly, I'm thunderstruck,
Bowled over and blown away.
My mind is thoroughly boggled,
What more can I possibly say?
I'm staggered, stupefied, startled,
By this astonishing talk.
Color me astounded and amazed,
I'm reeling from the shock!
Truly, profoundly, I'm taken aback,
I simply can't believe my ears.
This bombshell knocks me off my feet,
Brings me nearly to tears.
So STUNNED am I! So very SHOCKED!
By these words you've said.
But do tell me once again, my friend,
Because it hasn't sunk in my head.
I fell asleep last night watching this, but still a great video, maybe I will fool with it, don't know.
My path forward is likely going to involve Q pi learning amongst about 400+ other projects in my q aside from taxes and seeing where I can see the eclipse without going blind 😅
You mentioned sheets - how about exporting as a comma-separated file? Then you should be able to pile everything in there
For that paper example, find in document could have done that, no?
"TOO MUCH STUFF coming into our lives". You just summed up my stress demon.
Good intro
22:50 what !!! I started using it like 2 months ago
Basically Chat with RTX online, with a bigger LLM and probably a 1M context window. What about the terms of service? Privacy?
This! All this information but little to no talk of privacy
I am currently building a Research Assistant that can capture anything you want, with a single mouse click, and transforms it into an entity in your LOCAL Second Brain, which can then be used like Notebook LM. But, everything is hosted on the client's local machine, and completely private.
Claude 3.0 Opus’ response: “According to Table 3 in the paper, the ReALM-250M, ReALM-1B and ReALM-3B models all outperformed GPT-4 on the conversational dataset:
GPT-4: 97.0 accuracy
ReALM-250M: 97.8 accuracy
ReALM-1B: 97.9 accuracy
ReALM-3B: 97.9 accuracy
So the ReALM models with 250M parameters and above were able to beat GPT-4's performance on the conversational dataset.”
ChatGPT 4’s response: “The ReALM models that outperformed GPT-4 on the conversational dataset include the ReALM versions with 250M, 1B, and 3B parameters. Specifically, for the conversational data, the accuracies were as follows:
GPT-4: 97.0%
ReALM-80M: 96.7% (slightly below GPT-4)
ReALM-250M: 97.8%
ReALM-1B: 97.9%
ReALM-3B: 97.9%
This indicates that the larger ReALM models, starting from the 250M parameter version and above, were able to surpass GPT-4 in performance on the conversational dataset.”
What we need as YouTubers is functionality to search our 10 TB of videos stored locally to repurpose old content. That would be sick!
I miss Google+ 😢, thanks for reminding me of it again :)
So sad for ya
It'll be cool if an A.I was built on the "Second Brain", Personal Knowledge Management idea.
FYI this was released months ago. Gemini 1.5 Pro is better IMHO. Not sure which LLM Notebook uses - Gemini Pro?
They will kill it off when you and I, because yeah I have found it handy for the same reason, have helped them develop a better agentic discovery algorithm. Or at least that was my guess when I saw it and started using it. This doesn't work well to develop labelling for better fine tuning because people are interested in a lot of things. But...if you can determine how people label. Now that is very very useful.
They could add this to Google Keep. Tried NotebookLM, but it's only available in the US 😢
If you want your Second Brain to be actually YOURS & PRIVATE, you can use Open Source services & Host your data locally.
You can achieve this using Obsidian & Locally running LLM
This would be useful to use with Google Drive.
I need a second brain... and depending on who you ask, I need help with the first.
If I'm to do that I want my own LLM.
Seems like I could use the old search in a PDF viewer to find words like Spark or Zima.
I agree - burned too many times to trust Google. I'd love something like this, but it's only realistic on a Local LLM. Partly for privacy, but also for longevity
I've been using Notebook LM for a few months now. I love it to death.
That looks pretty impressive. At least to me, because I read a lot of papers. But yeah, the danger of them discontinuing it and you losing all your notes is way too big.
I love it but yes, chances they kill it are high. However it does inspire me to make one, maybe open source or locally run. Feels doable.
Make one that tells you when it does not know or needs more clarification, please. Anything to avoid "hallucinations". Users need to know when the machine is BSing them in order to have confidence in the answers it gives.
This should be part of the operating system eventually… so clearly a Microsoft and Apple thing.
My reference folder (mine is called STUFF) is 21 years old, has 14512 files, 1151 folders and 4GB in size, the day is coming! ;) 🙏
The principle of the second brain is to do the synthesis yourself. This is called reflection. Not doing it is just like wanting to have muscle gains from a sofa
The ways these companies use to get your personal data and documents are endless. Never gonna use this, only local installations for me.
Yeah I'd like to find a way to do this locally. There is Chat RTX, but I haven't tried it. If anyone knows of other options please share.
@@goldmund22 I think Quivr may be an option
@@chrisgiles5653hey awesome thanks for the suggestion! it looks like it could be just as useful if not moreso than this, and it is open source. At first I thought it was not possible to use offline but there does appear to be an offline mode.
@@goldmund22 Maybe it's not as good, but AnythingLLM allows you to upload/embed files and also web links.
@@lpanzieri thank you for the suggestion I hadn't heard of that either. How do you feel about the privacy situation with that? I am looking for something to help with my work, and definitely need to be able to upload docs like PDFs. I've done minimal testing with Claude 3 Sonnet and it's very impressive what it can do..but if I could find a way to accomplish something similar with a Local and private set up, that would be incredible.
Looks good, but for me to use something like this with all my notes about game design, story/lore, and other ideas related to projects I am working on (which are more than 2000 pages, and I get a headache each time I need to find something specific in the last couple of years), it will need to be local and open source.
Nice, but I feel you can kinda already build this with a CustomGPT and knowledge base files…
The days of needing a Lawyer are numbered. 😅
or even the judge
@@dvoiceotruthYou, AI, and what army will get rid of them? We don't currently "need" them...they're our parasites, and most people are so utterly fouled up by government-run-school indoctrination that they think they "need" bar-licensees as their plantation overseers.
Maybe Congress should consider this. At least they can know what's in a Bill since they never read it all anyway, lol
That's a great idea -- push hard on it -- perhaps a few ears in Congress will hear it.
@@The-Spondy-SchoolYou're "solving the wrong problem." The totalitarian shit-heads in congress know that what they're doing is evil, incoherent, self-contradictory, and parasitic/thieving and they pretend to not know better, i.e. they pretend to be as dumb as their constituents. Legislators are dumb...compared to Peter Voss or Ralph Merkle...but they (the legislators) are also wittingly evil.
If we want our rights respected, we have to force that respect ...even though doing so must happen in a nation as full of servile idiots as the Weimar Republic in 1932.
If you're "good", that makes two of us. Most people are "bad" ...not by conscious choice, but by acting as agents of evil when called as jurors. (As in the Milgram "Obedience" study.) When asked to stand strong against the status quo in defense of morality...most jurors today collapse like wet toilet paper.
...in the one role that determines whether we follow the US Constitution or Nazi Germany.
Notebook LM is only available in the USA _(why wouldn't it be?)_