I know I’m late to the game, but a few days ago I bought your neural networks hardcover book and OMG, this book is amazing! I started reading the digital version and now I’m understanding things so much better. I can’t wait to get the actual hardcover in my hands! Awesome job!
Awesome to hear!
Got mine too. It’s one of the best tech books I’ve ever read. So clear and perfectly paced.
I'm calling it. Sentdex will be the first guy to accidentally create AGI and have it escape into the wild.
xD I'm calling it. @axemanreaper was the first one to call it
what is AGI
I haven't had proper sleep since watching a server in our data centre with 512 cores, 2TB RAM and 77TB of NVMe SSD, and this is gonna give me another sleepless night 😂
🫡🫡
@@RevanthNallam 😂😂
Could it write Crysis?
@@raymond_luxury_yacht I don't know! That's someone else's.
Can you test image generation with that machine as well?
Would be interesting to see how long (or short) it would take to generate an SDXL picture.
SDXL-Turbo on a single 4090 can already do an image ~every second.
I did my undergrad thesis learning machine learning from you. Thank you for sharing knowledge!
Nice! We got a few large language models to communicate with each other with minimal prompting at the U.S. Air Force hackathon last year, called the Bravo Hackathon.
Next up, working on a Neuromorphic Hypergraph database on photonic compute engines.
Question: I noticed that 3090's (unlike 4090's) support NVLink; does it seem reasonable to build a server with a couple 3090's that would allow shared VRAM for large model training? Have you looked into this and steered away for some reason I'm missing?
Was just checking your channel for async Python but got this gift of a new video.
There is a company in Bolton, United Kingdom, that markets desktop systems supporting 6x Nvidia RTX 6000 Ada or 6x RTX 4090. The company's name is Scan.
I'm unconvinced that water is a better means to pull heat away than pure metal.
I think the actual area the heat is being pulled from is by far the most important metric. Water cooling seems to be used dominantly as a marketing term with no actual meaningful specs... it's just assumed that "oh, it's gotta be better".
Water is proven to be better when done right. AIO/off-the-shelf water cooling solutions are usually no better than air, however.
What about testing out Tinycorp's Tinybox in future?
Would love to! I have a lot of questions about it
What do you think about the tiny box?
I'd love to try one out. Skeptical of how well it'll do given lack of overall support for deep learning architectures especially every time a new concept comes out, but who knows. Would definitely want to try one out!
Can you test the new Threadripper Pro CPUs (96 cores + 8-channel memory) for inference on Falcon 180B or other 70B models?
Tokens per second?
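A rough way to estimate that: CPU inference on large models is usually memory-bandwidth bound, so tokens/second is roughly memory bandwidth divided by the bytes streamed per token. A back-of-the-envelope sketch (the bandwidth and quantization figures below are illustrative assumptions, not benchmarks of this CPU):

```python
# Bandwidth-bound inference streams every weight once per generated
# token, so tokens/s is capped at bandwidth / model size in memory.

def estimate_tokens_per_second(params_billion, bytes_per_param, bandwidth_gb_s):
    model_gb = params_billion * bytes_per_param  # weights streamed per token
    return bandwidth_gb_s / model_gb

# Assumed figures: 8-channel DDR5-4800 peak bandwidth ~307 GB/s,
# a 70B model quantized to ~0.5 bytes/param (4-bit).
print(round(estimate_tokens_per_second(70, 0.5, 307), 1))  # ~8.8 tok/s upper bound
```

Real throughput lands below this ceiling once compute and cache effects are counted, but it gives a sense of scale.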
So how do they justify the price point of 35,000 USD?
A 4090 is currently 2,000 USD at end-consumer prices; 6 of them at that price is 12,000 USD, but that's with a full end-user cooling solution, and I'm sure OEM 4090s would cost significantly less.
Custom cooling is a few hundred bucks extra per card (if you assume end prices), so we are at 13,000 USD.
Now we have a nice case -> 500 USD
4 good power supplies -> 600 USD
Custom building and assembly -> 500 USD
Total price: 14,600 USD
So they have roughly 20,000 USD of profit on each of the servers sold, and that assumes they purchase at end-consumer pricing and refit the cards.
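The tally above is easy to check in a few lines (all component prices are this commenter's assumptions, not actual quotes):

```python
# Sum the assumed component costs and compare against the
# quoted 35,000 USD system price.
parts = {
    "6x RTX 4090": 6 * 2000,
    "custom water blocks": 1000,  # "a few hundred bucks per card"
    "case": 500,
    "4 power supplies": 600,
    "assembly": 500,
}
total = sum(parts.values())
margin = 35000 - total
print(total, margin)  # 14600 20400
```

So the implied margin is about 20,000 USD under these assumptions.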
Yes, it's a huge scam lol. 11k for a server with 2x 3090 Ti.
How do I get into this subject?
What if you prompted with a sample of a debate, then directed the conversation with a question?
Would the debate have to be relevant to the question, or does the 'debatiness' transfer?
'You are what you eat', it is said. I'm inclined to believe this is also true of language models (aka, incorrectly, AI). If LLMs are internally consistent, i.e. the same input produces the same output (as I believe you confirmed), then what you have is a complex look-up table. For it to 'think', and I believe it could be done, it would have to be continually adapting its own internal 'wiring'. Ideally improving its 'opinions' based on fact and feedback, rather than rumour like a typical social media user😁
Do you think humans are any different? We have trillions of connections in our brain plus a chemical soup floating around our neural receptors, but given any specific state, with the same inputs I believe the brain would produce the same outputs. And stop moving the goalposts with the term AI. Five years ago if I showed you ChatGPT you'd absolutely not be able to tell if it's a person on the other side of the chat; just because we've learned the capabilities doesn't mean it's any less 'AI' than before. You're talking about consciousness, which is on a completely different axis than intelligence, and which I personally don't believe anyone understands. Our brains are just so complex that our behavior seems unpredictable and "intelligent", but what is a thought if not just a specific firing of neurons in our brain?
@@charlielarson1350 No, I don't believe biological brains are so different from a computed model. My point is that until the models can modify themselves and act dynamically, they remain complex look-up tables. What makes biology different is its dynamic nature: thinking, intelligence and, perhaps as an emergent property, consciousness come from continual rewiring; even long-term static information can be changed or updated. Once the model can truly learn, then it can start to filter the training data properly (hopefully gleaning facts from it, instead of just 'pleasing' the trainers).
Very interesting
I am planning my own workstation build right now. I thought 4090s could not be linked together like this. I can't afford a single RTX 6000, but I might be able to do dual 4090s (or 3090 Tis).
I am now exploring edge inference cards like the Neuchips N3000 quad or Qualcomm AI 100 Ultra, which do 128GB LPDDR5. I really want to try 35B models at fp16, inference only. For my undergrad thesis I am making a benchmark for code evaluation, and models like codellama-34B or deepseek-coder-35B would be great data points.
Without funding it will be difficult though.
Sadly Intel isn't selling the GPU Max 1100 PCIe or Gaudi2 for workstations.
I'm curious and hope you'd be willing to answer. What's the difference between say Neuchips N3000 quad or Qualcomm AI 100 Ultra vs say, something like M1076 Mythic AMP? I don't know much about these, I have never even seen one with my own eyes, but I love AI. I was always curious why the M1076 is not very popular compared to the other offerings.
@@erwynnipegerwynnipeg8455 Looks like there is only an M.2 card available. And 80M vision models are much smaller than 100B language models.
Makes my new two-thousand-dollar laptop feel like a child's toy. Oh wait...
Not sure if you know this, but you can use Lua scripts on an RC transmitter to communicate with your drone. Not sure how you would get a camera feed that way though; maybe a DJI 2.4GHz receiver with an HDMI capture card.
€30000 for parts that cost €15000 is insane.
Whoever pays €15000 for a cooling solution is mad.
Will you order geohot's tinygrad computer, sentdex, in the near future?
I'm not entirely convinced it's a better option than other options available. I would love to check one out when they're ready and maybe my opinion would be changed, but as it stands right now I would not want to deal with the headache of implementing new/current models on an entirely new framework, mainly.
I might have missed it due to my weak English listening skills: is the price exactly 30,000 dollars?
Yeah, a cluster of Pi's is more in my budget range.
Anyone buy one of these servers? If so, how did it go?
I’ve been buying Dell OEM 4090s from Alienware prebuilds. They’re slightly larger than 2 slot cards and I was able to fit 4 into a 4U server.
It's mindboggling how fast something like the Comino can do so much. What will 2024 look like?
Neat
Have you tried SLAM with the Tello? (Also, how did you get it to fly so stable? 😩 Mine always drifts off when I try to start it.)
Nope, haven't tried any SLAM.
I was actually very impressed with the Tello's stability. I even have a window unit ~10 or 15 ft from where I tend to fly/test the drone, and it's constantly blowing air on it; it does a great job holding steady there and outside with some wind. I wonder if yours has some gyro issue or needs to be calibrated?
@@sentdex Yeah, probably needs calibration 🤔 I haven't done that at the very least 😅 - but the idea I have for my drone doesn't work too well with the Tello anyway, since access to the battery charging is all hidden and I wanna charge it on the fly so I can always have one in standby to do missions.
Just a thought, but there are SBCs with up to 6 or 8 TFLOPS of TPU compute that take M.2s. I realize there are cache issues and IO limits, but I think you could still do very well, or you could wait another year for much faster, more human-like architectures (memory and processing execution in the same place) that are coming out soon. You can even get GPUs working on these SBCs, and some video cards now take M.2s; not sure of the cost for those though. Also, I think it is possible to directly boot a very small microkernel on video cards; not sure of the extent with M.2s, but booting an M.2 micro coprocessor, etc., might be possible (operative word is "might").
Hi sentdex! Long time subscriber here. Love your content, but I was wondering if you plan to do any sort of tutorial series again? You've mentioned revisiting the GTA V series; that would be cool. Any other tutorial on building and deploying an AI app would be amazing as well :) Kind regards!
Dude why do you look like Edward Snowden?
can you play games with 6 gpus? would be cool to see
Haven't come across a need for more than 1 4090 while gaming, so tough to say :P
I heard it can play Minesweeper
I don't think you can really take these responses from various LLMs to the question of whether AIs should have rights as "what most people think". First, they are going to be fine-tuned in a variety of ways; second, you can't be sure how far they understand the difference between AIs and humans. They may be biased from the get-go to say "x should have rights", as a variety of sensibility efforts is gonna steer them towards "[arbitrary population group or demographic] should have rights", and if the LLM sees the term "AI" as sufficiently closely related to demographic groups somehow, *of course* it's gonna declare AIs ought to have rights.
Anyone here got a 4090 willing to share some power consumption numbers under inference? Mine draws less than 120 W doing inference. According to the nvidia-smi screencaps in the video, the 4090s are sipping electricity.
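For anyone wanting to collect those numbers, `nvidia-smi --query-gpu=power.draw --format=csv,noheader` reports per-GPU draw directly. A small sketch that parses such output (the sample readings below are made up for illustration, not measurements):

```python
# Parse output from:
#   nvidia-smi --query-gpu=power.draw --format=csv,noheader
# Each line looks like "118.42 W"; report per-GPU and total draw.
sample = """118.42 W
102.77 W
95.10 W"""

draws = [float(line.split()[0]) for line in sample.splitlines()]
print(f"per-GPU: {draws}, total: {sum(draws):.2f} W")
```

In practice you would feed the real command's stdout into the same parsing loop, e.g. via `subprocess.run`.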
Six GPUs having a party in a box 😋
please finish Neural Networks from Scratch in Python
you sound like Mordecai from Regular show
Uh oh its Dylan Patel
🙃
Mixtral 8x7B > Qwen
100%. Idk why people bother with other models at this point.
Try 13B-70B models; they reason so much better, and I can fit them on my 3090 without a problem.
Regarding the philosophical question of whether or not AI should have rights similar to humans: you mention briefly at 14:06 that internet data is a pretty decent representation of what the average person "thinks". I would tend to disagree with this, and to remain as pedantic and romantic as possible, I will address my point through a line of poetry...
(That I wrote without the help of Ai, mind you.)
"An awful lot is lost between the thoughts I've had alone inside my head, and all the things I've never said."
The internet is not really a complete representation of all human thought; rather, it's a set of the loudest thoughts that have been broadcast out there, by (for the most part) a very small subset of humanity that knows HOW to use the internet to get their thoughts out there. But that's just my opinion on that point, of course.
Sentdex is saving time by having all his GF's argue with each other! :)
Hey, would love to show you our system; we have agent-to-agent communication. We are making an LLMOps platform focusing on evaluation and validation.
this is so cool
You can't just have them talk to each other; it would be like listening to complicated parrots. You have to have small models which do very specific things, then use MoE, CoT and ToT (mixture of experts, chain of thought, tree of thought) to create a cognitive structure. Then you can get something more resembling AI taking on a perspective and arguing points.
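The turn-taking core of any such setup can be sketched in a few lines; the stub functions below stand in for real model calls (local or API-backed) and are purely illustrative:

```python
# Two "models" take alternating turns on a shared transcript.
# The stubs echo the last message; a real backend would generate text.

def skeptic(transcript):
    return "I doubt that: " + transcript[-1]

def optimist(transcript):
    return "On the contrary: " + transcript[-1]

def debate(opening, turns=4):
    transcript = [opening]
    speakers = [skeptic, optimist]
    for i in range(turns):
        # Each speaker sees the full history and appends one message.
        transcript.append(speakers[i % 2](transcript))
    return transcript

for line in debate("Should AIs have rights?"):
    print(line)
```

Whether the result is a parrot chorus or something more depends entirely on what replaces the stubs, which is this commenter's point.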
"INFINITE Inference" is a bit clickbait-ish, huh?
Nope!
Qwen 72B WSB fine-tune when? I tried to get a gauge of what's needed to fine-tune it, and their GitHub page says that you can't merge the QLoRA back into the model, so I guess you need to do 16-bit LoRA at least?
As for LLMs talking with each other, it seems like you used chat/instruct versions that were RLHFed to oblivion. You will absolutely not get what the internet and random people think about it; you will basically get what ChatGPT was RLHFed to think about it. All of those models are very likely fine-tuned on instruct data from GPT-3.5 and GPT-4.
When you mentioned you were playing with Qwen 72B, did you mean the raw base or the chat version? I chatted with Qwen 72B chat on the ModelScope demo a bit yesterday; it's really underwhelming.
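For a rough sense of why 16-bit LoRA on a 72B model is demanding, here is a back-of-the-envelope estimate of the memory needed just to hold the base weights (this ignores activations, adapter gradients and optimizer state, and the KV cache, so real requirements are higher):

```python
# Memory to hold base model weights: parameter count times bytes per
# parameter. LoRA keeps the base frozen, so this is the floor.
def base_weights_gb(params_billion, bytes_per_param):
    return params_billion * bytes_per_param

print(base_weights_gb(72, 2))    # 144 -> GB for 16-bit weights
print(base_weights_gb(72, 0.5))  # 36.0 -> GB for a 4-bit (QLoRA) base
```

Which is why a 4-bit base plus adapters fits on workstation hardware while full 16-bit fine-tuning does not.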
So i’m a simpleton :(
You looked like a nice guy :(
Christmas gifts 🎁🎁😂😂😂
AI GTA 6 incoming???????
That would be awesome!
LLMs are pro AI rights because they reflect their creator's beliefs, as trained. Moreover some of those creators are transhumanists wanting to upload themselves to AI and retain the rights they have now.
Not waiting to watch this video!
AI has no reliable ability to judge "what most people think". That conclusion is absolutely insane.
Why not? If it's not being prompted too much, and it was trained on something a lot, I don't see why it wouldn't be able to answer with what "most people" might think about something. As an example, when you ask Gemini Pro what cheese most people enjoy, it will say "cheddar" or "parmesan". That's true; "most people" would like those two, as they are quite popular cheeses.
It's not judging at all. It's repeating, based on statistics. It's quite simple, really.
jesus christ dude lol
Missing your tutorials... That's what made your channel so popular. Real content of substance, not just flashy bits.
I've always just shared content regarding what I'm doing/learning at the time. As time goes on, doing more py basics tutorials just plain isn't what I'm doing anymore. Long project videos also consistently don't work well and are a massive pain to make, just don't really enjoy it either. I actually enjoyed making and sharing this one. Will continue shaping content going forward, this isn't perfect in my eyes either, but doing basics tutorials is also done. Trying mostly now to do bigger projects but less tediously covering them. Definitely appreciate the feedback though, and always open to ideas!
First comment!
Keep it up man, you are doing great content!👌
TinyGrad > Comino
For what reason?
>< I believe we are meant to be like Jesus in our hearts and not in our flesh. But be careful of AI, for it is just our flesh and that is it. It knows only things of the flesh (our fleshly desires) and cannot comprehend things of the spirit such as peace of heart (which comes from obeying God's Word). Whereas we are a spirit and we have a soul but live in the body (in the flesh). When you go to bed it is your flesh that sleeps, but your spirit never sleeps (otherwise you have died physically); that is why you have dreams. More so, true love that endures and lasts is a thing of the heart (when I say 'heart', I mean 'spirit'). But fake love, pretentious love, love with expectations, love for classic reasons, love for material reasons and love for selfish reasons: that is a thing of our flesh. In the beginning God said let us make man in our own image, according to our likeness. Take note, God is Spirit and God is Love. As Love He is the source of it. We also know that God is Omnipotent, for He creates out of nothing and He has no beginning and has no end. That means our love is but a shadow of God's Love. True love looks around to see who is in need of your help, your smile, your possessions, your money, your strength, your quality time. Love forgives and forgets. Love wants for others what it wants for itself. Take note, true love works in conjunction with other spiritual forces such as patience and faith (in the finished work of our Lord and Savior, Jesus Christ, rather than in what man has done, such as science, technology and organizations, which won't last forever). To avoid sin and error, which lead to the death of our body and also our spirit in hell fire, we should let the Word of God be the standard of our lives, not AI.
If not, God will let us face AI on our own, and it will cast the truth down to the ground; it will be the cause of so much destruction like never seen before; it will deceive many and take many captive in order to enslave them into worshipping it and abiding in lawlessness. We can only destroy ourselves, but with God all things are possible. God knows us better because He is our Creator and He knows our beginning and our end. Our proof texts are taken from the book of John 5:31-44, 2 Thessalonians 2:1-12, Daniel 2, Daniel 7-9, Revelation 13-15, Matthew 24-25 and Luke 21. Let us watch and pray... God bless you as you share this message with others.
Um, just feed all your speech and words from your YouTube videos into the bot/LLM to give it personality, lmao.