We definitely want that tutorial Matthew !
Yes, please! Surely it can be run as a plug-in. I got lots a free bays now that I'm all ssd.
2nd that - tutorial would be excellent!
Would LOVE to see a video of you setting it up!
We need a tutorial how to set it up I Just placed my order.
The amount of ram is pitiful 8GB is NOT ENOUGH should be at least 16gb of ram. I would not even mind if it cost a bit more if we got 16gb of ram.
Would love it to be 16GB but 8GB is enough to run most 8B and below models. Still extremely useful for its use case.
It can run llama 8b, and its a very capable model. But ye, 12-16 would have been so much better.
Wonder if you can just network 4.
You want double the ram and pay...a bit... more? How generous!
Can't call it a mini supercomputer if it can't even match a basic graphics card. I think the new Intel cards make a lot more sense for AI than this thing.
Just went to NVIDIA's specifications and, well... the Nano with 8GB, which is probably what they mean when talking about $249, is just 67 TOPS. The 275 TOPS figure is for the model with 64GB and twice as many cores (of both types) - a $2,398 dev kit on Amazon.
The usual deceptive advertising.
And you expected more from Ngreedia??
LLMs are usually limited by memory, so the RAM is really all that matters.
8GB is pretty low, and I would not say the models that can fit in that constraint are "very capable" except for some narrow use cases.
We're going to see a lot of LPDDR5X and LPDDR6 devices that can do better. So the real news here is the price.
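For anyone who wants the back-of-the-envelope version, here's a rough sketch of the memory math; the ~20% overhead for KV cache and runtime buffers is a ballpark assumption, not a spec:

```python
# Rough LLM footprint: parameter count x bytes per weight, plus ~20%
# overhead for KV cache and runtime buffers (ballpark assumption).
def model_ram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    return params_b * (bits_per_weight / 8) * overhead

for name, params, bits in [("8B @ Q4", 8, 4), ("8B @ Q8", 8, 8), ("13B @ Q4", 13, 4)]:
    print(f"{name}: ~{model_ram_gb(params, bits):.1f} GB")
# 8B @ Q4 -> ~4.8 GB: fits in 8 GB (barely, once the OS takes its share)
# 8B @ Q8 -> ~9.6 GB: does not fit
```

So 8GB really does pin you to roughly 8B-parameter models at 4-bit quantization.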
Yeah...hopefully their 64GB Jetson comes down in price.
Yes, sir. Would love to see a tutorial on how to set this computer up. Looking forward to it
Yes! Tutorial would be cool!
Yes, definitely make a video on setup!
Yes, make a tutorial on setting it up, and elaborate on the expansive spectrum of use cases to build upon this. Thanks very much!
Some confusion here... the tiny computer is the Jetson Orin Nano; the Jetson Orin Nano Super is capable of 67 TOPS. The larger, more expensive Jetson hardware (the AGX Orin, not the Nano) can get up to 275 TOPS. The tiny computer can NOT get 275 TOPS.
Jetson Orin Nano 8GB Module - AI Perf: 67 TOPS - £289.80
Jetson AGX Orin 64GB Module - AI Perf: 275 TOPS - £1,413.44
I’m getting one
Definitely would like to see a setup video
My rabbit r1 just shrieked
@@AINEET Mine just jumped for joy - it'll have a new play friend soon, once I implement teach mode skills that call web services running on my Jetson Nano Super dev kit to access some custom local agents ♥️
@@spleck615 Can you expound a bit? Sounds interesting.
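A minimal sketch of how such a setup could hang together - a tiny Flask service on the Jetson wrapping a local Ollama model. The route, port, and model choice here are assumptions for illustration, not the actual stack described above:

```python
# Hypothetical sketch: expose a local agent on the Jetson as a web service
# that a teach-mode skill (or anything else on the LAN) can call over HTTP.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

@app.route("/agent", methods=["POST"])
def agent():
    # Accepts {"prompt": "..."} and returns the local model's reply.
    prompt = request.json.get("prompt", "")
    r = requests.post(OLLAMA_URL, json={"model": "llama3.2:3b",  # small enough for 8 GB
                                        "prompt": prompt, "stream": False})
    return jsonify({"reply": r.json().get("response", "")})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # reachable from other devices on the network
```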
Mine is still in post-purchase hibernation, lol.
Yes, tutorial using it please! Thanks!!!
Super interested to see you set this thing up!!
I would love to see a tutorial and you testing it.
You're not plugging this into a car to have it drive autonomously - you're vastly underestimating the hardware requirements. Also, with just 8GB of RAM, this is just a toy.
@@RobotechII I’m sure you could probably do that exactly once
Imagine unleashing a team of autonomous humanoid robots running on a 2B multimodal vision model. Human: "I order you to STOP". Robot: "Queen Victoria was born in Australia in 1358". 🤣
Looking forward to setting it up
very powerful, but what about VRAM?
I imagine this is just the motherboard and core components - you can add VRAM, I would imagine. Still, with that amount of trillions of operations per second... is it possible it won't need much, or hardly any?
Here, found the specs:
Jetson Nano System Specs and Software
Key features of Jetson Nano include:
GPU: 128-core NVIDIA Maxwell™ architecture-based GPU
CPU: Quad-core ARM® A57
Video: 4K @ 30 fps (H.264/H.265) / 4K @ 60 fps (H.264/H.265) encode and decode
Camera: MIPI CSI-2 DPHY lanes, 12x (Module) and 1x (Developer Kit)
Memory: 4 GB 64-bit LPDDR4; 25.6 gigabytes/second
Connectivity: Gigabit Ethernet
OS Support: Linux for Tegra®
Module Size: 70mm x 45mm
Developer Kit Size: 100mm x 80mm
I think it shares the 8GB between the ARM processor and the GPU. So 8GB total.
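That's the unified-memory design: on Jetson, CPU and GPU draw from one physical LPDDR pool. A quick way to see it from PyTorch, assuming a CUDA-enabled build for Jetson:

```python
# On Jetson, CPU and GPU share one physical memory pool (unified memory),
# so the GPU reports roughly the same 8 GB the OS sees.
import torch

props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1e9:.1f} GB shared with the CPU")
```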
It's basically a trash graphics card with a cheap ARM core attached to it.
Okay, actually, now that I think of it, maybe it could be used as a way to get a cheap gaming PC. While 8GB is outdated, $250 for the whole system could make it a decent gaming box running a lightweight Linux OS.
I mean, we need to know how to chain them together. A tutorial on that would be cool. And we need to know how many we can chain together. Can we get up to 80GB of VRAM? What is doable and what isn't - the limitations.
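A quick sizing sketch for the chaining question, assuming weights are split evenly across nodes and ~1.5GB per node is reserved for the OS and runtime (both assumptions - real frameworks like exo or llama.cpp's RPC mode have their own overheads):

```python
# How many 8 GB nodes would it take to hold a given model's weights?
import math

def nodes_needed(model_gb: float, node_gb: float = 8.0, reserved_gb: float = 1.5) -> int:
    usable = node_gb - reserved_gb          # memory left per node for weights
    return math.ceil(model_gb / usable)

print(nodes_needed(40))   # 70B @ Q4 is ~40 GB -> 7 nodes
print(nodes_needed(8))    # ~14B @ Q4 is ~8 GB -> 2 nodes
```

Ten units would give you the 80GB nominal figure, but over Gigabit Ethernet the interconnect, not the memory, usually becomes the bottleneck.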
YES Matthew, a Jetson Orin tutorial would be wonderful. Great video. Well done!
I will wear mine like Twiki, around my neck. BEDEBEDEBEDE pretty cool Buck, BEDEBEDEBEDE
wut
you need to be a certain age to get that one ^^
("Buck Rogers in the 25th Century", iirc, lol)
Theo was a boss. But tbh, a human wearing one like that would look too much like Flavor Flav.
Dave's Garage did an install and walkthrough video of the device... it ships with a preloaded SD card, but he didn't see it in the bottom of the box, so he downloaded an image and went through a bunch of steps, only to later fish the box out of the trash... it runs Ubuntu. I just wish it had a bit more power... 8GB of RAM would never ingest the PDFs I need, and 1024 CUDA cores is about 3x off the mark. And with 8GB shared, I'm not sure how much GPU memory there is; I could not find a spec for that. But at this price and power point it gives Raspberry Pi concern, I think, as it is more capable for AI than their hardware.
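On the PDF point: ingestion is usually streamed, so the whole corpus never has to sit in RAM at once. A minimal sketch, assuming pypdf and sentence-transformers are installed (the embedding model is an illustrative choice):

```python
# Stream PDFs page by page so memory use stays roughly constant
# regardless of corpus size; embeddings are computed in small batches.
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model that fits easily in 8 GB

def embed_pdf(path: str, batch_size: int = 16):
    """Embed a PDF page by page; peak memory is bounded by the batch size."""
    reader = PdfReader(path)
    batch, vectors = [], []
    for page in reader.pages:
        batch.append(page.extract_text() or "")
        if len(batch) == batch_size:
            vectors.extend(model.encode(batch))
            batch = []
    if batch:
        vectors.extend(model.encode(batch))
    return vectors
```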
Buy 4 and stack them. I’d like to see that.
First 20 seconds into the video convinced me to get one 😁
Anyone else feel the same? 🤔
it's been for sale on Amazon for months
@@gonreebgonreeb The Super version at $249 has not been.
it's not $249 - everywhere I checked it's 3-4x the advertised price.
Let me know if there is a legitimate seller out there.
Yes a tutorial would be great thank you! This is really interesting. It’s definitely moving things in the right direction. The human neocortex doesn’t require a nuclear reactor for power.
If only graphics cards were as cheap and affordable.
I already know people are going to have a cluster of like 10 of these.
Nintendo: 👀, we are an AI company now
I see two numbers, 67 TOPS and 275 TOPS. Which is correct?
100% yes to the tutorial - that would be great.
Yes, please. I’m interested, but would like to see your setup video first before I pull the trigger.
Also, its capabilities and use cases.
Thanks in advance!
Yes… Please do a setup and use cases video!??
if you purchase one, the tutorial is a must
Thanks, BUT we must point out that even a new M4 Mac mini has 16GB of high-performance RAM. Models like 'vanilj/Phi-4:latest' deserve 32GB!!!!
Yes...please a tutorial! Thank you, Matthew! 😁
Whenever you feel the urge to ask "should I make a video on that?", the answer is yes - no need for the question.
I’d like to see a tutorial on how to set it up, that would be awesome!
Yes, please have a tutorial to teach how to set it up. Thanks.
You should get two and show us how to connect them together.
I would love to see you make a video of how to turn it into a battle bot.
Just ordered my 8GB model! Yes, a tutorial would be nice.
I'll be testing one very soon next to a cluster of Raspberry Pi 5s and clustered Beelinks. I was just playing with the concept of indoor drones.
yes, please make a tutorial about setting one up.
I have been waiting for this to replace my Jetson Nano. 1,000+ fold increase in processing for half the price.
This holiday, take a swig every time Berman says he's bullish about AI lol🎉😂
Just because of the awesome similarity in the name, I'm going to have to order one and play around with it. A lot of edging potential.
Would be great to see a video on setting this up with a functional LLM
I would definitely love to hear more about this subject. How would I connect it to an ordinary everyday Dell desktop.
…and allow it to log in.
Would like to see a video of you setting it up!
How is this a super computer?
if it's not connected it's not the edge, it's the center :)
I'd love to see a side-by-side comparison of this with a decently specced computer running the same LLM (or Stable Diffusion or Flux). I have my doubts about how it would perform against a computer with a 16GB 4090, given the specs of this.
2 min sponsor section on an 8 min video? you gotta be kidding me
Or in a more sinister direction, good enough to operate a drone with decent machine vision that can understand complicated missions from verbal instructions and go off to attack targets in coordination with a few thousand buddies in a swarm - without any RF control to jam. If these are as common and nearly as cheap as a Raspberry Pi - everybody will have brilliant kamikaze drones very soon.
Jetson is the best CEO since Jobs
Yes, I got one. I want to set it up as an agent, so I just need to link it to crew or other programs. Maybe load balancing would be cool to look at if you have more than one agent.
the dark age of technology is getting closer day by day 😶😶
I started to look for one and they're obviously in short supply, so up to double the price. I then started to wonder how they compare to a base M4 Mac mini. Once you've paid the UK price, they're well over half the cost of an M4. The Mac mini is roughly double the power for running LLMs, includes a power supply, and has double the RAM. This thing is 5x my Raspberry Pi but a fraction of my MBP M4 Max.
As I said, I almost ordered one today, but the difficulty of getting one in the UK made me think about the Mini, and I'm no longer as impressed by this as I was when I first saw it.
Bring on the Raspberry Pi 6 with some serious GPU cores!
They are in stock at RS for next-day delivery at £250 inc. with free shipping - 190 in stock, if you are still tempted :-)
A memory segmentation nightmare... The Intel Ultra 5 lets you dedicate 70GB to VRAM and network the devices with segmentation in torch... NO MORE TROLL TOLL FOR ME!
A tutorial would be nice. The memory is too small for me and the LLMs I run (70B Q6). I would like to see how this integrates with LM Studio, if possible.
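LM Studio itself is a desktop app for x86 machines, so it likely won't run on the Jetson's ARM SoC; the more plausible integration is the other way around - run an OpenAI-compatible server (Ollama, for example) on the Jetson and point any OpenAI-style client at it. A sketch, with the hostname and model name as assumptions:

```python
# Query a model served on the Jetson from another machine, using the
# OpenAI-compatible endpoint Ollama exposes. "jetson.local" is assumed.
from openai import OpenAI

client = OpenAI(base_url="http://jetson.local:11434/v1", api_key="unused")
resp = client.chat.completions.create(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "Summarize edge AI in one line."}],
)
print(resp.choices[0].message.content)
```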
Dude, we need VRAM, not FLOPS.
Forget the hype, please! So what if it can run, as you say, a 7B model - but how do we connect/merge multiples of these models across 2 or more of these base units??
If possible, create an Exo cluster using Jetson Nanos, similar to how some people build an M4 Mac mini cluster for distributed inference.
I have one of the older Jetsons. Ever since LEGO Mindstorms came out, I've been hoping for something tiny and powerful. It needs more RAM though.
Out of stock everywhere (of course), but I will be picking up at least one when able.
Yes! Tutorial, please! o_o
It's got only 8GB of VRAM, so the use case would be in the robotics and automotive industries. Unless you can run 10-20 of them in a cluster and have the cluster provide good inference speed for 72B models, I don't think it would be more attractive than getting two 3090s.
I wouldn't let it drive a car. Maybe solve some captchas....
Great stocking stuffer🎅🎅🎅
I've been thinking about getting an external GPU for my laptop so I could run local image generation - could I plug this in instead? And take advantage of my laptop's 32GB of RAM?
YES, tutorial,please. ❤
Can it run doom?
Would it be possible to build a workstation computer using Windows 11 on Arm? If so, please provide a tutorial showing how to make such a computer.
Hey man, I like you and your content, usually. In the interest of truth, though, I recommend comparing the specs on this thing to NVIDIA's RTX 4000 series of video cards, just for giggles. When I queried Perplexity with "Show/make a chart showing all Nvidia RTX chips from 4000 series with a column showing how many CUDA cores, tensor cores, video memory in GB, and memory GB/s," I got interesting results. For example, looking at the weakest card, the RTX 4060, I see 3072 CUDA cores, 96 tensor cores, 8GB GDDR6 (vs. LPDDR5, which is not VRAM), and 272 GB/s.
Regarding it powering all those models... at what speeds? I'm thinking - based on those stats - glacial.
And yeah, I realize this isn't a fully apples to apples comparison because that Jetson Orin Nano also has a [weak] CPU.
That all said, for that price, I'm sure it's a great deal. I just don't love the breathless way you are reading out those stats like they are super crazy wild amazing.
What kind of t/s can this get on something like llama 3.1 8B?
30 tokens per sec
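You can sanity-check that from first principles: decode on a memory-bound device tops out around memory bandwidth divided by bytes read per token. The 102 GB/s figure is the Orin Nano Super's published bandwidth; the model size is a rough Q4 estimate:

```python
# Decode ceiling ~= bandwidth / model size, since each generated token
# streams all the weights once; real speeds land below this ceiling.
bandwidth_gb_s = 102   # Orin Nano Super memory bandwidth (published spec)
model_gb = 4.7         # llama 3.1 8B at Q4 quantization (rough estimate)
print(f"ceiling: ~{bandwidth_gb_s / model_gb:.0f} tok/s")  # ~22 tok/s
```

That suggests ~20 tok/s is the more realistic figure unless a smaller quant is used.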
Can I put it in my old PC and turn it into something useful?
70 TFLOPS means 20,000 Raspberry Pis.
8GB of RAM? It's recommended to have 8GB just to run Windows 11, and any large AI model beyond ~20B params takes more RAM than that...
We should get youtube premium discounts when we watch videos that have ads built in.
There are people starving and homeless out there - I think we can live with a 30-60 second ad given by a guy who freely gives out so much good information - even without a 'premium discount'.
@ hahaha. What does one have to do with the other? I like Matt and I'm glad that he puts in timestamps so we can skip the ads, but having ads definitely defeats the purpose of paying for Premium.
Is this thing going in Optimus?
275 TOPS? ... That's >$40,000.
You can get 300 TOPS for ~€3,000 with a TPU - no VRAM cap, it works with your CPU RAM as VRAM.
The 67 TOPS Jetson is the one that costs $250.
Dang, this sounds incredibly futuristic. Trillions of what? How? 25 watts? Aren't regular bitcoin miners a joke compared to something like that?
Show tutorial and use cases
If you get the Mac mini base model with a student discount, it will be 2x the price but you get 2x memory and 3-4x inference speed. So I do not see the point.
Would this run AI better than my desktop computer with an RTX 3050? Should I attach it as an external device and have it generate images and video?
Yes please do a tutorial.
This has been out for years... it used to cost $800.
How does this compare with an Nvidia graphics card? Can it run a model like flux or stable diffusion? Will it be more efficient compared to a graphics card?
Oh man, I never saw a computer that was so STUNNING... "Don't recommend channel"
This device would 'just about' run a 7B LLM... it would not, however, run a vision model very well, nor would it be at all good with diffusion models because... well, the GPU isn't good enough. But... well, it's a start at making SBCs good for robotics, and a pretty good start.
Why are they promoting this now? I've had a Jetson Orin for at least two years. Strange. This isn't new.
3060ti/4 for $249..
They released it days after I bought a $250 NVIDIA card for my LLM machine 😒
They haven't even claimed it's a supercomputer, so how are you claiming it to be one? 😂
it has 8GB of memory...
If I could set up a vision model using a language model, that'd be the bee's knees. ;-)
As opposed to troubleshooting programming from an LLM using an LLM... that's not what I'm talking about.
Absolutely make a tutorial.
Can it run PS2 and Saturn?