They already have issues with GPU export controls; going all-in at every level with China would probably draw even more unwanted regulation onto them. And yes, that is what RISC-V is, and why the foundation moved to a "neutral" country to begin with.
@@Combatwhombat Nominally it's good that int operations are considered as well, but it drives me mad that the format isn't part of the name, since that basically determines the performance. Having 2 petaOPS* is fine and dandy until you can't compare it to last gen, which doesn't have that format, and you can't use it in your inferencing workload. *int4 operations per second
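A rough illustration of why the precision format matters when comparing headline figures (a sketch with hypothetical numbers, assuming the usual tensor-core rule of thumb that throughput roughly doubles each time precision is halved, and that "sparse" figures come from 2:4 structured sparsity doubling the dense number):

```python
# Hedged back-of-envelope: convert a marketing "OPS" number to dense int8-equivalent TOPS
# so different generations/formats can be compared on one axis. Approximation only.
def dense_int8_equivalent(headline_tops: float, fmt: str, sparse: bool) -> float:
    to_int8 = {"int4": 0.5, "int8": 1.0, "fp16": 2.0}[fmt]   # conversion factor to int8-equivalent
    dense = headline_tops / 2 if sparse else headline_tops    # strip the 2:4 sparsity doubling
    return dense * to_int8

# "2 petaOPS" quoted at sparse int4 works out to ~500 dense int8-equivalent TOPS, not 2000.
print(dense_int8_equivalent(2000, "int4", sparse=True))  # -> 500.0
```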
It doesn't sound bad, but they are super overpriced; it shouldn't be over 1k to be worth it. If they were priced better they could be really good. I think Nvidia could even make their own mini PC for the consumer market if they wanted.
What you describe has just been announced as a product, called the Truffle-1. $1299, using the 64GB model at 60W for local LLMs. A much better deal than the dev board!
@@supercurioTube interesting! On their page I went to the features and it has an exploded view of the device. Guess what's in it. Go on. Yeah, an Orin.😂 Well, you didn't say it wasn't. The critical difference is almost certainly the carrier board and 'case.' I agree, they're way too expensive already. From experience if you end up in the market for one, it might seem like a way better deal from a Chinese supplier... right up until DHL charges you the tariffs. It's not just greed driving the price of these things! I got my Xavier NX from Seed Studio at a premium because at the time they were the only one with it in stock. I don't know how thin the market is for consumer sales of the dev kits, but it's probably not consistent.
Well, given the CPU+GPU SoC design and other rumours, there's also a chance nVidia will use the new gen not only in robotics brain units but also in handhelds if they manage to efficiently accelerate x86 translation, or in future Shields that combine mini PC performance with a smart TV box, or maybe, given the success of Qualcomm's ARM laptop foray with Microsoft doing a lot of software heavy lifting/tuning to accommodate a wide range of use cases, in nVidia-only laptops. They could certainly make a lot of money from those three options, but the edge inferencing, machine/factory automation, and DC businesses would net more cash inflow, so the execs are rather going to accommodate those markets, leaving these units as dev boxes for the preliminary project stages before utilising the heavy hitters from nVidia's AI/compute stable.
Heck, given nVidia's portfolio of IP, they could have made a truly unique device crossing multiple borders... just think for a second... they have ARM CPU architecture somewhat in-house now, they have ARM GPU designs and of course their own, they have had their own set-top boxes for home entertainment for years, and they own a lot of performant networking from the Mellanox acquisition which can drive DC/HPC rack communication, while the now less fancy older designs could easily serve the 10GbE-25GbE spectrum. Thus, they could very easily create a new Shield, one armed with multiple NICs/SFP28 ports driven by Mellanox IP, with management and extra processing or compute duties done by the ARM CPU cores and the nVidia GPU accelerating media and AI tasks. Maybe they could even add some WiFi capabilities to create an edge switch/router/access-point that could live near your TV doing the smart TV Android duties, or live in a rack or on a shelf in a cabinet hidden from plain view, allowing some light console/PC-style game casting to the local network, and with docker/OCI containers running on the hardware doing a lot of traffic analysis or media processing... a house AI combo device to take over all the processing needs of your surroundings ;)
@@ServeTheHomeVideo I don't use Ubuntu either - it's fine for newbie users but I consider it bloated rubbish too. I have been running Gentoo Linux for more than 20 years now - I can build Linux to my own specifications on whatever platform I need it on (and not anything made by NVIDIA).
@@HyenaEmpyema It's useless anyway since it is about 4 years behind the times; AMD's current generation APUs have up to 8.6 teraflops to this thing's 1.2... Even the basic 4CU version of AMD's CPU is over twice as fast. As an example, the SZBOX S77 is an SBC that only costs 500 same as the Orin, has 7x more teraflops, much stronger CPUs, is more power efficient... 2.5G ethernet. Even the fastest version of the Orin falls behind this board by about 2 teraflops at 3x the cost.
@@Wingnut353 I think AMD's current designs lack actual A.I. acceleration but that seems to be changing rather quickly. They're adding A.I. accelerators on a whole slew of products and I think RDNA4 might be the architecture to usher in that change.
If the next generation has faster and more memory, I would definitely get this over the Apple M Ultra. If I can get my 3090 running alongside this then I think it will be a decent AI inferencing machine. It won't be better than Apple's M Ultra but it's definitely cheaper and more upgradable. The setup will also look a little jank, but it's worth it for me. It's not all positive though. One of the main drawbacks is that with a smaller client base, there will be less developer support for this product. That's the one thing that Nvidia can't miss on. I don't need the fancy operating system, but this has to work.
Interesting you mention that: the Truffle-1, based on the larger module available, is kind of in the ballpark in performance for local LLM inference with an M1 Max 64GB. So to stay competitive with what devs would already have on their laptop today, Nvidia should provide an upgrade soon, as you suggested.
Nice to have a deep learning toy with 8GB of RAM, I'm sure we will all be competing with OpenAI with it. Because you've got lots of LLMs that will run in 8GB.
I get the hate. But us old heads who remember the GPU wars all the way back to the Voodoo cards can often get stuck in our ways. AMD graphics cards have a long and storied history of terrible drivers, spotty support, confusing branding, and low developer uptake, while team green proved for many years to be reliable. I propose that the apparent love you're seeing for Nvidia is mostly old habit/brand loyalty. That said, the Jetson line has been pretty much the most robust AI-performant SBC on the market for the last few years. And it is pretty much the only one able to utilize a GPU over PCI Express out of the box. The x16 lanes on the AGX variant permit adding another Nvidia GPU; for instance an RTX A4000 (same chip as the 3070 Ti) would add 16GB of VRAM and over 6000 Ampere CUDA cores for only 140 W, totalling a peak 200 W power envelope. How many Intel Compute Sticks or Google Corals do you need to plug into a Pi to reach the same performance?
@@Kane0123 Hardware being provided for free (even if temporary) is sponsorship, at least in the vast majority of jurisdictions, and requires more significant disclosure than is done here. For example:
In the US, FTC guidelines clearly require clear disclosure, even if no money changed hands, and this is not done sufficiently here.
In Canada, similarly, this would have to be clearly disclosed, for example by marking it as sponsored (receiving hardware IS sponsorship, no money has to change hands) in the creator studio, which it is not.
In the UK, this video would have to be clearly labelled as "Ad, Advert, Advertising, Advertisement or Advertisement Feature" using a "prominent disclosure label"; the YouTube marking is insufficient here.
Similarly, in Germany, this would have to be marked (with a large, constantly visible text on screen for the entirety of the video) as "Werbung" (Advertisement); as in the UK, the YouTube label is insufficient.
@@Kane0123 He responds to comments about the proprietary nature of Nvidia software, justifying it by stating he's running Ubuntu. Running an open source OS has no bearing on or relation to the closed source nature of Nvidia software. The whole thing just smells.
@@zivzulander you're missing the point. The FTC doesn't make laws, but that doesn't make their rules non-binding. I can't link things on YouTube, but it's as simple as Google searching. You can find an 8-page PDF on the guidelines, and an even more expansive one on the rules those guidelines are based on, the latter of which is binding and enforceable. For example, the Contact Lens Rule, which requires prescribers to provide patients with a copy of their prescriptions after fitting, is actually an FTC rule, not a law. A lot of the applied framework of COPPA was also defined by FTC rule, not by the Act of Congress, and the FTC was responsible for a 170 million dollar case win against YouTube for COPPA (rule, not act) violations. Saying "thanks to Nvidia for letting us borrow X" is not sufficient disclosure according to the FTC guidelines. Heck, just using #ad or just marking the video as an ad in creator studio is insufficient (at least according to their guideline article). It is never made clear that this video is an advertisement, or that sponsorship is involved. It is intentionally obscured as much as possible, while staying in the realm of plausible deniability when it comes to breaking FTC rules (which most certainly were broken). Furthermore, the FTC recommends this disclosure happen in both video and audio; the video part is lacking here. Again, FTC guidelines require "clear", "simple" and "hard to miss" disclosure. Just the fact that people in the comment section were asking if the video was sponsored should be evidence enough that the disclosure was insufficient.
8:11 204.8 GB/s!!??!! Yikes. With 64GB RAM it's a lot cheaper than a Mac Mini. Plus you can add extra storage. If anyone has tried to run a macOS ARM Hackintosh on that, it'd be cool with a video on it. Also, I hope Nintendo Switch games can be "made" to run on it.
Orin is nothing new. And why should I buy this, with a locked-down ecosystem and next to nothing open source? Trash and a dumpster fire for nearly all users that need things like this.
On the Truffle-1, based on the 64GB module, the company specced 20 tokens/s on Mixtral 8x7B (quantization unspecified). You can look up this product for more figures.
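As a rough sanity check of that figure, here is a memory-bandwidth back-of-envelope. It is only a sketch with assumed numbers: roughly 13B active parameters per token for Mixtral 8x7B and 4-bit weights; the 204.8 GB/s bandwidth figure is the one quoted for the AGX Orin elsewhere in these comments:

```python
# LLM decode is usually memory-bandwidth bound: each generated token streams the
# active weights through the memory bus roughly once.
bandwidth_gb_s = 204.8       # AGX Orin 64GB LPDDR5 bandwidth (quoted in the video)
active_params_b = 12.9       # Mixtral 8x7B activates ~2 of 8 experts per token (assumed)
bytes_per_param = 0.5        # 4-bit quantization (assumed)

gb_per_token = active_params_b * bytes_per_param
print(bandwidth_gb_s / gb_per_token)  # ~31.8 tok/s theoretical ceiling; 20 tok/s measured is plausible
```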
Leaving aside everything else, the lack of proper software support for uses outside of what Nvidia deigns acceptable for us plebs makes this pretty much DOA for many if not most tinkerers or enthusiasts. There's also very little reason to believe that ARM is really that much of an advantage. AMD is very close to Apple silicon for performance per watt in general tasks, and the advantages Apple has have very little to do with the ISA. Apple dedicates a lot of silicon to what are essentially ASICs used heavily in some tasks. ASICs and accelerators will always be more efficient than general compute regardless of the ISA. I'd rather spend more on a fully supported x86 SBC or save money and buy a different ARM device. It feels like Nvidia is just dumping unsold silicon on mugs with this.
I had to downvote this. Very, very few people developing for RPi5 have any reason whatsoever to jump to these. Everything from power consumption to price per unit, hardware support to software support, is inferior with these devices. Honestly, I hadn't expected such a stupid comparison from STH. It's disappointing.
@@ServeTheHomeVideo Apple has their own complete ecosystem with their own operating system and to-the-metal optimisation. They're also converging and streamlining the technology and IP of their smartphones and PCs to make programming simpler. The ISA used is only a minor factor. ARM has some advantages with power at smaller scale but it hasn't taken over like it was predicted to, and AMD have said they see Apple silicon as more of a performance competitor than Intel and have significantly closed the gap in performance per watt. It seems like ARM has a lower power floor than x86, but newer AMD SoCs have similar performance per watt to Apple when the M chips aren't using dedicated silicon accelerators. ARM will never go away, but there is no reason to believe it is likely to grow its market share much further. PLCs and small industrial controllers are already switching over to RISC-V, which will offset some of the growth in larger ARM chips.
@@ServeTheHomeVideo Nvidia might have an ulterior motive: Intel is a competitor and the Gaudi 2 is proving to be a serious contender. Intel is also adding an NPU to its mobile processors. However, perhaps the main "threat" could be oneAPI, which may give CUDA a run for its money!
This feels like you’re comparing Toyota to a Ferrari… sure the Ferrari is better in almost every way, but it’s also 40x more expensive… an probably consumes 40x more gas as well….
I hope NVIDIA paid you a lot for this ad. This hardware is NOT a replacement for a Raspberry Pi at $400, and it is NOT a good AI platform for 99% of use cases. It is very overpriced and models will not easily run on these chips. Hell, they don't even have NVIDIA's last-gen architecture. I am used to ServeTheHome having better content.
"Borrow these"?. How cheapskate is Nvidia ?. Shocking. Do they not know that they get at least 10X value from this video alone ?. Just Patrick mentioning this product is going places for them!.
People have the right to dislike proprietary drivers and high-priced accelerators, but they shouldn't shun others for interest in the entry-level tools for self development. Stop shouting at clouds.
These development boards would be more useful if they used a RISC-V extendable CPU, a coherent FPGA ISA/DMA fabric, and a dedicated debug/observability CPU with 10GbE telemetry IPMI. On batteries, it would be useful to have a microSD IRQ backed by tunable/extendable capacitors/battery to save critical data on power loss/recovery.
YT is not letting me edit my other comment, so I'll make this one: I've learned the hard way that it can be anything. From 16 bits in small industrial systems, through 64 bits as standard DDR sticks have, to the 256 bits of Apple's M series, and even higher.
Disappointed by the improper sponsorship disclosure; you've lost a LOT of goodwill with this video. Hardware being provided for free (even if only temporarily) requires disclosure basically everywhere:
In the US, FTC guidelines require clear disclosure, even if no money changed hands, and this is not done sufficiently here; you never even mention the word "sponsored".
In Canada, similarly, this would have to be clearly disclosed, for example by marking it as sponsored (again, receiving hardware IS sponsorship, no money has to change hands) in the creator studio, which it is not.
In the UK, this video would have to be clearly labelled as "Ad, Advert, Advertising, Advertisement or Advertisement Feature" using a "prominent disclosure label"; the YouTube marking is insufficient here.
Similarly, in Germany, this would have to be marked (with a large, constantly visible text on screen for the entirety of the video) as "Werbung" (Advertisement); as in the UK, the YouTube label is insufficient.
@@zivzulander FTC regulations are not laws or statutes; they are guidelines set out by a federal agency (the competency for which has been delegated to them by Congress in the FTC Act). They are, however, still binding (as it is within their delegated powers to regulate advertising, and can result in sanctions). This is typical in the US and also the case for agencies like the FCC, for example. Can't link it here because it gets filtered by YT, but there is an FTC guidance article (on FTC dot gov) that explicitly mentions that "thanks" should not be used in these disclosures, as it can be "vague and confusing". Saying "thanks to Nvidia for letting us borrow X" is blatantly misleading; it should just be something along the lines of "Nvidia has provided these units (for review)". As it is, the disclosure massively downplays Nvidia's relation to this piece of content. The disclosure made here is at the least very poor, certainly in violation, and a rather surprising and out-of-character bout of incompetence, or wilful ignorance. Additionally, you must follow the guidelines of any country your video is broadcast in. If you're a non-US creator, the FTC guidelines still apply to you (if your video is available in the US, for example). The same goes for European regulations. Videos like this should ALWAYS be tagged as sponsored in creator studio, as a sponsorship objectively exists, and not doing so most likely violates YouTube's guidelines as well (although YouTube's usual shenanigans mean they only enforce this when it's convenient for them).
Failed to mention they all have very firm EoL dates. Some are 5 years off, some are next year, so watch your step.
This seems valuable
Nvidia doesn't upstream Linux support for these boards, so you are entirely dependent on their binary-blob OS distributions, consisting of an old kernel and a grab bag of random AI tools. What you see is exactly what you'll be stuck with after the EoL date.
@@Kane0123 I learned about it the hard way, so I just hope the comment will save some others from the same grief. For all of their faults, NVIDIA GPUs have pretty long legs as far as updates/support go, and I'm used to the Raspberry Pi side of things on the little boards, so I never thought to check whether the 2 boards I was purchasing would have all support totally dropped in 4 months. Actually a bit surprised NVIDIA does support their GPUs as long as they do.
@@LackofFaithify Yep, I'm in the same boat. Bought a Nano last year only to find that all the documentation no longer referred to that board, and I'm stuck on an ancient JetPack version.
The target market for the Raspberry Pi, according to Upton, was/is education for teenage programmers.
His inspiration was the 1980s home computer revolution, specifically the BBC Micro, and the bedroom programmer trend.
The primary features are affordability and accessibility.
Those WERE the goals.
RPi4/5 are absolutely not affordable, nor suited for education anymore.
$20/$35/$50 is the price range for a tinker/hacker/student not $75/$100/$150.
RPi hasn't been a relevant maker/hacker/education platform since the RPi3. And it's not going to get any better with the IPO.
This is clearly "Profit for Profit's sake" behavior, not "Let's make the best thing we can and get paid fairly for it" behavior.
Stop buying RPi.
@@Prophes0r RPi 4 is available in quantity for $35+, and the Zero 2 W is a steal at $15 (usually available here in the States now).
@@JeffGeerling I've never seen REAL RPi4s for that price.
I've been able to place an order, but they always got canceled.
I tried to buy 20 for a class I was helping with last fall and they were still nowhere to be found except from scalpers.
Don't get me wrong. RPi was great. And it is a fantastic example of how good documentation can make ALL the difference.
But the RPi I see today looks more like a profit-focused "Business First" company than one by & for makers.
@@JeffGeerling I was going to reply the same thing!
Plus, you can get the Zero without BLE for $10!
The keyboard, mouse and power supply will cost you more!
@@Prophes0r Uh, dude? Turbo Pascal cost $50 40 years ago, $150 if you wanted to distribute binaries made from it, PLUS the hardware cost. The C64's MSRP was $595 42 years ago. On what planet is a full-powered computer with software costing $60 not relevant for education or affordable? Plus, every model of RPi except the very first is still being made, so if you want to continue learning C++ on an RPi3 (because $35 is too much for an RPi4), guess what, you can.
1:30 They cost WAY more; they're ridiculously overpriced: AUD$840 for an Orin Nano vs AUD$170 for an 8GB Pi 5. I remember when I got my Jetson Nano 2GB for AUD$90... those were the days...
No leather jacket like Jensen? Missed opportunity…
At first I misheard and thought all these models are called Jensen (rather than Jetson) 😅
It was more confusing because I have been to a Jetson launch where Jensen handed me a snack just before getting up and launching the dev kit.
@@ServeTheHomeVideo haha! did you eat the snack or frame it and hang it up in your studio?
@@ServeTheHomeVideoI caught the joke 😂😂😂😂😂😂😂😂
@@ServeTheHomeVideoso you're saying you got a jensen lunch then a jetson launch?
Third, and final, comment: last I checked, Jetson had *no* first party support for the industry-standard Yocto framework, including Automotive Grade Linux. That's a big negative in my books.
If you're an automaker, you will get first-party support from Nvidia. So if you are a contract developer, you might need to talk to your client.
While I enjoyed my time playing around with Jetson non-Orin Nano during COVID...
I became concerned about supporting a monopoly and using a locked-down SDK.
I believe 25 Raspberry Pi 5s are more powerful than a single $2000 mini PC
Yeah, or build a 2018 max-spec PC for $1000 and have many times the performance.
@@NickDoddTV Or buy old server gear for the same price and get all the nice things with server-grade stuff (i.e. ECC RAM etc.)
Maybe...but what are you actually comparing?
25 Pi5s will cost you at least $3500 when you account for all the other stuff you need to use them all, which includes all the cabling, power supplies, SD cards or SSDs, cooling, a network switch, cases, and other things.
25 Pi5s also take up WAY more room than a mini-PC.
Unless you are talking about a purpose built Pi cluster chassis (which will run you an additional $2k-$5k) you are talking about at LEAST a full-depth 2U case.
Now that we have a REASONABLE comparison...
Are 25 RPi5s more powerful than $3500 worth of "computer" that you can stuff into 2U of space?
AHAHAHAHAHAHAHAHA....no.
I could EASILY build a 6-node cluster of cheap 12th-13th Gen Intel iTX boards with 20+ threads, 32GB of RAM, 1TB SSDs, and dual 10G NICs for well under $2500.
Stuff that into a 2U chassis with redundant power supplies, an IP-KVM, and even MULTIPLE 10GbE switches and it will absolutely wipe the floor with a crappy Pi cluster.
But why stop there?
I have a 5-node cluster of USFF PCs right now that cost me ~$650 including the upgrades.
8-core/16-thread 10th Gen(ish) CPUs.
32GB of RAM
1TB SSD with real DRAM cache (Samsung 970 Evo Plus. Not some garbage no-name.)
Dual 40Gb networking.
25 of THOSE would only cost $3250, which leaves $250 to pick up all the optics and fiber you need AND a 40Gb switch.
Note: Both of my cluster solutions have full Intel iGPUs with hardware transcode and whatever other compute use you want. And those iGPUs are actually supported with drivers.
But what about a single system instead of a cluster? Not everything scales well.
Could you build a single server for $3500 that would absolutely demolish those 25 RPi5s?
Yes. Without question.
Better CPU.
Better storage.
Better networking.
Better expansion.
There is no reason to buy a Pi5.
If you need a "computer" you can build something better from used parts and keep e-waste out of a landfill.
If you actually need something small and low power, there are better options at half the cost or less.
The Orin Nano is $500?! The old Nano was ~$100. You could buy a much more powerful GPU for that. It's not worth it at all.
Nvidia needs to make Money Number bigger.
I'm not sure why anyone gets confused when they do things.
Make. Number. Go. Up.
Which is why no one should be buying anything they make...
They still have the lower-end models at lower prices as well. Just had these two on hand.
Ohhhh... overall, THAT Nano was much slower. For example, it had only a 64-bit memory bus with a much lower memory frequency and a much slower memory type. Now it's a 256-bit bus with LPDDR5. It's much better. Overall, they use all the benefits of memory soldered onboard. Unlike, for example, Apple with their comparable variant but only a 128-bit DDR4 memory bus...
This is basically blackmail from Nvidia. This is probably the ONLY solution for robotics AI or edge AI applications. They can charge as much as they want.
Nvidia's laughing all the way to the bank, and at us for paying through the nose .
Just don't buy it.
Don't buy their consumer GPUs.
Don't buy their datacenter "AI" stuff.
It's THAT easy.
I like how Torvalds expressed his thoughts about nvidia. ;)
It's interesting that that "F- you, Nvidia" was in 2012 and with every subsequent year, they deserve that "F- you" even more
Of course, in the Venn diagram, the circle of Raspberry Pi use cases and the circle of Nvidia's board overlap 100%.
Not even close. These are designed to be 'industrial embedded' AI inferencing solutions for things like security monitoring with tracking, driverless cars, and medical imaging analysis. An older generation of these SoCs powers the Nintendo Switch, totalling 139 million units and counting in a consumer product.
@@Combatwhombat I don't know, this video claims otherwise... I've already thrown my Raspberry Pis in the trash... 😜
@@docwhogr Google a picture of the 'Nvidia Drive Hyperion' and/or 'Nvidia Clara' hardware and then look at the AGX again.
But, it won't have the development community support of the Pi. It will be hampered by nVIDIA's strong use of proprietary technology. It won't have its code fully open sourced. It's just another nVIDIA product of limited use. Like all other nVIDIA side projects, it will be cancelled once you've invested into it.
If AMD could make something like this, it would be far more useful due to their open-source community support.
On the flip side, look at NVIDIA's recent earnings and how much companies are spending on AI. Never underestimate what can get done if there is a lot of money in a market.
@@ServeTheHomeVideo AMD literally has better APUs than Nvidia in this segment right now... and fully open source support either already available or coming down the pike for the XDNA stuff... you'd have to be insane to jump on a proprietary bandwagon right now. Let me spell it out... why choose a weak ARM CPU when you can choose the strongest mobile x86 CPU for the same price, and why choose the most proprietary, locked-down GPU driver when you can choose an open one... also, this piece of garbage only has 1-ish teraflops... whereas the Legion Go in my hands right now, FOR A LOWER PRICE, has 8+ teraflops. This thing is nearly 8x weaker than AMD's competing platform on the GPU alone...
And on top of that there's a "not for sale" sticker on it, and even if you manage to go around that I haven't heard a price either. Also looks overkill for most of RPi uses
@@ServeTheHomeVideo possible but nVIDIA seems more focused on the Enterprise market than other markets. The companies spending on A.I. are spending for the enterprise equipment. Like gamers are being left to the side by nVIDIA, so will everyone else who isn't called Microsoft, Google, Amazon, and Meta.
nVIDIA has been acting very much like VMWARE as of late. Don't worry, I still liked your video as I do all of your videos. You always do a great job. I'm just not feeling nVIDIA as of late.
@@thepcenthusiastchannel2300 VMware got bought by Broadcom, so that is why they have jumped off the deep end... Nvidia was already a company that was all about maximizing profits over end user satisfaction.
I thought you would know better than me. But the reason literally everyone is making Arm server CPUs is that they can customise the CPU (for example, using NVLink in Nvidia's case). If x86 could be licensed like Arm, then Graviton, Grace, etc. would be x86.
Yup, and not really. If you're talking about mini PCs, sure, but Arm is better for low-power use cases than x86. It could be argued that's not because x86 can't be, but due to differences in development focus. For various industries, low-power Arm SBCs are crucial to their needs.
For most hobbyists the cost and flexibility of SBCs are the lure and the low power requirement only a concern in niche uses.
@@Combatwhombat That is a fallacy.
ARM is equal or less efficient for many (normal) tasks than x86/x64.
It isn't even more efficient at the super low-end anymore (like mobile CPUs) like it USED to be when it got that reputation.
In fact, for server-scale stuff, which is the WHOLE point of a dev kit like this, current gen ARM is about as efficient as the current gen x86 stuff of the same performance.
Performance per Watt is what matters when deploying hundreds of CPU cores.
And in the mobile space, some of the newest Intel CPUs have insane performance per Watt numbers even down to the tiny 5-7 Watt max range.
Below that? Sure, ARM "wins" the 1 Watt game. But when you are THAT low you might be better off with a microcontroller instead of a CPU anyway.
Fun fact: the Nintendo Switch was codenamed NX and uses the Nvidia Tegra SoC, which is a slightly more graphics-oriented Jetson Nano. Rumor has it that the Jetson Orin NX is likely to power the "Switch 2" with some tweaks to GPU optimization. I have a Xavier NX (Volta architecture) that I bought to play with some light ML workloads. Now that I have an HEDT for my ML fun, I've been using the Xavier as a game emulator / "Steam" NUC. It's not cheap but it's pretty potent.
While there are some options for carrier boards out there (including an open source one!), you can plug an Orin NX module into a Jetson Nano carrier just fine. There are also a few options for single-board clustering (TuringPi and Seeed Studio Mate, for instance), making a really cool low-power homelab for things like Docker or k8s. I know the TuringPi can mix and match with a Rockchip SoC on top of offering a Pi CM4 adapter board.
Also, these are sold by Nvidia for four different markets: gaming (Tegra (NX)), AI inferencing for medical (Clara (AGX)), and AI inferencing for video, which targets driverless cars (AGX) and security monitoring (NX). They're sold in these development kits but never really advertised as a mainstream "maker" product like the Pi. Nevertheless, the dev kits are perfect for all sorts of prototyping of "embedded" applications. The point of all that is to say that the majority of these in circulation aren't dev kits. Nintendo alone has sold 139 million of one of the earliest in the lineup. Just because they're not as popular as the Pi flavors doesn't mean that they're a gimmick!
Tegra is a super old line and was developed for phones (didn’t work).
@@LtdJorge what kind of processor cores are in mobile phones and tablets? Arm!
Yeah, the Tegra (e.g. Jetson TX2) was built on the Pascal GPU (think 1080 Ti) and mated to four ARMv8 A57 cores (same generation as the Pi 3's smaller A53 cores). In other words, nearly all Arm SBCs run mobile hardware! Even the Orin rocks A78 cores while the Pi 5 runs a slightly older A76. In just core counts that means there's about 1/8th of an RTX 3080 on the die for the NX and 1/4 for the AGX. Since each generation of architecture on average doubled the performance of each core over the previous, we can roll back the clock until it approaches a full-size card:
1/4 a 3080
1/2 a 2080
A 1080 Ti with modern features and a 12-core 64-bit processor, both with direct (shared) access to 32GB of LPDDR5.
I'm curious though, how this information changes your perspective on either the Switch or SBCs?
LATE but HAPPY 500k SUBS BROSKI!!!!! Well deserved!!!
Thanks!
I once considered an Nvidia SBC for a custom thermal camera platform for filmmaking. However, I struggled with the electronic engineering and never got digital data out of my thermal imaging modules... as I kinda broke the serial chip. So the project has been suspended for five years now.
I remember Nvidia had a 180€ SBC whose video codecs and storage options I would have been happy with for a camera platform. Potentially even doing autofocus, or at least focus peaking.
Yes, but does it run windows arm, and run Crysis?
Keep in mind that these aren't meant for your Joe Schmo or a regular homelabber. These are development kits for enterprise and industrial scale customers. The end products are generally utilizing computer vision and inference engines, such as license plate readers or smart sprayers for farm equipment.
How does this compare to the new Apple M4 products (e.g. the Mac Mini) for developing AI applications? Which would be the better choice?
500k subs, congrats!
I love the Jetson Nano (NX form factor)
I remember getting a Maxwell-based Nano (it shared the same CPU as the Nintendo Switch), then throwing those SODIMM modules into a carrier board and having a tiny little 4-node cluster smaller than a Mac Mini.
I killed the board and all of the modules when I tried to add a Turing-based Jetson: the heatsink had a slightly different shape and ended up dead-shorting two components on the board.
Never a more sad moment with white smoke :(
Aww, that's a shame! It sounds like you had one of the Seeed Studio Mate clusters as well. Mine seems to have been shorted out when I tried to use one of the TuringPi CM4-to-SODIMM adapters with it. Flashing each one in slot one and fiddling with that poorly placed jumper was a bit of a headache though.
I keep eyeing the TuringPi 2 cluster board since it is designed to support the Jetson, their Rockchip pi module or CM4s with their adapter. Each node has an m.2 slot and it's a standard mini-ITX form factor w/ an ATX header so you can build a pretty cool super efficient homelab into your flavor of mini-case.
@@Combatwhombat Yes, I think the TPi2 is a far better layout than my Jetson Mate
Disclaimer :- "No JEFF GEERLING were harmed in this video.....!!"😉😂🤣
I am surprised he has not texted me yet :)
@@ServeTheHomeVideo The message is clear 😉🙃😅😂🤣🥲
500 USD?! Plus 25% tax in Norway. You can buy a killer PC on the used market for that.
Until they get better at maintaining the JetPack software stack, these will always be kind of janky
500k well deserved Patrick!!!
Much thanks!
7:28 Would a 100 dollar Intel Arc with AV1 support be an option?
Just got an A380 for video encoding/decoding. No idea how it does for games but for that use the thing is damned impressive and, as you say, at $100.
Was the Pi ever a good option for AI?
The Jetson is definitely not a cost-effective I/O controller.
The Jetson might be a good option for running edge inference on a camera feed,
but inference isn't AI, pretty much by definition. You're running a static model that lacks the ability to self-modify.
What type of port is the one labeled camera on the Jetson AGX above the m.2 slot at 5:43?
This is for the MIPI cameras.
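For anyone wondering how that connector actually gets used, here is a minimal sketch, assuming JetPack's bundled GStreamer-enabled OpenCV and a CSI sensor on sensor-id 0:

```python
import cv2

# nvarguscamerasrc is NVIDIA's GStreamer element for the MIPI CSI camera connector on Jetson.
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()  # grabs one frame if the sensor is detected
print(ok, frame.shape if ok else None)
cap.release()
```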
1 PCIe slot = loads of PCIe expansion? A $2000 device is almost as fast as a 4-year-old $600 device? What a deal!
I'm 99% sure this is a video paid for (in gold) by Nvidia.
Talking about AI with 8GB of total RAM....
I thought these were like $100? What's the use case for such powerful AI, yet probably not long-term reliable, in an IoT-sized package?
@@Kane0123 These SoCs are used for "AI inferencing at the edge" and are currently sold primarily for automotive use (see Nvidia Drive). They're also leased as network-attached devices to hospitals for imaging analysis (see Nvidia Clara). They could be used for lots of one-off kiosks (where dynamic TV menus and touchscreens demand a bit more than a Pi but not a full computer...), including a few automated cafes.
When an AGX module (at 60 W) is paired with a lower-draw workstation GPU like the RTX A4000 (3070 Ti) at just 140 W (200 W peak) or the A2000 Low Profile at 70 W (130 W peak), it can pack some serious computing power into a small power envelope; perfect for a driverless vehicle.
@@Kane0123 The Jetson Nano kits are pretty cheap, just a bit dated. The Orin is Ampere architecture, so RTX 3000 series, and the original Jetson Nano is 3 full architectures older at Maxwell 2, so GeForce GTX 700/900 series.
Each generation saw approximately doubled performance per GPU core, on top of fitting a lot more cores in the same footprint thanks to moving from 16nm (Nano, 128 cores) to 8nm (Orin, 1024 (NX) / 2048 (AGX)).
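Following that logic, the implied scaling works out roughly as below. This is only a sketch that reproduces the comment's rule of thumb (about 2x per GPU core per architecture generation times the growth in core count), not a measured benchmark:

```python
# Rough scaling implied above: ~2x per core per architecture generation,
# multiplied by the growth in CUDA core count (128 on the Maxwell Nano vs 1024/2048 on Orin).
generations = 3                    # Maxwell -> Pascal -> Volta/Turing -> Ampere (approximate)
per_core_gain = 2 ** generations   # ~8x per core, per the comment's rule of thumb
for name, cores in [("Orin NX", 1024), ("Orin AGX", 2048)]:
    print(name, per_core_gain * cores / 128, "x the original Nano's GPU (very roughly)")
```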
Legit responses thanks team
Great info from a great team. Sorry if it's a stupid question, but do these devices support LLMs? If yes, what is the addressable video memory? For example, the NVIDIA Jetson AGX Orin 64GB Developer Kit has 64GB and is roughly equivalent to an RTX 3050; how much of that video RAM is addressable?
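On Jetson the GPU has no dedicated VRAM; it shares the LPDDR5 with the CPU (the same unified memory other comments mention), so most of the 64GB is addressable by CUDA, minus whatever the OS and CPU processes are using. A quick way to check, assuming a JetPack build of PyTorch is installed:

```python
import torch

# On Jetson, "GPU memory" is the shared system LPDDR5, not a separate VRAM pool.
props = torch.cuda.get_device_properties(0)
print(props.name)                                   # e.g. "Orin"
print(round(props.total_memory / 1024**3, 1), "GiB total visible to CUDA")

free, total = torch.cuda.mem_get_info()             # bytes currently free / total for the device
print(round(free / 1024**3, 1), "GiB free right now")
```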
This video does not "ServeTheHome"! Some North Korean hackers must have hijacked this channel 😂
Can they run Home Assistant?
I think it would be interesting, particularly for professional applications, if they just integrated an arm CPU into their GPUs.
No PCIe bandwidth bottleneck at all.
Clickbait title 👎 starting at $500 it makes very little sense to describe it as a Raspberry Pi alternative.
The comparison could be with AMD-equipped mini PCs instead, presenting what would work as an alternative for external GPIO.
I'm honestly shocked and dismayed by this. Throwing away a lot of hard-built credibility here.
I would like to add that if you want AI these are also incredibly expensive. If you want to play with AI on last gen hardware (aka Ampere which these boards have) buy a used 3090. You will learn a lot more if you don't have to fiddle with this exotic platform.
Most of the comments are also making that comparison. But once we get past the surface similarity of both being low-power SBCs, and consider that Jetsons (aside from the Nano) aren't really sold or advertised to a "maker" audience, the fact that this is a dev kit for industrial AI edge computing suggests the fairer comparison is a Pi designed for industry, and those aren't much cheaper.
Can this product or a previous version be used as a laptop GPU?
Not that I am aware of
If we made a drinking game for every time the Pi is mentioned (for no good reason), we would be drunk.
Will you review the Flint 2?
You can get a 7840HS mini-PC for not much more than the asking price here. The APU has 33 TOPS, and it has other uses and mainstream Linux support. The 8840HS has 39 TOPS.
We use these in robotics all the time, due to their low power consumption it makes sense to put them in mobile robots
£2,358.66 - is today the 1st of April? NOT YET!
Curious how they stack up to the Khadas VIMs, Odroid H2/H3+, Kria KV260, or a second-hand "TinyMiniMicro" (possibly with a Coral).
Price/performance, but also performance per watt.
Still looking for a portable sbc for RTABMAP/SLAM.
Currently I have an Odroid H2+, Lattepanda Alpha 864s and a Jetson AGX Xavier, but don't know which to use. 🤔
Only thing I know is that JetPack on the Jetson is not a good package.
It doesn't make use of CUDA in a lot of the (CUDA-supported) preinstalled packages, and reinstalling them with CUDA support is a nightmare; but without it I don't know if the extra power draw and weight make sense.
Does anybody know a good community for these kinds of questions? I am interested in everything that has the end result of getting a building into 3D/CAD/BIM: SLAM / NeRF / Gaussian splatting / point clouds / photogrammetry / LiDAR / Floorplan2BIM / raster to vector / OCR / RTK GNSS / indoor navigation / IMU.
We want open source; we're certainly not going to help Nvidia expand their licensed, closed corporate architecture. Wow, a dev kit for 2000 dollars with an NVMe slot and a PCIe slot, what a bargain 😅. In the near future, CUDA-like AI cores will be common in most chips, probably with the difference that they are more open, with no licensing; AMD is already heading in that direction.
Really useful and efficient modules; they can also tolerate a large amount of radiation before they die.
I had no idea about that.
How? Having some of the (albeit EoL'd) boards, there is nothing unusually hardened about them. No shielding at all. No redundancies. The RAM is nothing special, with no way to correct for errors, bar the single "industrial" version that came out not long ago. And as far as I know that is merely designed to be more tolerant of a larger temperature range, vibration, etc. Perhaps someone threw a ton of metal and plastic around one and called it shielded, but other than that...
@@LackofFaithify yes, that is true; nothing in the paper seems to indicate good radiation performance. But the NVIDIA team did a great job of adding protections in each corner of the device. There are a bunch of papers talking about that, like "Sources of Single Event Effects in the NVIDIA Xavier SoC Family under Proton Irradiation" for the previous generation, and soon there will be a similar paper on the Orin family of devices. It all seems to boil down to the A78AE cores and the LPDDR5, which from manufacturing seem to have some low-level protections.
I would wonder if Nvidia might look at RISC-V over ARM in the long term. They've already done some work with it making a coprocessor that goes on their GPUs, and ARM still has license fees for using or designing CPUs. They also have strict requirements on compliance with their ISA, while Nvidia could make their own extensions at will for RISC-V, so they could make something like CUDA for their CPUs that's accelerated at the hardware level on all their chips. It also would explain why they tried to acquire ARM instead of getting a more permissive license like Apple did for their M series.
They already have issues with export controls on the GPU side; going all-in at every level with China would probably draw even more unwanted regulation onto them. And yes, that is what RISC-V is, and why they moved it to a "neutral" country to begin with.
I'd just gotten used to measuring performance in FLOPs, now I have to measure in TOPS as well? 😅
Tera Operations Per Second (1,000 GOPS) is AI/ML focused, so it counts integer operations as well.
@@Combatwhombat Nominally it's good that int operations are counted too, but it drives me mad that the number format isn't part of the name, since that basically determines the performance. Having 2 petaOPS* is fine and dandy until you can't compare it to last gen, which doesn't have that format, and you can't use it in your inferencing workload.
*int4 operations per second
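To make the format point concrete, here is a rough, illustrative conversion using ratios that are typical of recent NVIDIA tensor-core parts (structured sparsity ~2x, INT8 ~2x FP16, INT4 ~2x INT8); the starting figure is hypothetical and exact ratios vary by chip:
# Why the format matters: the same silicon yields very different headline numbers.
sparse_int8_tops = 100.0                  # hypothetical headline figure
dense_int8_tops = sparse_int8_tops / 2    # without 2:4 structured sparsity
fp16_tflops = dense_int8_tops / 2         # FP16 is typically half of dense INT8
int4_tops = dense_int8_tops * 2           # INT4, where supported, roughly doubles INT8
print(f"sparse INT8: {sparse_int8_tops} TOPS, dense INT8: {dense_int8_tops} TOPS, "
      f"FP16: {fp16_tflops} TFLOPS, INT4: {int4_tops} TOPS")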
I was wondering when somebody was gonna cover these.
It doesn't sound bad, but they are super overpriced; it shouldn't be over $1k to be worth it. If they were priced better, they could be really good. I think Nvidia could even make their own mini PC for the consumer market if they wanted to.
The Nano is $499
The primary driver to go to ARM is that they can buy the IP rights
I would love one of these for local AI inferencing, too bad they're (understandably) expensive
What you describe has just been announced as a product, called the Truffle-1.
$1299, using the 64GB model at 60W for local LLMs. A much better deal than the dev board!
@@supercurioTube interesting! On their page I went to the features and it has an exploded view of the device.
Guess what's in it. Go on.
Yeah, an Orin.😂 Well, you didn't say it wasn't. The critical difference is almost certainly the carrier board and 'case.'
I agree, they're way too expensive already. From experience if you end up in the market for one, it might seem like a way better deal from a Chinese supplier... right up until DHL charges you the tariffs. It's not just greed driving the price of these things!
I got my Xavier NX from Seeed Studio at a premium because at the time they were the only one with it in stock. I don't know how thin the market is for consumer sales of the dev kits, but it's probably not consistent.
$500 for another 32GB of RAM; they think they're Apple!
Well, given the CPU+GPU SoC design and other rumours, there's also a chance Nvidia will use the new generation not only in robotics brain units, but also in handhelds (if they manage to efficiently accelerate x86 translation), in future Shields combining mini-PC performance with a smart TV box, or, given Qualcomm's success with Arm laptops and Microsoft doing a lot of the software heavy lifting to accommodate a wide range of use cases, in Nvidia-only laptops. They could certainly make a lot of money from those three options, but edge inferencing, machine/factory automation, and the DC business would net more cash inflow, so the execs are likely to serve those markets and leave these units as dev boxes for the preliminary stages of projects before the big heavy hitters from Nvidia's AI/compute stable take over.
Heck, given Nvidia's IP portfolio, they could have made a truly unique device crossing multiple borders. Just think for a second: they have the Arm CPU architecture somewhat in-house now, they have Arm GPU designs and of course their own, they have had their own set-top box designs for home entertainment for years, and they own a lot of performant networking from the Mellanox acquisition, which drives DC/HPC rack communication but whose now less fancy older designs could easily serve the 10GbE-25GbE spectrum. They could very easily create a new Shield armed with multiple NICs/SFP28 ports driven by Mellanox IP, with management and extra compute duties handled by the Arm CPU cores and the Nvidia GPU accelerating media and AI tasks. Maybe they could even add some WiFi to create an edge switch/router/access point that could live near your TV doing the smart-TV Android stuff, or live in a rack or on a shelf in a cabinet hidden from view, allowing light console/PC-style game casting to the local network and, with Docker/OCI containers running on the hardware, doing a lot of traffic analysis or media processing... a house-AI combo device to take over all the processing needs of your surroundings ;)
NGREEDIA... The way it's meant to be paid
:)
This comment has no business being this funny.
Why mention the RasPi? This competes more with a mini-PC, even though these have GPIO.
Nvidia's method of measurement for tera operations per second is utter bullshit and I am still miffed about it.
Could Windows on Arm run on it?
DIY Nvidia Shield TV for OP 8K HDR streaming, or the most OP Google Play gaming console.
As a Linux user and champion, I reject all of NVIDIA's proprietary and overpriced rubbish.
Sure, but we show this running Ubuntu, which is a popular Linux distribution.
The ARM platform is proprietary, and the RPi has proprietary codec binaries; otherwise it would be almost useless.
@@ServeTheHomeVideo I don't use Ubuntu either; it's fine for newbie users but I consider it bloated rubbish too. I have been running Gentoo Linux for more than 20 years now; I can build Linux to my own specifications on whatever platform I need it on (and not anything made by NVIDIA).
@@HyenaEmpyema It's useless anyway since it's about 4 years behind the times. AMD's current-generation APUs have up to 8.6 teraflops to this thing's 1.2... even the basic 4-CU version of AMD's chips is over twice as fast. As an example, the SZBOX S77 is an SBC that costs $500, same as the Orin, has 7x more teraflops, much stronger CPUs, and is more power efficient... 2.5G Ethernet; even the fastest version of the Orin falls behind that board by about 2 teraflops at 3x the cost.
@@Wingnut353 I think AMD's current designs lack actual AI acceleration, but that seems to be changing rather quickly. They're adding AI accelerators to a whole slew of products, and I think RDNA4 might be the architecture to usher in that change.
What is the power compared to a 3090?
It's not a gaming machine.
Around 1/5th.
@@EarnestWilliamsGeofferic should be much less
If the next generation has faster and more memory, I would definitely get this over the Apple M Ultra. If I can get my 3090 running alongside it, then I think it will be a decent AI inferencing machine.
It won't be better than Apple's M Ultra, but it's definitely cheaper and more upgradable. The setup will also look a little jank, but it's worth it for me.
It’s not all positive though. One of the main drawbacks is that with less client base, there will be less developer support for this product. That’s the one thing that nvidia can’t miss on.
I don’t need the fancy operating system but this has to work.
Interesting you mention that; the Truffle-1, based on the larger module available, is kind of in the ballpark of an M1 Max 64GB in local LLM inference performance.
So to stay competitive with what devs would already have on their laptop today, Nvidia should provide an upgrade soon, as you suggested.
Nice to have a deep learning toy with 8GB of RAM; I'm sure we will all be competing with OpenAI with it.
Because there are lots of LLMs that will run in 8GB.
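For anyone curious what "runs in 8GB" looks like in practice: a 7B-class model quantized to around 4 bits needs roughly 4-5GB for weights. A minimal sketch, assuming a CUDA-enabled install of transformers, accelerate and bitsandbytes on the board (the model name is only an example):
# Loads a 7B-class model with 4-bit weights; roughly 4 GB of weights plus overhead,
# which is why 8 GB boards can still do useful local inference.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"   # example; any 7B-class LLM works similarly
bnb = BitsAndBytesConfig(load_in_4bit=True)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

prompt = "What can a Jetson Orin Nano be used for?"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))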
This video sponsored by Nvidia, based on replies in the comments.
2:13 - in case people actually believe this comment. It’s not sponsored.
I get the hate. But us old heads that remember the GPU wars all the way back to the Voodoo cards can often get stuck in our ways. AMD graphics cards have a long and storied history of terrible drivers, spotty support, confusing branding, and low developer uptake; while team green proved for many years to be reliable. I propose that the apparent love you're seeing for Nvidia is mostly old habit/brand loyalty.
That said, the Jetson line has been pretty much the most robust, AI-performant set of SBCs on the market for the last few years, and pretty much the only one able to utilize a GPU over PCI Express out of the box. The 16 lanes on the AGX variant permit adding another Nvidia GPU; for instance an RTX A4000 (same chip as the 3070 Ti) would add 16GB of VRAM and over 6,000 Ampere CUDA cores for only 140W, totalling a peak 200W power envelope.
How many Intel Compute Sticks or Google Coral do you need to plug into a Pi to reach the same performance?
@@Kane0123 Hardware being provided for free (even if temporarily) is sponsorship, at least in the vast majority of jurisdictions, and requires more significant disclosure than is done here. For example:
In the US, FTC guidelines require clear disclosure, even if no money changed hands, and this is not done sufficiently here.
In Canada, similarly, this would have to be clearly disclosed, for example by marking it as sponsored (receiving hardware IS sponsorship; no money has to change hands) in the creator studio, which it is not.
In the UK, this video would have to be clearly labelled as "Ad, Advert, Advertising, Advertisement or Advertisement Feature" using a "prominent disclosure label"; the YouTube marking is insufficient here.
Similarly, in Germany, this would have to be marked (with large, constantly visible text on screen for the entirety of the video) as "Werbung" (advertisement); as in the UK, the YouTube label is insufficient.
@@Kane0123 He responds to comments about the proprietary nature of Nvidia software, justifying it by stating he's running Ubuntu. Running an open source OS has no bearing on, or relation to, the closed-source nature of Nvidia software. The whole thing just smells.
@@zivzulander you're missing the point.
The FTC doesn't make laws, but that doesn't make their rules non-binding. I can't link things on YouTube; it's as simple as a Google search. You can find an 8-page PDF of guidelines, and an even more expansive document on the rules those guidelines are based on, the latter of which is binding and enforceable. For example, the Contact Lens Rule, which requires prescribers to provide patients with a copy of their prescriptions after fitting, is actually an FTC rule, not a law. A lot of the applied framework of COPPA was also defined by FTC rule, not by the Act of Congress, and the FTC was responsible for a 170 million dollar case win against YouTube for COPPA (rule, not act) violations.
Saying "thanks to Nvidia for letting us borrow X" is not sufficient disclosure according to the FTC guidelines. Heck, just using #ad or just marking the video as an ad in creator studio is insufficient (at least according to their guideline article).
It is never made clear that this video is an advertisement, or that sponsorship is involved. It is intentionally obscured to the maximum possible, whilst staying in the realm of plausible deniability when it comes to breaking FTC rules (which most certainly were broken).
Furthermore, the FTC recommends this disclosure happens in both video and audio, the video part is lacking here.
Again, FTC guidelines require, "clear", "simple" and "hard to miss" disclosure. Just the fact that people in the comment section were asking if the video was sponsored should be evidence enough that the disclosure was insufficient.
There may be something here if the prices were comparable, but this isn't close.
8:11 204.8GB/s!!??!! Yikes. With 64GB of RAM it's a lot cheaper than a Mac mini. Plus you can add extra storage. If anyone has tried to run a macOS ARM Hackintosh on it, a video on that would be cool.
Also, I hope Nintendo Switch games can be "made" to run on it.
I wonder why NVIDIA didn't go with RISC-V when the ARM deal fell though.
They are already dealing with the export bans on the GPU side, probably don't want to get involved on a cpu front as well.
RISC-V has a long way to go. It's not even close to being usable.
You probably can (not) get one of those developer kits, as they’re sold out everywhere 😂
They are on Amazon right now, but you are right, it might be worth waiting for a bit
Orin is nothing new. And why should I buy this, with a locked-down ecosystem and next to nothing open source? Trash and a dumpster fire for nearly all users that need things like this.
Yeah, no. I'm trying to keep as much 'AI' out of my home network as I can.
All tiny boards always stop at 64GB of memory. I wonder why.
$2,000, yet still the cheapest thing Nvidia makes with 64GB of RAM. Wonder what the inferencing speed would be on it.
On the Truffle-1, based on the 64GB module, the company spec'd 20 tokens/s on Mixtral 8x7B (quantization unspecified).
You can look up this product for more figures.
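That 20 tokens/s figure is roughly consistent with a memory-bandwidth-limited estimate. A back-of-envelope sketch, assuming ~4-bit quantization (the vendor doesn't say) and the AGX Orin's published 204.8GB/s bandwidth:
# Decode speed on boards like this is usually memory-bandwidth-bound:
# tokens/s ~= memory bandwidth / bytes of weights read per token.
bandwidth_gb_s = 204.8        # Jetson AGX Orin 64GB LPDDR5 bandwidth
active_params = 12.9e9        # Mixtral 8x7B activates ~12.9B parameters per token (2 of 8 experts)
bytes_per_param = 0.5         # assumed ~4-bit quantization
weights_gb = active_params * bytes_per_param / 1e9
print(f"Theoretical ceiling: ~{bandwidth_gb_s / weights_gb:.0f} tokens/s")  # ~32 tokens/s
# 20 tokens/s measured is plausible once real-world overheads are included.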
They should just make Arm boards and skip the Nano form factor; hopefully they will.
NVIDIA is moving to Arm. We have covered this on the STH main site a few times over the last 2-3 years.
They're two completely different use cases.
Leaving aside everything else, the lack of proper software support for uses outside of what Nvidia deigns acceptable for us plebs makes this pretty much DOA for many if not most tinkerers or enthusiasts.
There's also very little reason to believe that Arm is really that much of an advantage. AMD is very close to Apple silicon for performance per watt in general tasks, and the advantages Apple has have very little to do with the ISA. Apple dedicates a lot of silicon to what are essentially ASICs used heavily in some tasks, and ASICs and accelerators will always be more efficient than general compute regardless of the ISA.
I'd rather spend more on a fully supported X86 SBC or save money and buy a different ARM device.
It feels like Nvidia is just dumping unsold silicon on mugs with this.
I had to downvote this. Very, very few people developing for RPi5 have any reason whatsoever to jump to these. Everything from power consumption to price per unit, hardware support to software support, is inferior with these devices. Honestly, I hadn't expected such a stupid comparison from STH. It's disappointing.
GTC 2024 is next week.
Similar specs to the Switch 2.
Folks with PCs (sorry Apple fans) can get a cheap RTX A2000 (12GB version) and get started with AI.
Totally, but NVIDIA is moving to Arm CPUs so this is a next level. The Apple folks using Mac Studios have some great capabilities, albeit not NVIDIA.
@@ServeTheHomeVideo Apple has their own complete ecosystem with their own operating system and to-the-metal optimisation. They're also converging and streamlining the technology and IP of their smartphones and PCs to make programming simpler.
The ISA used is only a minor factor. Arm has some power advantages at smaller scale, but it hasn't taken over like it was predicted to, and AMD has said they see Apple silicon as more of a performance competitor than Intel and have significantly closed the gap in performance per watt.
It seems like Arm has a lower power floor than x86, but newer AMD SoCs have similar performance per watt to Apple when the M chips aren't using dedicated silicon accelerators.
Arm will never go away, but there is no reason to believe it is likely to grow its market share much further.
PLCs and small industrial controllers are already switching over to RISC-V, which will offset some of the growth in larger Arm chips.
@@ServeTheHomeVideo Nvidia might have an ulterior motive: Intel is a competitor and Gaudi 2 is proving to be a serious contender. Intel is also adding an NPU to its mobile processors. However, perhaps the main "threat" is oneAPI, which may give CUDA a run for its money!
Hehehe interesting timing 😂
This feels like you're comparing a Toyota to a Ferrari... sure, the Ferrari is better in almost every way, but it's also 40x more expensive... and probably consumes 40x more gas as well...
It is like 5x more for the Orin Nano but has things like the M.2 slots that you would have to add to the Pi
Clickbait title, nonsensical comparison. 👎
Nvidia should make an ARM gaming PC with lots of RGB lighting, and sell #SUBSCRIPTIONS to maximize profits.
They already sell enterprise GPU subscriptions for $4500/ GPU/ year.
Return to monke! No way is that worth the cost to the typical hobbyist.
I hope NVIDIA paid you a lot for this ad. This hardware is NOT a replacement for a raspberry PI at $400 and it is NOT a good AI platform for 99% of use cases. It is very overpriced and models will not easily run on these chips. Hell they don't even have NVIDIA's last gen architecture. I am used to ServeTheHome having better content.
The Raspberry Pi isn't even that great in the first place. Plenty of others options out there.
Lol, a $500+ SBC... OK, thank you. 😂😂😂😂😂
$299 at GTC today
I literally won't use either. May go for a BeagleBoard instead.
...not for sale? ... not relevant to me then.
"Borrow these"?. How cheapskate is Nvidia ?. Shocking. Do they not know that they get at least 10X value from this video alone ?. Just Patrick mentioning this product is going places for them!.
NVIDIA isn't a small company. At the moment, this video's reach is limited. They have more views on their release videos.
People have the right to dislike proprietary drivers and high-priced accelerators, but they shouldn't shun others for interest in the entry-level tools for self development. Stop shouting at clouds.
These development boards would be more useful if they used a RISC-V extendable CPU, a coherent FPGA ISA/DMA fabric, and a dedicated debug/observability CPU with 10GbE telemetry IPMI. On batteries, it would be useful to have a microSD IRQ backed by tunable/extendable capacitors/battery to save critical data on power loss/recovery.
This is some serious click bait nonsense
YT is not letting me edit my other comment, so I'll make this one: I've learned the hard way that it can be anything, from 16 bits in small industrial systems, through 64 bits as standard DDR sticks have, to the 256 bits of Apple's M series, and even higher.
forget that junk - go with Risc-V
RISC-V has a long way to go before it's more than a tinker toy and not a joke.
I do a lot of camera stuff and found the Jetsons are just so bad. Unlike the RPi, they have no reliable ecosystem.
💀💀💀💀💀💀👽
Disappointed by the improper sponsorship disclosure; you've lost a LOT of goodwill with this video. Hardware being provided for free (even if only temporarily) requires disclosure basically everywhere:
In the US, FTC guidelines require clear disclosure, even if no money changed hands, and this is not done sufficiently here; you never even mention the word "sponsored".
In Canada, similarly, this would have to be clearly disclosed, for example by marking it as sponsored (again, receiving hardware IS sponsorship; no money has to change hands) in the creator studio, which it is not.
In the UK, this video would have to be clearly labelled as "Ad, Advert, Advertising, Advertisement or Advertisement Feature" using a "prominent disclosure label"; the YouTube marking is insufficient here.
Similarly, in Germany, this would have to be marked (with large, constantly visible text on screen for the entirety of the video) as "Werbung" (advertisement); as in the UK, the YouTube label is insufficient.
@@zivzulander FTC regulations are not laws or statutes, these are guidelines set out by a federal agency (the competency for which has been delegated to them by Congress in the FTC act). This is, however, still binding (as it is within their delegated powers to regulate advertising, and can result in sanctions). This is typical in the US and also the case for agencies like the FCC, for example. Can't link it here because it gets filtered by YT but there is a FTC guidance article (on FTC dot gov) that explicitly mentions that "thanks" should not be used in these disclosures, as it can be "vague and confusing". Saying Thanks to Nvidia for letting us borrow X is blatantly misleading, it should just be something along the lines of: Nvidia has provided these units (for review), as it is the disclosure massively downplays Nvidia's relation to this piece of content. The disclosure made here is at the least very poor, certainly in violation, and a rather surprising & out of character bout of incompetence, or wilful ignorance.
Additionally, you must follow the guidelines of any country your video is broadcast in. If you're a non-US creator, the FTC guidelines still apply to you (if your video is available in the US, for example). The same goes for European regulations. Videos like this should ALWAYS be tagged as sponsored in the creator studio, as a sponsorship objectively exists, and not doing so most likely violates YouTube's guidelines as well (although YouTube's usual shenanigans mean they only enforce this when it's convenient for them).
The Jetson Nano is arguably costlier than a Raspberry Pi, and it's better for educational purposes. Do you agree?
Short Nvidia is shit and overpriced.
A promotional video without saying it's sponsored is illegal.
2:13 - don’t be a dick because people will believe you. It’s not sponsored
Correct. It is not sponsored by NVIDIA. NVIDIA never asked us to do it. I just thought it is cool and wanted to share.