There is some info from NVIDIA saying the PCIe slot does not support an external GPU. So can you please tell me what the PCIe slot on the NVIDIA Jetson is meant for?
The NVIDIA Clara platform uses a GPU card in a PCIe slot. There are many PCIe peripherals besides GPUs, like network appliances and storage solutions. Since developers are designing their own products around the Orin, they can add PCIe peripherals easily without having to design their own PCIe extenders. Thanks for watching!
Is this a computer? Could I use this instead of creating a single cpu workstation? I have a Wacom and four other screens. Would this allow me to run the Wacom on its own cpu and the other screens on other cpus?
This is a development machine for creating applications that run in low-power environments. It is not really suited for what you are describing; you are better off with a regular desktop computer. Thanks for watching!
Yes, you can order them today. Because of the Covid lockdown in China, there may be some shipping delays. I have seen them at Arrow, Seeed Technologies and Silicon Highway. Thanks for watching!
The CPU is around 1.5 to 2X faster. I think most people overlook the system capabilities. CPU and GPU performance are very much tied to memory bandwidth. The Orin has LPDDR5 main memory, which is about 1.5X faster than the previous LPDDR4. To me, it feels significantly faster. Benchmarking the Jetsons is tricky because of DVFS, which throttles performance to fit in the energy footprint requested. For example, it will not run the CPU at full throttle until there's a certain amount of workload for a certain amount of time. That is, unless you turn on performance mode. The Jetsons do that in order to save energy. People spend a lot of time tuning the energy modes (e.g. 15W, 20W, 50W, MAXN) to match their requirements by balancing clock speeds, number of cores and so on. It's quite complicated, and to be honest, doing it correctly is a little beyond where I am.
I know it didn't sound like it, but that was me excited when I saw the ASR results. It's beyond impressive to do that while running another large model at the same time. Thanks for watching!
You are welcome. This is a pre-production model, so the numbers might not be the same on a production unit. From /etc/nvpmodel.conf: in 15W mode, 4 CPUs are online with a maximum frequency of 1113600 kHz (~1.1 GHz). The GPU max frequency is 420750000 Hz (~421 MHz). Thanks for watching!
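For anyone who wants to poke at these modes themselves, here is a minimal sketch (not from the video) that shells out to the stock L4T nvpmodel and jetson_clocks tools; the mode IDs and the exact contents of /etc/nvpmodel.conf differ between Jetson models, so treat the specific numbers as assumptions:

    # Minimal sketch: query and switch Jetson power modes with the stock tools.
    # Mode IDs come from /etc/nvpmodel.conf and vary per device, so check yours first.
    import subprocess

    def current_power_mode() -> str:
        # 'nvpmodel -q' prints the active power mode name and ID.
        result = subprocess.run(["sudo", "nvpmodel", "-q"],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    def set_power_mode(mode_id: int) -> None:
        # Switch to another entry from /etc/nvpmodel.conf (e.g. 0 is MAXN on the AGX Orin devkit).
        subprocess.run(["sudo", "nvpmodel", "-m", str(mode_id)], check=True)

    def max_clocks() -> None:
        # Pin clocks to the maximum allowed by the currently selected power mode.
        subprocess.run(["sudo", "jetson_clocks"], check=True)

    print(current_power_mode())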
@@JetsonHacks No doubt. I think it might be a bit ahead of me. I would love to put one of these on my Annie robot, but I think I need to watch more of your videos for direction. Getting all of these things connected and working together in ROS is quite complicated, to say the least.
They aren't in the same league. Remember, the Orin only uses 60 watts; the machines you mention don't even get started with less than 10X that. While you can train ML models with the Orin, the real purpose is to run multiple inference nodes on pretrained models. For example, in robotics where you have to do vision recognition, object detection, path planning, segmentation and such all at the same time. Thanks for watching!
@@JetsonHacks If I want to use the Jetson Orin AGX to process 4 IP cameras, detect people (YOLOv8m) and track them (SORT) using DeepStream, what resolution and how many frames per second (FPS) can I use? What about a Jetson Orin NX?
@@marceloraponi8566 That's not an easy question to answer. It's a development question which depends on a variety of factors, including the skills of the developers. Typically you would use DeepStream to both detect and track objects. It will depend on which resolutions you choose, the power envelope you need to work in, the frame rates and resolution of the IP cameras in use, network speed and so on. You can get a better estimate from the official NVIDIA Jetson Orin forums, where a large group of developers and NVIDIA engineers share their experience: forums.developer.nvidia.com/c/agx-autonomous-machines/jetson-embedded-systems/jetson-agx-orin/486.
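As a purely illustrative back-of-the-envelope check (not from the video; it borrows the ~536 FPS PeopleNet figure quoted later in this thread, and real DeepStream pipelines add decode, tracking and batching overhead, while YOLOv8m is heavier than PeopleNet):

    # Rough capacity estimate: how many camera streams fit in a detector's measured
    # throughput. All numbers are illustrative assumptions, not measurements.
    def max_streams(detector_fps: float, camera_fps: float, headroom: float = 0.5) -> int:
        # Reserve 'headroom' of the budget for decode, tracking and on-screen display.
        return int((detector_fps * headroom) // camera_fps)

    # e.g. a detector benchmarked at ~536 FPS, cameras delivering 30 FPS each
    print(max_streams(detector_fps=536, camera_fps=30))  # -> 8 streams with 50% headroom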
Why is that sad? I think it's great that you can get such great performance on the desktop at that price point! The AGX Orin is for a different market entirely, purpose-built for embedded products like robotics, autonomous vehicles, and smart video analytics. The Jetsons have 1 million developers, and there is an unbelievably wide variety of uses. Thanks for watching!
The 3050 has only 8GB of GPU memory, so it limits the size of the TensorFlow model. It is 8X faster on the Jetson Orin if your TensorFlow model requires 32GB of GPU memory.
@@Haskellerz If you've reached the point where you need a 400W+ graphics card, you should buy a proper case for that 6000€+ baby, because I assume you're doing this with a Quadro or WX using ECC 😗. And tbh those video cards also require CPU power if you don't wanna starve your GPU; we usually use dual Xeon CPUs as well as 128GB+ RAM for the CPU alone, because streaming 32GB of tensor data back and forth requires a whole lot of it. I don't see how this board is gonna handle the job, really. I personally use 2 x K6000 alongside 2 x P4000 and 1 x RTX 4000 + dual Xeons with 256GB, a full-ECC 10-year-old machine!
@@goldnoob6191 If you use unified memory on the Jetson Orin there is no latency between CPU and GPU, so no memory transfer between them. Looking at the GPU prices... a P4000 or K6000 is around $1500, so either 2 x P4000 or 2 x K6000 costs more than the $2000 Jetson Orin. Personally, I use Google Cloud right now because GPU prices are bad. I really need a good GPU with lots of memory.
@@JetsonHacks I hope so 🙏🏻 (but it's too expensive for me), and I am always watching your videos. I am waiting for your deep learning and GPU-based OpenCV videos :)
@@MrFirzen91 I've worked with the NVIDIA Deep Learning Institute on different courses, some of which are free! Check it out: www.nvidia.com/en-us/training/
I hear you, the flagship models usually are. However, it's still $500 less than the AGX Xavier was when it was introduced in 2018. The Orin NX modules should be out in Q4, and they start at $399. Thanks for watching!
@@JetsonHacks yeah, luckily I'm in school still (I have nanos and agx Xaviers) and get an education discount. Still working on a very fun and nearly complete couple of projects with ML and robotics.
Full article now up on JetsonHacks! Looky here: wp.me/p7ZgI9-3g7
Do they make something smaller in form factor? Without giving up too much?
@@77Avadon77 NVIDIA has announced that they are doing something similar to the Jetson Xavier. There are two versions of this one, the AGX, and two versions of an NX module. The NX is the same form factor as the NVIDIA Jetson Xavier NX. Pricing starts at $399 for the Orin NX. These will be shipping in Q4 2022. Thanks for watching!
Came in for an AGX demo, left with motivation.
Good old Marcus has some pretty good things to say. Thanks for watching!
It should be noted that the RIVA feature is now an enterprise only feature (or API based pay as you go feature).
Thanks for sharing this. This is important to a lot of people. Thanks for watching!
Thanks!
You are welcome and thank you kindly for your generous donation! I'm glad you enjoyed the video.
A Jetson video is always welcome; now all we need is for Jetsons to actually be available for purchase, lol
Thank you for the kind words. Yes, it's frustrating about availability, but NVIDIA assures me they are working on it. Thanks for watching!
I know it has been 2 years, and the problem still isn't resolved :D. I ordered a Jetson Orin Nano 8GB and it took 6 months to arrive due to stock shortages.
You are the Jetson Pro. I was looking for this information. Thank you so much!
Thank you for the kind words, and thanks for watching!
Oh my goodness I need one of these! Interesting design choices with choosing Display Port over HDMI and mixing/matching USB generations, but overall I like all of the hardware inclusions and expansion possibilities over the previous Jetson variants. Great review as always, Jim!
I agree, you need one of these. I'm sure it was quite challenging to design this product since it was during the Covid shutdowns. Hard to tell if different design choices might have been made otherwise. You and I should work on a project together! Thanks for watching!
@@JetsonHacks Interesting point! I had not considered that this was all done over the pandemic. We definitely should collaborate on a project in the future - I would like that very much!
I was anxiously waiting for your unboxing, setup and demo of the Orin kit, thank you!
Thank you for your kind words. You are very welcome, and thanks for watching!
Wow. Got here 2 years later and am speechless. Wow
It's still pretty cool stuff. Thanks for watching!
Well done for ditching HDMI and going with DisplayPort instead. DP is superior in all respects, not to mention that it is free of patent-troll licensing fees.
In general, a good idea. Saves some room and some money. Thanks for watching!
My Jetson Nano is now left in Kharkiv (Ukraine), which is bombed by Russia every day. Home Assistant is installed on the device and I can remotely control our house. But every time it is very scary to turn on the camera and see whether the house is still intact or not. I'm afraid this terrible moment could come at any time. We see our beloved cat, which was left alone with a fish and a hamster in the house. Our family happened to be out of the country when the war started. We didn't intend to leave for long. But my parents and my sister on crutches remained in Kharkiv. They come to feed the animals between bombings. They have been living in the subway for more than a month, sleeping in turns on a bench in the train. It's horrible. They are afraid to leave the city. They are afraid that my sister on crutches might fall somewhere in the crowd during the evacuation. They are afraid to leave their house unattended. And they are afraid of a new country where it is not clear how they will live. We have tried to persuade them to agree to the evacuation, but they do not agree. And this hurts my heart. The Jetson Nano connects me to my house, and I so want everything to end well quickly, so I can safely do what I love, for the soul, together with my family. People, appreciate peace and your loved ones nearby. There is sadness in my heart.
Sorry to hear about your situation, but it is good to hear that you are safe. Hopefully your Nano will keep an eye on things till you get home.
I'm really sorry about what's happening; you didn't do anything to deserve this. I hope your beloved pets survive. Be strong, and remember the entire world is thinking about Ukraine now.
Thank you very much for sharing your great content! :)
You are welcome, and thanks for watching!
3:13 that's exactly what happened! Great quality content
Thank you for the kind words. Preach the truth! Thanks for watching!
Can I ask where you got the pretrained PeopleNet model and notebook? I got a Jetson AGX Orin myself and wanted to try running the model. I can't seem to find the notebook on NGC. Thanks for the video!
This is from a pre-production AGX Orin that NVIDIA provided for the review. I do not know if they packaged the demo for distribution. However, it seems similar to: catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet
Thanks for watching!
Awesome video -- based on this, I picked up the Orin dev kit for myself!
I notice that your Riva output is much nicer than the output from the python client (at least the asr client), which just spits out line after line. Yours is also doing punctuation properly. How did you get such nice output?
Congrats on your purchase! The ASR demo came with the device from NVIDIA. It was a prerelease, it's hard to tell what might have changed. Is this video: ua-cam.com/video/FZWgQvI7MxE/v-deo.html what you are experiencing? Thanks for watching!
Yes, that matches the output I am getting.
@@louieearle I would guess that there are switches deep in the bowels of the beast that can make it perform like this video, but I've just scratched the surface on working with it.
Thanks for taking the time to respond. Your videos are really great quality, and I like your sense of humor - Marcus Aurelius and ripping on Kamala Harris for the win!
I hope they release another product in the Jetson nano price class.
The next Jetson Nano, cleverly code-named Jetson Nano Next, is scheduled for early next year. I'm sure there will be much more fanfare when we get closer to release. Thanks for watching!
@@JetsonHacks Awesome. Glad to hear it.
@@TheRealFrankWizza It should be quite interesting.
Thanks for the demo. How many such streams do you think it can support simultaneously? Looking at this performance I would guess even 20+? Thoughts?
I think that's a pretty complicated question. A lot depends on how you acquire the streams. Just handling the bandwidth might take a significant amount of time, depending on where the source is coming from (Ethernet, GMSL, PCIe and so on). There's a lot of compute power available, but there's a significant amount of design work in optimizing gathering the data, processing it, and, just as importantly, deciding what to do with it once you're finished. Thanks for watching!
Thank you for your content, it is very useful. Could you share the weight of the kit without the power supply?
It's about 887 grams, give or take. Thanks for watching!
Would love to see some speed comparisons in either training or inference, especially compared to desktop-class GPUs.
Thanks for the suggestion, and thanks for watching!
I am really curious about the training performance for small networks, like 10M parameters. It seems that the Orin has 4 TFLOPS of performance for training, which would be enough to fine-tune smaller parts of a big network.
@@adamrak7560 I think it would probably be fine for something like that. Big picture, in order to leverage all the models that companies like NVIDIA provide, you end up doing some type of cloud/desktop/Jetson strategy for training. The Jetsons will be very happy inferencing.
Reinforcement learning with a few hundred or perhaps a thousand images would be acceptable. But the models that NVIDIA has trained and made publicly available have been trained in their supercomputer data centers, with resources that just aren't available to most. You definitely want to take advantage of that.
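To give a sense of what that looks like in practice, here is a minimal on-device fine-tuning sketch (not from the video; it assumes a recent PyTorch/torchvision install, and the backbone, head sizes and random stand-in batch are illustrative assumptions; swap in a real DataLoader over your few hundred images):

    # Sketch: fine-tune only a small head on-device while keeping the large
    # pretrained backbone frozen. All sizes and data here are stand-ins.
    import torch
    import torchvision

    device = "cuda" if torch.cuda.is_available() else "cpu"
    backbone = torchvision.models.mobilenet_v3_small(weights=None).features.to(device).eval()
    for p in backbone.parameters():
        p.requires_grad_(False)          # freeze the big pretrained part

    head = torch.nn.Sequential(          # small trainable head, well under 10M parameters
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(576, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, 2),
    ).to(device)

    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    # Stand-in batch; replace with a DataLoader over a few hundred labeled images.
    images = torch.randn(16, 3, 224, 224, device=device)
    labels = torch.randint(0, 2, (16,), device=device)

    for step in range(5):
        with torch.no_grad():
            feats = backbone(images)     # frozen features
        loss = loss_fn(head(feats), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(step, float(loss))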
Really nice. Thank you. I'll buy one when it is available in France.
You are welcome. I believe you can get it from Silicon Highway: www.siliconhighwaydirect.com/product-p/945-13730-0005-000.htm
Thanks for watching!
This gave me the big wow too. I hope you're going to be able to keep the AGX Orin. Talk about sticker shock. For a company with money set aside for product evaluation this isn't much to spend, but for those of us who are married and build projects at home, that's a hefty price tag. Unfortunately my AI projects are going to be relegated to the Jetson Nano for the time being.
It would be nice if NVIDIA provided a form that users could access to suggest what we would want for external connections and features. An SSD connector and the ability to boot from SSD would be nice. I think more USB-C with the ability to send video would be a good option. I'd like to see how this device would work, once recognizing a person and a face, taking a snapshot of the face and identifying the individual from a database. Just my thoughts.
Thanks for expanding my horizons with the Meditations of Marcus Aurelius. You're clearly well rounded with various interests. I've already downloaded it to my Kindle for reading. I'm sorry about coming late to the party, but at least I got here, and it's been quite informative.
You take care of yourself, and think up some projects for the Nano. I think it would be good to use several in a project, with one being a master control unit running the main application and a few others reading sensor information and manipulating control devices such as valves, motors or relays, etc. These could have the ability to make rudimentary decisions independent of the master control unit. The Ethernet connections would make a good communications backbone. Just some thoughts. Peace out.
Thank you for taking the time to write this. It's been a little perplexing to me that people assume that all of the products introduced in the Jetson line should be targeted towards entry level users. Personally, I expect that the new hotness is always at a premium, and people that need or can afford it are the early adopters. That's somewhat magnified because the Jetsons aren't commodity items, so the product cycles are longer. I'm the first to agree that $2K isn't hobby level compute money.
Of course it's messed up that you can't buy an entry-level Jetson because of parts shortages. Raspberry Pi is having the same delivery issues, so it's not an NVIDIA-specific issue. There's a variety of reasons (and a lot of yelling, I would guess) as to what they can get a hold of, and what they should do with the product they do get. If you've been in the industry, you can probably guess that they have contracts with some partners which incentivize them to deliver to those partners first.
At the same time, NVIDIA has been clear about when the next generation of the Nano is coming (next year, I would guess in March at GTC). It's been an unreasonably long wait because of the lack of availability of previous generation products. But next year, after the rush is over, my guess is that people will be back to complaining about the usual things instead of pricing for the high-end stuff.
I'm working on a series on how to use a Raspberry Pi Pico with the Nano. The Pico is a fun little microcontroller which allows the easy addition of a whole bunch of fun sensors, motors and such. There are a lot of people working with the Pico, so it's about imagination and skill more than anything else. Plus, the RPi foundation is at its core an educational organization, so there's documentation galore.
Just curious about Arrow Orin AGX orders (Arrow being the main/only provider for the NVIDIA Orin in the USA): has anybody received theirs already? Arrow mentions "ships today" on the website, but it's on backorder. (My order shows shipping in Feb 2024.)
Right at launch, China locked down over Covid. It will take a little time to get going again. It's certainly worth calling Arrow, Feb 2024 does not sound correct. Arrow is highly automated, my understanding is that they had to do quite a bit of work on their system even to just allow back ordering in the Orin case. Thanks for watching!
@@JetsonHacks Thanks for your reply. I already called them; there's no final shipping date, just waiting (ordered on Mar 23). Congrats on your great channel. You got a new subscriber.
@@javqui410 Thank you for subscribing!
Same boat. Feb 2024 is ridiculous. 😀😀
May I ask whether the PeopleNet you demonstrated in the 8th minute is a demo that comes with the Orin, or is there a library that you can download? 😀
The demo was provided by NVIDIA, but you can download PeopleNet from catalog.ngc.nvidia.com/orgs/nvidia/models/tlt_peoplenet
Thanks for watching!
@@JetsonHacksThank you so much ! Your videos are very useful to me!
@@qyy2889 You are welcome, and thank you for the kind words.
Thank you for the impressive demo. How did you install Riva? Do you have a link to the instructions?
You are welcome. This is a demo unit, Riva was not installed from a publicly available repository. You should be able to get a better answer from the official NVIDIA Jetson forums on the RIVA schedule. Thanks for watching!
Now if only we could get Gentoo working on this thing!
It should be possible with JetPack 6. Thanks for watching!
It's only been 3 years since I started using the Xavier, and the Orin is already out, with several times the performance... amazing.
Thanks for watching!
Was that inference running on 6 actual 1080p video feeds? Wow.
For the demo, it is running on 6 1080p video files. I think this is probably more difficult as you have to load them, decode them, manage buffering and so forth before inferencing. Video streams you only really need to route and decode. Thanks for watching!
@@JetsonHacks oh even better! Just ordered one from work, can't wait to try it out. Thanks for your reply.
@@flyboi86 You are welcome!
You said "you don't need a host machine to install JetPack, but will for many other things". I guess you were talking about the micro USB debug port. What else is it used for?
Thank you for your videos
You are welcome! For example, you may want to boot the AGX Orin from an SSD. You will need to install JetPack on a host in order to do that and to reflash. Most of the production tools for flashing Jetsons run on an x86 host. Thanks for watching!
Could we get a showcase of the full GPU enabled in a benchmark and/or gameplay scenario?
I want to see what all that extra cache does versus Desktop Ampere's equivalent (the 3050 Laptop) when running the full 14-16 SMs (I don't know how many are in this Orin config, because NVIDIA's PDF documentation obscures the fact that there is a CPU and GPU difference between the 32 and 64GB models).
Primarily because AMD adding a lot of cache to their GPUs with RDNA2 gave them massive performance increases, so NVIDIA doing it here would be interesting.
Maybe 3DMark with the GPU running at 768 MHz, 1 GHz, and full clock?
(Also a side idea: is it possible to disable SMs in a benchmark to bring it down to 12 SMs? If so, could you do that and run the 768 MHz and 1 GHz numbers through a benchmark like 3DMark or something?)
Sorry if that is a bit much, just very curious, as this sort of thing can give us a preview into Lovelace, which also has the massive L1/L2 cache change that Orin does, but seemingly even bigger.
I'm terrible at running benchmarks. Usually I rely on Michael over at Phoronix to run reasonable Jetson benchmarks. Because of the shipping challenges, it will probably take a little while for those benchmarks to be published. Thanks for watching!
@@JetsonHacks Alright, at least maybe answer the question I asked in reply to the 15W clock question?
That is, how many GPU cores are active in that config: 4 CPUs at 1 GHz with how many GPU cores/SMs active at 420 MHz?
@@Alovon I'd only be guessing, I've only spent a little while on the machine. Please ask the question in the official NVIDIA AGX Orin forum, where a large group of developers and NVIDIA engineers share their experience. The NVIDIA folks will know the real answer.
Quick question for JetsonHacks: does the WiFi card have BT, or just WiFi?
The WiFi card has BT capabilities also. Thanks for watching!
It would be awesome to see some benchmarks of heavy models like Detectron2 on this. The previous model wasn't real-time at all. My hope is that this one can be.
Could you help me to understand how you are using the term "real time" in this context?
@@JetsonHacks I mean being able to process live video feed without dropping frames. Average feed would be something like 1080p@30 fps.
@@JetsonHacks I realised I was very unclear about what I mean by real-time processing. Of course, if the model is optimised for TensorRT it will be a few times faster than a generic TF model.
@@AI-xi4jk NVIDIA makes available some benchmarks for some of the more popular models. They take a couple of hours to run on the AGX Orin full tilt. PeopleNet detection like that used in the video inferences @ 536FPS. The AGX Xavier is ~195fps. In real life when you see the demo, it looks real time with that many video streams. The video recording is limited by the hardware capture device I use.
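For readers who want rough throughput numbers of their own, a minimal PyTorch timing sketch follows (not NVIDIA's benchmark harness; their published numbers use TensorRT engines and the DLAs, so plain PyTorch will come in lower, and the stand-in model and input size here are assumptions):

    # Minimal FPS measurement for a single model on one GPU, plain PyTorch only
    # (the official Jetson benchmarks use TensorRT engines instead).
    import time
    import torch
    import torchvision

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torchvision.models.resnet50(weights=None).eval().to(device)   # stand-in model
    batch = torch.randn(1, 3, 540, 960, device=device)                    # assumed input size

    def sync():
        # Only synchronize when actually running on a CUDA device.
        if device == "cuda":
            torch.cuda.synchronize()

    with torch.inference_mode():
        for _ in range(10):              # warm-up so clocks and caches settle
            model(batch)
        sync()
        start = time.perf_counter()
        iters = 100
        for _ in range(iters):
            model(batch)
        sync()
        elapsed = time.perf_counter() - start

    print(f"{iters / elapsed:.1f} inferences per second")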
Great review! Thanks for sharing! I really want to know whether that PCIe slot can support a regular NVIDIA graphics card for a basically unlimited performance boost.
Thank you for the kind words. People have talked a long time about that very thing. As I understand it, that's a feature that people are working on. However, there are several issues still. You can probably ask on the NVIDIA Jetson AGX Orin forum, and maybe one of the NVIDIA engineers will have an answer. Thanks for watching!
It will theoretically support it, but you need to make sure there is an ARM driver with a compatible kernel version for your card. Furthermore, the slot won't provide enough power, since the whole unit maxes out at 50/60 watts. You'll need an external power source for your card. The NVIDIA Clara platform does something somewhat similar. If you make it work, pls share ;)
If you put another graphics card on it, the price/complexity budget is probably better spent on a normal ATX board with two GPUs. You need the power for the GPU anyway, and you get a form factor and non-ARM tooling that are proven. Just because a graphics card fits in there, I would not try to force it to work; rather, change the tooling choice. My 2ct 🤷🏻♂️
Jim, it looks like the elusive JetPack 5.0 is there. You got a pre-release?
Yes, all the Orins ship with JetPack 5.0 Developer Preview (DP) for right now. Even though this particular device itself is pre-release, I was able to install the latest pre-release. The GM will probably be released by the end of this week.
Typically NVIDIA shares pre-production with reviewers before launch on most of their products. That does a couple of things. 1) Gets product to reviewers so that they can get articles and videos out. And 2) it helps iron out the start of the production process. I don't know what the volume is for more popular products, but most companies will run a small batch to test the manufacturing/quality control which they then either give away or destroy/recycle. Depends on the company, but most destroy/recycle for obvious reasons.
Impressive... I'm a newcomer to the Jetson kit, however. Would it be feasible for it to drive custom AR/VR applications and 3D graphics? I see the PCIe slot... could you potentially power a GPU on it?
I believe that several people have been working in that area with previous generation Jetsons. Because the video and GPU are already built into the Jetson, people tend to build stand alone products with them. The Jetsons are a building block, and of course you can extend them however you need. Thanks for watching!
@@JetsonHacks thanks for the reply...great content! To sum up, are there any comparison benchmarks on how this would compare to a 3090 GPU? Looking to run an embedded unreal engine app.
I don't know of anyone who has done direct comparisons yet. To be clear, that's a comparison between two different classes of machines. The 3090 has 5X the CUDA cores, much faster memory, more memory bandwidth, a wider memory bus, and uses about 10X more power for the GPU alone. A 3090 configuration calls for a minimum of 850 watts, and probably well over 1000 in practice. The Jetson operates at up to 50W (think battery power).
@@JetsonHacks Understood...I'll just have to do some testing to see how far I can push this kit. Thankfully Arrow had one in stock. Thanks for entertaining my questions. Subbed!
@@AC-ed1jz Welcome to the Jetson community! There's a learning curve getting started, but there's enough people to help so it shouldn't be too bad.
Which translation of Marcus Aurelius were you reading? Please tell meee.
Translated by Gregory Hays. Amazon link: amzn.to/3T60BiB Thanks for watching!
Thanks for the overview of Orin! I have a question about the PeopleNet notebook you showed - is that something you wrote? I can't find anything that looks as straight forward to use PeopleNet on NGC or TLT. thanks for posting all your videos - they have been very helpful!
You are welcome and thank you for the kind words. The PeopleNet demo is a demonstration of NVIDIA TAO. I am not sure whether it is released or not. Typically we get pre-production units for review which have some technology demonstrations that later get released. However, I do not know about how NVIDIA is handling this particular one. Thanks for watching!
Great review and thanks for your video. Is that mentioned 32Gib is for Graphic Memory? or just RAM?
The Jetsons have unified memory, which means that the memory is shared between the CPU and GPU. Thanks for watching!
A Roman Emperor on AI. Pretty cool.
I hadn't even thought about that, great observation! I just grabbed the nearest book, but reading something thousands of years old and having a machine transcribe it into words is rather mind-boggling. Thanks for watching, and for taking the time to comment!
What about a PCIe to SATA adapter? Does it recognize it?
I don't know, you should ask on the official NVIDIA AGX Orin forum: forums.developer.nvidia.com/c/agx-autonomous-machines/jetson-embedded-systems/jetson-agx-orin/486
Do you know if the old AGX Xavier also doesn't need a host to install JetPack anymore?
This is one of the features of JetPack 5.0 on the AGX Orin, which is soon arriving for the Xavier. It's not out yet, so I don't know if you can upgrade through apt from JetPack 4.5+ or not. Thanks for watching!
Very nice. Would Riva work on the Jetson Nano as well?
Thanks
Eran
I do not know. I would guess yes, but for a definitive answer you should ask on the official NVIDIA Nano forum. Thanks for watching!
Can we get the exact link of the demo (PeopleNet) please? Nice video!
The demo was supplied with the demonstration pre-release system, and is not available directly. However, all of the functionality is now available on the jetson-inference Github repository: github.com/dusty-nv/jetson-inference
Good luck on your project!
@@JetsonHacks Thanks, but there is no peoplenet_workflow.ipynb in that git?
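For reference, the repository doesn't ship that notebook; its usual entry point is the detectNet Python API. Below is a minimal sketch in the spirit of the repo's own detectnet.py example (the network name, input URI and threshold are placeholders, and this is not the PeopleNet/TAO demo shown in the video):

    # Minimal detection loop with the jetson-inference / jetson-utils Python API,
    # along the lines of the repository's detectnet.py example.
    from jetson_inference import detectNet
    from jetson_utils import videoSource, videoOutput

    net = detectNet("ssd-mobilenet-v2", threshold=0.5)   # placeholder built-in network
    source = videoSource("file://test_video.mp4")        # could also be csi://0 or /dev/video0
    output = videoOutput("display://0")

    while source.IsStreaming() and output.IsStreaming():
        img = source.Capture()
        if img is None:                                  # timed out waiting for a frame
            continue
        detections = net.Detect(img)
        output.Render(img)
        output.SetStatus(f"{len(detections)} objects | {net.GetNetworkFPS():.0f} FPS")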
Can all of this be done on a normal PC (with an nvidia board)? Seems like the developer kit is a bit expensive...
The cool thing about the NVIDIA CUDA software stack is that it runs across their hardware platforms, minor niggles like versioning being ignored. If you have a recent NVIDIA graphics card in your PC, Pascal or Ampere, you can probably replicate something much like the examples shown here. Thanks for watching!
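A tiny hedged sketch of that idea (PyTorch used purely as an example framework; the same script runs unchanged on a desktop Pascal/Ampere card or on the Orin's integrated GPU, falling back to CPU if no CUDA device is present):

    # Device-agnostic CUDA usage: the same script runs on a desktop GeForce card
    # or on a Jetson's integrated GPU, since both expose the same CUDA stack.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    name = torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU"
    print("Running on:", name)

    x = torch.randn(8, 3, 224, 224, device=device)
    conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1).to(device)

    with torch.inference_mode():
        y = conv(x)
    print(y.shape)    # torch.Size([8, 16, 224, 224])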
Hello, what microphone / audio input were you using in this case?
A Logitech USB headset, H540: amzn.to/433Udya The Logitech H390 should work too. Thanks for watching!
That foam is not necessary for avoiding mechanical damage, and is it even ESD-safe or at least anti-static foam?
Thank you for sharing your opinion.
Does this support the Vulkan API? Would be fun to see you game on it for the lulz.
The version of JetPack 5 that the Orin runs supports the Vulkan 1.3 API. Unfortunately I'm not into games, so I'm not the right person for that demonstration. Perhaps one of the other folks who have one would be up for the challenge. Thanks for watching!
@@JetsonHacks Could you even just run a benchmark from 3DMark then? A lot of people think this chip, cut down to 1024 CUDA cores, will be in a next-generation Switch. Well, if you ever need an idea for some cool content, I'd set the power to something like 15 watts and see how it runs. Anyway, have a great day!
The DisplayPort connection does not work. The screen goes blank with the status "Inactive" for DP.
Please report this on the official NVIDIA Jetson AGX Orin forum to get support: forums.developer.nvidia.com/c/agx-autonomous-machines/jetson-embedded-systems/jetson-agx-orin/486 Thanks for watching!
Ah, Meditations. Our purpose is to breed, prune, and feed the soil. Just explore life and make art of it. I'm new to these types of products; is the SDK absolutely necessary, or is there some ease of development on a regular PC with a decent GPU? Any emulation software provided?
There is no emulation software. You can develop remotely on another machine using tools like VS Code and such, but most developers hook the machine up to a monitor and use standard tools. The AGX Orin has 12 CPU cores and 2048 GPU cores, so it's capable of fielding its own development environment. Thanks for watching!
Is it hot? Does the fan run hard during the demo?
The fan was not running. I didn't measure the temperature, but it is something to think about. Thanks for watching!
@@JetsonHacks Something to think about is whether the fan is broken 😅
@@johnny_123b Fan works, this just isn't enough load to activate it. When I add some more things, the fan starts up.
@@JetsonHacks Amazing
I wish the price reflected the value of the hardware, instead of the utility value it provides to large enterprises. I'd love to update my Nano to something from this decade, but I don't think this hardware is worth more than $399, and it comes down to having a GT3030 with a cellphone processor clocked faster than normal.
Thanks for sharing your opinion. I'm not sure how to make a good comparison between a GPU and a full SoC (e.g. on the Orin 1/3 of the compute is taken up by the Deep Learning Accelerator "DLA" which the GPU doesn't have) but if it doesn't fit your needs or perceived value/price point it doesn't work for you. Thanks for watching!
@@JetsonHacks I suppose I worded this poorly, and should have said: if NVIDIA sold a consumer version with the enterprise-focused accelerators removed, or disabled because they were faulty, I wouldn't price it above $399, or $450 if it only comes in a 32GB variant.
@@JetsonHacks Holy crap, they announced the Orin NX for much closer to where I wanted it, at only $599 with 16GB of RAM, and $399 for 8GB. I need to upgrade my Seeed Studio Jetson Mate if it supports it.
I just got my Orin. :> How do I run your demo again? Just search for PeopleNet on NGC? Is the Jupyter notebook built in, or do I need to do some setup? Thx
Congrats! The demos were preloaded on that machine (it was from a preproduction run), so I don't know exactly how to replicate the demos. However, Jupyter should be installed already I believe. I would get started with this excellent resource from Dusty and NVIDIA: github.com/dusty-nv/jetson-inference Thanks for watching!
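For anyone starting from the jetson-inference repository linked above, its documented Python API reduces live detection to a few lines. This is a sketch based on that project's examples, not the exact preloaded demo from the video; the model name and camera/display URIs are placeholders to adjust for your own setup:

```python
# Sketch of the jetson-inference Python API (github.com/dusty-nv/jetson-inference).
# The model name and the camera/display URIs are placeholders.
import jetson_inference
import jetson_utils

net = jetson_inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson_utils.videoSource("csi://0")        # or "/dev/video0" for a USB camera
display = jetson_utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()                          # frame lands in GPU-accessible memory
    detections = net.Detect(img)                    # TensorRT-accelerated inference
    display.Render(img)
    display.SetStatus(f"Object Detection | {net.GetNetworkFPS():.0f} FPS")
```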
The Jetson AGX Orin: can you test Stable Diffusion on it? And also a good video editing program like DaVinci. The AI suggests that you can run both on there. I have an old laptop, and whereas I'm not concerned with playing high-end games, it looks like this may be a good purchase to consider for running LLMs locally, Stable Diffusion, and general video editing, 4K video for example.
Here's a video of it running Stable Diffusion: ua-cam.com/video/HlH3QkS1F5Y/v-deo.htmlsi=Mw26As_sPSHjIUdX This is probably not a good choice for a desktop/laptop replacement for tasks like video editing. However, there is a lot of GPU-accessible RAM for running LLMs. You can see a large number of different AI and ML examples on www.jetson-ai-lab.com. Everything is open source, so it's inexpensive to get started. This includes LLMs, Text & Vision (VLM), Vision Transformers, RAG & Vector Database, ASR and TTS, image generation like Stable Diffusion, and much more. Thanks for watching!
@@JetsonHacks Thanks. It was just a tide-me-over suggestion by the AI until I buy the 5 to 6k PC with an RTX 4090 or 4080 Super… But it assumes video editing would work fine on the Orin, which I was curious about. It even suggested I install DaVinci on both the laptop and the Orin and use proxy files: load up video files to edit, export proxy files to the laptop, edit in low-quality mode, then move back to the Orin to process. Very interesting concept.
@@playthisnote I'm sure there are programs that you can use on the Orin that can edit video. The issue is that the Orin is an ARM machine, not x86 or Mac. This means that they're out of the mainstream for many applications, or it can be a challenge to get support because it's a relatively small community in comparison. Blender does work, but I'm not sure about Resolve or Premiere. The Orin is a good machine, but its purpose is different from a desktop replacement.
Was looking up the 32/64GB differences... The dev kit seems to be a mix: 32GB of memory with the full core count from the 64GB version.
Will wait till the 64GB comes out and gets cheaper.
I believe you are correct: the Dev Kit has 2K cores but 32GB of memory. The 64GB AGX Orin module will have 2K cores; the 32GB AGX Orin module will have a few hundred fewer GPU cores, as I recall. One thing to remember is that there's a big price bump in the memory markets right now to go from 32GB to 64GB, so there's some balancing to figure out how the pricing will change. Thanks for watching!
I think it will get cheaper in maybe 3 years. Check the AGX Xavier price - it still retails at the same $800.
@@AI-xi4jk I already have 2 of them, 16 & 32GB
@@JetsonHacks is it shared between CPU and GPU or does the GPU have its own memory?
@@lowfuel6089 The Jetsons all have a unified memory architecture, that is, shared memory. It's similar to how the new Apple M1s are set up; everything uses the same memory. The new Macs use LPDDR5 memory like the Jetsons. Thanks for watching!
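To make the unified memory point concrete, here is a rough sketch using Numba's CUDA bindings; picking Numba is my own assumption about tooling, not something from the video. The idea is that one managed allocation is visible to both the CPU and the GPU, and on a Jetson both sides are backed by the same physical LPDDR5, so no copies are involved:

```python
# Hedged sketch of unified (managed) memory with Numba's CUDA support.
import numpy as np
from numba import cuda

@cuda.jit
def double_in_place(arr):
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= 2.0

data = cuda.managed_array(1024, dtype=np.float32)   # one allocation, visible to CPU and GPU
data[:] = np.arange(1024, dtype=np.float32)         # written by the CPU

double_in_place[8, 128](data)                       # modified in place by the GPU
cuda.synchronize()
print(data[:4])                                     # CPU reads the result with no explicit copy
```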
'sudo apt install nvidia-jetpack' is giving me an error: some packages could not be installed??
It's hard to tell from your description what the issue might be. Please ask on the official NVIDIA Jetson Orin forum, where a large group of developers and NVIDIA engineers share their experience. Thanks for watching!
I wonder if Riva does other languages. Need to order one.
It is compatible with several languages, and they're working on more.
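As a rough illustration of how a different language is selected, the Riva Python client (nvidia-riva-client) exposes a language_code field on its recognition config. This sketch assumes a Riva server is already running locally with that language model deployed; the URI, language code, and audio file name are placeholders:

```python
# Hedged sketch: offline ASR with the Riva Python client for a non-English model.
# Assumes a local Riva server with the chosen language deployed; values are placeholders.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")
asr = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    language_code="es-US",                     # whichever language model you deployed
    sample_rate_hertz=16000,
    enable_automatic_punctuation=True,
)

with open("sample_16k.wav", "rb") as f:        # hypothetical mono 16 kHz WAV file
    response = asr.offline_recognize(f.read(), config)

for result in response.results:
    print(result.alternatives[0].transcript)
```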
There is some info from NVIDIA saying the PCIe slot does not support an external GPU. So can you please tell me what the purpose of the PCIe slot on the NVIDIA Jetson is?
The NVIDIA Orin Clara uses a GPU card in a PCIe slot. There are many PCIe peripherals besides GPUs, like network appliances and storage solutions. Since developers are designing their own products around the Orin, they can simply add PCIe peripherals without having to design their own PCIe extenders. Thanks for watching!
Is this a computer? Could I use this instead of building a single-CPU workstation? I have a Wacom and four other screens. Would this allow me to run the Wacom on its own CPU and the other screens on other CPUs?
This is a development machine for creating applications that run in low power environments. It is not really suited for what you are describing; you are better off with a regular desktop computer. Thanks for watching!
Are these available?
Yes, you can order them today. Because of the Covid lockdown in China, there may be some shipping delays. I have seen them at Arrow, Seeed Technologies and Silicon Highway. Thanks for watching!
As the CPU was the main bottleneck of the previous version (Xavier), could you please share some experience with CPU benchmarks on the new hardware?
The CPU is around 1.5 to 2X faster. I think most people overlook the system capabilities. CPU and GPU performance are very much tied to memory bandwidth. The Orin has LPDDR5 main memory, which is about 1.5X faster than the previous LPDDR4. To me, it feels significantly faster.
Benchmarking the Jetsons is tricky because of DVFS, which throttles performance to fit in the energy footprint requested. For example, it will not run the CPU at full throttle until there's a certain amount of workload for a certain amount of time. That is, unless you turn on performance mode. The Jetsons do that in order to save energy. People spend a lot of time tuning the energy modes (e.g. 15W, 30W, 50W, MAXN) to match their requirements by balancing clock speeds, number of cores and so on. It's quite complicated, and to be honest, doing it correctly is a little beyond where I am.
@@JetsonHacks Thank you so much for your detailed answer. This helps me a lot as I am still waiting for my order that arrives in 2024😄.
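For readers who want to poke at the power modes mentioned above, the stock JetPack tools report the active mode and clock settings. This is only a small sketch that shells out to those tools; it assumes a standard JetPack install where nvpmodel and jetson_clocks are on the PATH, and some releases require running them with sudo:

```python
# Small sketch: query the current Jetson power mode and clock settings.
# Assumes a stock JetPack install; some releases require sudo for these tools.
import subprocess

def current_power_mode() -> str:
    # 'nvpmodel -q' prints the active power mode (e.g. 15W, 30W, 50W, MAXN) and its id
    out = subprocess.run(["nvpmodel", "-q"], capture_output=True, text=True)
    return out.stdout.strip()

def show_clocks() -> str:
    # 'jetson_clocks --show' dumps the current CPU/GPU/EMC clock configuration
    out = subprocess.run(["jetson_clocks", "--show"], capture_output=True, text=True)
    return out.stdout.strip()

if __name__ == "__main__":
    print(current_power_mode())
    print(show_clocks())
```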
Impressive
I know it didn't sound like it, but that was me excited when I saw the ASR results. It's beyond impressive to do that while running another large model at the same time. Thanks for watching!
Wait, that's really an AI? I thought you were being a motivational speaker haha! Good stuff!
Thank you for the kind words, and thanks for watching!
Thanks for the demo. Would there be a way to know the GPU and CPU clocks in the 15W mode?
You are welcome. This is a pre-production model, so the numbers might not be the same on a production unit. From /etc/nvpmodel.conf: in 15W mode, 4 CPUs are online with a maximum frequency of 1113600 kHz (about 1.1 GHz). The GPU max frequency is 420750000 Hz (about 421 MHz). Thanks for watching!
@@JetsonHacks Seems a bit low compared to xavier nx on the same kind of board. Thanks for the feedback
@@JetsonHacks Any indication on how many GPU cores are on in that config?
That is... concerning, given how much of the SoC has been lopped off.
How did you install DeepStream?
This is a review unit, it was preinstalled. However, installing JetPack 5.0 also installs DeepStream. Thanks for watching!
@@JetsonHacks I tried to use SDK Manager to flash the device, but I didn't see DeepStream in the options.
@@saazadpour Ask this question in the official NVIDIA Orin forum, I'm sure they know how to help.
How do I download the PeopleNet repository?
catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet
Good heavens 😁👍 Cool, but I bet it tops out at well over 3 grand if it compares to the TX2. This thing is sweet!
You can order it now from Arrow, Silicon Highway or Seeed Technologies for $1999. Thanks for watching!
@@JetsonHacks not a bad price!👍
@@wishicouldarduino8880 It's certainly a reasonable price for what it is and the market it serves.
@@JetsonHacks No doubt. I think it might be a bit ahead of me. I would love to put one of these on my Annie robot, but I think I need to watch more of your videos for direction. Getting all of these things connected and working together in ROS is quite complicated, to say the least.
@@wishicouldarduino8880 Yes, integrating robotics components is quite an engineering task. Good luck on your project!
Can it run BlazingSQL??
I do not know. Please contact BlazingSQL for clarification.
Hey Jim, how does the performance of this box compare to a Linux PC with a 2080ti or 3080ti installed for training ML models for example?
They aren't in the same league. Remember, the Orin only uses 60 watts; the machines you mention don't even get started with less than 10X that.
While you can train ML models with the Orin, the real purpose is to run multiple inference nodes on pretrained models. For example, in robotics, where you have to do vision recognition, object detection, path planning, segmentation and such all at the same time.
Thanks for watching!
@@JetsonHacks Thanks Jim!
Wow
Thanks for watching!
Could this machine run Windows 11?
No, it cannot. Thanks for watching!
They need to put this hardware in the next NVIDIA Shield or Nintendo Switch ~ toss it in a performance Android work tablet, even.
Good ideas! Thanks for watching!
What fps?
What FPS are you asking about?
@@JetsonHacks If I want to use the Jetson AGX Orin to process 4 IP cameras, detect people (YOLOv8m), and track (SORT) using DeepStream, what resolution and how many frames per second (FPS) can I use? What if I have a Jetson Orin NX?
@@marceloraponi8566 That's not an easy question to answer. It's a development question which depends on a variety of factors, including the skills of the developers. Typically you would use DeepStream to both detect and track objects. It will depend on which resolutions you choose, the power envelope you need to work in, the frame rates and resolutions of the IP cameras in use, network speed and so on. You can get a better estimate from the official NVIDIA Jetson Orin forums, where a large group of developers and NVIDIA engineers share their experience: forums.developer.nvidia.com/c/agx-autonomous-machines/jetson-embedded-systems/jetson-agx-orin/486.
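Purely as a starting-point sketch, and not a performance claim, a multi-camera DeepStream pipeline is usually assembled from the standard GStreamer elements. The camera URIs and the nvinfer config path below are placeholders, and a real deployment would add nvtracker for SORT-style tracking and tune batch size and resolution against the chosen power mode:

```python
# Hedged sketch of a multi-source DeepStream pipeline via the GStreamer Python bindings.
# Camera URIs and the detector config path are placeholders; nvtracker is omitted for brevity.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

CAMERAS = ["rtsp://cam1/stream", "rtsp://cam2/stream"]   # hypothetical IP camera URIs
INFER_CONFIG = "config_infer_primary.txt"                # hypothetical nvinfer config file

# Feed each camera into one sink pad of the batching nvstreammux element.
sources = " ".join(
    f"uridecodebin uri={uri} ! m.sink_{i}" for i, uri in enumerate(CAMERAS)
)
pipeline_desc = (
    f"{sources} nvstreammux name=m batch-size={len(CAMERAS)} width=1280 height=720 "
    f"! nvinfer config-file-path={INFER_CONFIG} "
    f"! nvmultistreamtiler rows=1 columns={len(CAMERAS)} width=1280 height=360 "
    "! nvvideoconvert ! nvdsosd ! nveglglessink"
)

pipeline = Gst.parse_launch(pipeline_desc)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```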
The sad thing is that for this price you could buy a cheap PC with a 3050 (once GPU prices moderate) and it would perform better.
Why is that sad? I think it's great that you can get such great performance on the desktop at that price point! The AGX Orin is for a different market entirely, purpose-built for embedded products like robotics, autonomous vehicles, and smart video analytics. The Jetsons have 1 million developers, and there is an unbelievably wide variety of uses. Thanks for watching!
A 3050 has only 8GB of GPU memory, so it limits the size of the TensorFlow model.
The Jetson Orin is 8X faster if your TensorFlow model requires 32GB of GPU memory.
This SBC is really powerful, but for $2000 I'd buy a mini-ITX Ryzen, at least 4 times faster for half the price!!
Sounds like a shrewd choice. Thanks for watching!
@@JetsonHacks Oh no, thank you for taking the time to show off your device 😉
I'd like to know where to buy a 32GB GPU that can fit into a mini-ITX Ryzen and costs $2000.
I need it to run 32GB TensorFlow models.
@@Haskellerz If you've reached the point where you need a 400W+ graphics card, you should buy a proper case for that 6000€+ baby, because I assume you're doing this with a Quadro or WX with ECC 😗.
And to be honest, those video cards also require CPU power if you don't want to starve your GPU. We usually use dual Xeon CPUs as well as 128GB+ of RAM for the CPU alone, because streaming 32GB of tensor data back and forth requires a whole lot of it.
I don't see how this board is going to do the job, really. I personally use 2 x K6000 alongside 2 x P4000 and 1 x RTX 4000, plus dual Xeons with 256GB, a fully ECC 10-year-old machine!
@@goldnoob6191 If you use unified memory on the Jetson Orin, there is no transfer latency between CPU and GPU, because no memory copies are needed between them.
Looking at the GPU prices... P4000 and K6000 are around $1500
2 x P4000 cost more than the $2000 Jetson Orin
2 x K6000 cost more than the $2000 Jetson Orin
Personally, I use Google cloud right now because GPU prices are bad.
Really need a good GPU with lots of memory.
Thank you so much for this video, and to be honest, I am sooo jealous of you :/ I don't have any AGX card :( :D
You are welcome. There's no reason to be jealous of me, I'm sure you will be able to get an AGX at some point. Thanks for watching!
@@JetsonHacks I hope so 🙏🏻 (but it's too expensive for me), and I am always watching your videos. I am waiting for your deep learning and GPU-based OpenCV videos :)
@@MrFirzen91 I've worked with the NVIDIA Deep Learning Institute on different courses, some of which are free! Check it out: www.nvidia.com/en-us/training/
So expensive..😢
I hear you, the flagship models usually are. However, it's still $500 less than the AGX Xavier was when it was introduced in 2018. The Orin NX modules should be out in Q4, and they start at $399. Thanks for watching!
@@JetsonHacks Yeah, luckily I'm still in school (I have Nanos and AGX Xaviers) and get an education discount. Still working on a very fun and nearly complete couple of projects with ML and robotics.
@@user-wc1em7pc2p Cool, I hope you can share them with us!
If you compare the price with the capabilities, this Orin is extremely cheap.
@@JetsonHacks How do you use the Orin NX modules? Can they be mounted on a Jetson Nano or Jetson Xavier NX carrier board?
way too expensive
Thanks for sharing your opinion, and thanks for watching!