regionaltantrums
Compilers in Rust: Instruction Lowering and Binary Emission in Cranelift (Part 5)
#rust #programming #compiler #bytes #computerscience
In this video, we delve into Cranelift’s instruction lowering and binary emission flow. Specifically, we explore how Cranelift translates CLIF op-codes into machine instructions (VCode) and ultimately into binary machine code bytes.
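To make that flow concrete, here is a minimal sketch that builds a CLIF function and compiles it down to machine-code bytes. It assumes the cranelift-codegen, cranelift-frontend, and target-lexicon crates; exact signatures vary across Cranelift versions (older releases, for instance, take no ControlPlane argument):

```rust
use cranelift_codegen::control::ControlPlane;
use cranelift_codegen::ir::{types, AbiParam, InstBuilder};
use cranelift_codegen::settings::{self, Configurable};
use cranelift_codegen::{isa, Context};
use cranelift_frontend::{FunctionBuilder, FunctionBuilderContext};

fn main() {
    // Compilation flags and settings (see 00:14:26).
    let mut flags = settings::builder();
    flags.set("opt_level", "speed").expect("valid flag");
    let isa = isa::lookup(target_lexicon::Triple::host())
        .expect("host ISA supported")
        .finish(settings::Flags::new(flags))
        .expect("ISA construction");

    // Build a tiny CLIF function: fn(i32, i32) -> i32 { a + b }.
    let mut ctx = Context::new();
    ctx.func.signature.params.push(AbiParam::new(types::I32));
    ctx.func.signature.params.push(AbiParam::new(types::I32));
    ctx.func.signature.returns.push(AbiParam::new(types::I32));

    let mut fb_ctx = FunctionBuilderContext::new();
    let mut b = FunctionBuilder::new(&mut ctx.func, &mut fb_ctx);
    let block = b.create_block();
    b.append_block_params_for_function_params(block);
    b.switch_to_block(block);
    b.seal_block(block);
    let (x, y) = (b.block_params(block)[0], b.block_params(block)[1]);
    let sum = b.ins().iadd(x, y);
    b.ins().return_(&[sum]);
    b.finalize();

    // Lowering (CLIF -> VCode) and emission (VCode -> bytes) both happen here.
    let compiled = ctx
        .compile(&*isa, &mut ControlPlane::default())
        .expect("compilation");
    println!("emitted {} bytes of machine code", compiled.code_buffer().len());
}
```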
Video Chapters
00:00:00 - Intro
00:04:32 - High-Level Overview of Lowering and Binary Emission Flow
00:08:05 - Architecture-Specific Backend Type and Traits
00:14:26 - Compilation Flags and Settings
00:17:43 - Inputs for Lowering (ABI, Function Signatures, Dominator Trees, etc.)
00:25:40 - The Lower and VCode Types
00:30:00 - Side Note on Cranelift’s Design
00:32:48 - The MInst Type
00:34:19 - The MachBuffer Type and Binary Emission
00:41:35 - Testing Binary Emission for the RV32 Target
00:49:30 - Summary and Wrap-Up
Views: 301

Videos

Compilers in Rust: Learning to Add a Backend to Cranelift (Part 4 - Design and Key Data Structures)
Views: 281 • 1 month ago
I’m finally ready to take the first steps toward building a backend for Cranelift! While Cranelift doesn’t currently support 32-bit backends, I’m particularly interested in adding support for an RV32 (RISC-V 32-bit) backend. In this video, I’ll explore the key data structures, design decisions, and steps involved in adding a backend as I learn and experiment with the process. 00:00:00 Intro and...
Compilers in Rust: How to read Cranelift’s (ISLE) lowering rules (Part 3)
Views: 252 • 1 month ago
ISLE is a domain-specific language used in the Cranelift compiler to describe how high-level instructions (like iadd) are transformed into low-level machine code. It uses pattern matching and rewriting rules to make instruction lowering simple and flexible. This video dives deeper into ISLE, i.e., it’s focused on reading production ISLE code. 00:00:00 Intro and Recap of Part 1 00:05:04 Reading ...
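For a taste of what such rules look like, here is a schematic ISLE lowering rule (paraphrased in the style of Cranelift's backend lower.isle files, not copied verbatim; `rv_add` stands in for a constructor that emits the target's add instruction):

```isle
;; Lower a CLIF integer add into a machine-level add.
;; `lower` is the entry-point term; `has_type`/`iadd` match on the
;; CLIF instruction, and the right-hand side emits the MInst.
(rule (lower (has_type (fits_in_64 ty) (iadd x y)))
      (rv_add x y))
```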
Compilers in Rust: Understanding Cranelift’s (ISLE) lowering rules (Part 2)
Views: 185 • 2 months ago
So, how does Cranelift, a Rust-based code generator, efficiently handle instruction selection and lowering? In this episode of Exploring Cranelift, we dive into ISLE, Cranelift's own DSL, short for Instruction Selection or Lowering Expressions. ISLE streamlines converting Cranelift IR (clif) into MachInsts for machine code generation. Key topics include: - Writing rewrite rules - The Context tra...
Compilers in Rust: Cranelift, the All-Rust Codegen Alternative to LLVM (No C/C++, Part 1)
Views: 1.1K • 3 months ago
In this video, we explore Cranelift, a code generator built entirely in Rust. We begin by explaining what Cranelift is and why LLVM may not always be the optimal choice for compiler engineering. We then dive into Cranelift’s intermediate representation (IR), highlighting its use of types, static single assignment (SSA), and control flow graphs (CFGs). Next, we demonstrate a Rust-to-Cranelift IR...
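For reference, a minimal function in Cranelift's textual IR looks like this (one basic block, SSA values v0..v2):

```clif
function %add(i32, i32) -> i32 {
block0(v0: i32, v1: i32):
    v2 = iadd v0, v1
    return v2
}
```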
Shading Languages & CubeCL: GPU Programming in Rust (Part 3)
Views: 622 • 3 months ago
This video picks up from where we left off in the last stream, providing a recap of various approaches to GPU programming, including: - Shading languages: WGSL (WebGPU), GLSL (OpenGL), HLSL (Direct3D), and MSL (Metal) - Single-source methods like CUDA, ROCm, and OpenCL. We then shift our focus to CubeCL, a Rust-based GPU runtime, and explore how it enables writing GPU kernels directly in Rust. In...
Shading Languages & CubeCL: GPU Programming in Rust (Part 2)
Views: 346 • 3 months ago
In this video, we explore popular shading languages: WGSL (WebGPU), GLSL (OpenGL), HLSL (Direct3D), and MSL (Metal), and dive into the single-source method with CUDA, ROCm, and OpenCL. We then shift to CubeCL, a Rust-based GPU runtime, and discuss how CubeCL allows writing GPU kernels directly in Rust. We'll cover: - The architecture of CubeCL and its runtimes (cubecl_wgpu & cubecl_cuda) - Key co...
Something new: CubeCL, Writing Pure Rust GPU Kernels.
Views: 770 • 4 months ago
This stream is mostly me exploring a new Rust crate I came across. Since the project is quite interesting, I recorded it so I can revisit it in the future. P.S.: The audio quality isn't the best, so you might want to turn up the volume. 00:00:00 Intro to CubeCL 00:05:45 CubeCL uses proc-macros to annotate kernels 00:15:18 Auto-vectorization, Auto-tuning, Comptime 00:20:15 What is CubeCL at its ...
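The proc-macro approach mentioned in the chapters looks roughly like this (a sketch modeled on CubeCL's README examples; `square` is a hypothetical kernel, and actually running it requires the generated `square::launch` host call plus a runtime such as cubecl_wgpu):

```rust
use cubecl::prelude::*;

// `#[cube(launch)]` turns this Rust function into a GPU kernel plus a
// host-side launcher; ABSOLUTE_POS is CubeCL's global thread index.
#[cube(launch)]
fn square<F: Float>(input: &Array<F>, output: &mut Array<F>) {
    // Bounds check: the grid may be larger than the array.
    if ABSOLUTE_POS < input.len() {
        output[ABSOLUTE_POS] = input[ABSOLUTE_POS] * input[ABSOLUTE_POS];
    }
}
```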
Heterogeneous Computing: What is LLVM and How Does Rust (rustc) Utilize It?
Views: 743 • 5 months ago
In this stream, we'll explore LLVM and its role in the Rust compiler. This serves as a continuation of one of my earlier streams about programming for Apple GPUs. We cover the basics of the Rust compilation pipeline and provide an overview of LLVM's architecture, including the front end, middle end, and back end. You'll receive a straightforward explanation of LLVM's core IR syntax and how it is opt...
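If you want to poke at this yourself, rustc can dump the LLVM IR it hands to LLVM's backends:

```rust
// Compile with: rustc -O --emit=llvm-ir add.rs
// This writes add.ll, the LLVM IR that an LLVM backend (x86, ARM64,
// RISC-V, ...) then turns into machine code.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    println!("{}", add(2, 3));
}
```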
Implementing a transformer with candle + Rust - (Part11, Back-prop implementation)
Views: 996 • 7 months ago
This series is about implementing a plain vanilla transformer in Rust using the candle framework. We cover the following - 0:00:00 a quick intro and recap 00:01:54 a simplified overview of back-prop in neural networks 00:07:55 how does candle perform back-prop 00:08:50 candle's Tensor and Op type(s) 00:12:00 topological ordering, impl and demo 00:22:00 problems with current impl of topological ...
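As a tiny illustration of what the backward pass gives you (assuming candle-core's Var/GradStore API; names may differ slightly across versions):

```rust
use candle_core::{Device, Var};

fn main() -> candle_core::Result<()> {
    // y = x^2, so dy/dx = 2x. candle records ops as tensors are built and
    // walks them in reverse topological order when backward() is called.
    let x = Var::new(3.0f32, &Device::Cpu)?;
    let y = x.as_tensor().sqr()?;
    let grads = y.backward()?;
    let dx = grads.get(x.as_tensor()).expect("gradient for x");
    println!("dy/dx = {}", dx.to_scalar::<f32>()?); // expected: 6
    Ok(())
}
```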
Implementing a transformer with candle + Rust - (Part10, Dispatching GPU Kernels)
Views: 286 • 7 months ago
Learn #GenAI with Rust. This series is about implementing a plain vanilla transformer in Rust using the candle framework. We cover the following - 00:00:00 Recap and Intro 00:02:45 Apple's (low overhead) Metal API 00:06:53 Dispatching metal kernels - Example code walkthrough 00:23:40 Summary and extra tidbits (what does a .metal kernel look like) 00:28:10 A peek at what's next (backprop) and wr...
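The dispatch flow in the walkthrough boils down to roughly this shape with the metal-rs bindings (a sketch; the kernel source here is a trivial stand-in, not one of candle's .metal kernels):

```rust
use metal::{CompileOptions, Device, MTLResourceOptions, MTLSize};

fn main() {
    let device = Device::system_default().expect("no Metal device");
    let queue = device.new_command_queue();

    // Compile a trivial compute kernel from source.
    let src = r#"
        kernel void double_it(device float *buf [[buffer(0)]],
                              uint id [[thread_position_in_grid]]) {
            buf[id] = buf[id] * 2.0;
        }"#;
    let lib = device
        .new_library_with_source(src, &CompileOptions::new())
        .unwrap();
    let func = lib.get_function("double_it", None).unwrap();
    let pipeline = device
        .new_compute_pipeline_state_with_function(&func)
        .unwrap();

    // A shared buffer visible to both CPU and GPU.
    let data: Vec<f32> = vec![1.0, 2.0, 3.0, 4.0];
    let buf = device.new_buffer_with_data(
        data.as_ptr() as *const _,
        (data.len() * std::mem::size_of::<f32>()) as u64,
        MTLResourceOptions::StorageModeShared,
    );

    // Encode and dispatch: one thread per element, then wait.
    let cmd = queue.new_command_buffer();
    let enc = cmd.new_compute_command_encoder();
    enc.set_compute_pipeline_state(&pipeline);
    enc.set_buffer(0, Some(&buf), 0);
    enc.dispatch_threads(MTLSize::new(4, 1, 1), MTLSize::new(4, 1, 1));
    enc.end_encoding();
    cmd.commit();
    cmd.wait_until_completed();
}
```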
Implementing a transformer with candle + Rust - (Part 9, the full transformer)
Views: 319 • 8 months ago
This series is about implementing a plain vanilla transformer in Rust using the candle framework. We cover the following - 00:00:00 Intro 00:01:24 walk-through of the transformer impl 00:04:30 side note on the backward pass and optimizer phase 00:05:20 metal kernel launches with candle 00:06:30 wrap-up In Part 10 we will take a peek at metal kernel launches and the backprop implementation in candle #ai #r...
Implementing a transformer with candle + Rust - (Part8, Decoder)
Views: 224 • 8 months ago
This series is about implementing a plain vanilla transformer in Rust using the candle framework. We cover the following - 00:00:00 Recap 00:00:43 What is a Decoder 00:01:53 walk-through the decoder impl 00:03:20 masked multi-head attn and causal masks 00:05:06 continue decoder impl walk-through 00:09:35 unit test decoder with dummy inputs 00:11:50 what's next (full transformer, backprop in can...
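On the causal-mask point, here's one way to build such a mask with candle (illustrative; not necessarily the code used in the video):

```rust
use candle_core::{Device, Result, Tensor};

/// Build a (t, t) causal mask: 0.0 on/below the diagonal, -inf above it,
/// so softmax zeroes out attention to future positions.
fn causal_mask(t: usize, device: &Device) -> Result<Tensor> {
    let data: Vec<f32> = (0..t)
        .flat_map(|i| (0..t).map(move |j| if j <= i { 0.0 } else { f32::NEG_INFINITY }))
        .collect();
    Tensor::from_vec(data, (t, t), device)
}

fn main() -> Result<()> {
    let mask = causal_mask(4, &Device::Cpu)?;
    println!("{mask}"); // added to attention scores before softmax
    Ok(())
}
```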
Implementing a transformer with candle + Rust - (Part7, Encoder)
Views: 232 • 8 months ago
This series is about implementing a plain vanilla transformer in Rust using the candle framework. We cover the following - 00:00:00 Recap 00:02:53 Encoder impl with candle.rs 00:06:42 Unit testing the encoder impl 00:11:33 Preview of what's next - decoder 00:13:02 design refactor and rough edges in candle 00:16:00 wrap-up In Part8 - we will cover the full decoder part of the transformer archite...
Implementing a transformer with candle + rust - (Part6, Residual Layer)
Views: 157 • 8 months ago
This series is about implementing a plain vanilla transformer in Rust using the candle framework. We cover the following - 00:00:00 Recap 00:01:05 Intro to Residual layers 00:04:02 Implement RL with candle 00:08:38 What's next and wrap-up In Part7 - we will cover the full encoder part of the transformer architecture #ai #rust #apple #gpu
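The heart of a residual layer is just a skip connection around a sub-layer. A schematic candle version (dropout and layer norm elided):

```rust
use candle_core::{Result, Tensor};

/// out = x + sublayer(x): the skip connection lets gradients flow past
/// the sub-layer unchanged.
fn residual<F>(x: &Tensor, sublayer: F) -> Result<Tensor>
where
    F: Fn(&Tensor) -> Result<Tensor>,
{
    let s = sublayer(x)?;
    x + &s
}

fn main() -> Result<()> {
    let x = Tensor::ones((2, 4), candle_core::DType::F32, &candle_core::Device::Cpu)?;
    let y = residual(&x, |t| t.affine(2.0, 0.0))?; // sub-layer here: t * 2
    println!("{y}");
    Ok(())
}
```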
Implementing a transformer with candle + Rust (Part5, Multi-head Attention Block)
Views: 416 • 9 months ago
Implementing a transformer with candle + Rust (Part4, Feed-Forward Layer)
Views: 275 • 9 months ago
Implementing a transformer with candle + Rust - (Part3, Layer Normalization)
Views: 710 • 9 months ago
Implementing a transformer with candle + Rust - (Part2, Positional Encodings)
Views: 1.1K • 10 months ago
Implementing a transformer with candle + Rust - (Part 1, Input Embeddings)
Views: 3.2K • 10 months ago
Heterogeneous computing: Experimenting with Apple Silicon gpu(s) and metal-rs bindings
Views: 755 • 11 months ago
A quick deep dive into asynchronous programming in an embedded context with Rust and Embassy
Views: 805 • 1 year ago
Stochastic streams: Dynamic memory allocations in bare-metal environments using Rust
Views: 491 • 1 year ago
Stochastic streams: Runtime instrumentation of the Linux kernel with eBPF and Rust
Views: 323 • 1 year ago
Building platform agnostic drivers for embedded crypto-accelerators in Rust
Views: 395 • 2 years ago
Async and Await in Rust
Views: 512 • 2 years ago
Building real time systems sans an RTOS - using rtic and Rust
Views: 1.6K • 2 years ago
Rust Vs. C, plus an intro to Rust.
Views: 580 • 2 years ago
Learn signal processing by reverse engineering Google nearby (audio) connections
Views: 332 • 4 years ago

COMMENTS

  • @axeDev22
    @axeDev22 1 day ago

    Hi sir, thank you so much for the video! Rust content is kind of rare on UA-cam, and I really appreciate you sharing it. Could you please share the note you used in the video? I would be grateful if you can.🙏

    • @regionaltantrums
      @regionaltantrums 1 day ago

      @@axeDev22 Thanks for the kind words. I put the note in a GitHub gist here - gist.github.com/nihalpasham/ba0a53200c3f870db54734a113401f2a

    • @axeDev22
      @axeDev22 1 day ago

      @@regionaltantrums Thank you very much. Have a great day, sir.

  • @3numa
    @3numa 2 days ago

    Watched the whole series. That was good, thanks! At some point I think you said this was going to be used for a full language translation example. Is that happening, or is this it? What about training, inference, saving and loading the weights, etc.?

    • @regionaltantrums
      @regionaltantrums 2 days ago

      That was the plan, but unfortunately, I got pulled into a few other things. Maybe someday, when I have more time, I can finish the remaining pieces. Thank you for watching!

  • @__noob__coder__
    @__noob__coder__ 9 days ago

    Saved this to Watch Later. I was recently implementing a Buddy Allocator for my OS in Rust. In the future, I want to shift it to a Slab Allocator.

    • @regionaltantrums
      @regionaltantrums 9 days ago

      @@__noob__coder__ Cool. Would like to check it out. Please feel free to drop a link when you do.

  • @sanchayanmaity5731
    @sanchayanmaity5731 10 days ago

    Cool series so far.

  • @Heater-v1.0.0
    @Heater-v1.0.0 22 days ago

    This is great to see. I have always had the sneaking suspicion that if one creates a new programming language but then relies on some other language to do most of the work of generating actual executables, then one is far from completing the job. Mind you, I know almost nothing about compilers apart from having devised my own toy language, getting it to generate very inefficient x86 code, and never making my language self-hosting. That little exercise taught me how big a job it is. All the best for Cranelift.

    • @regionaltantrums
      @regionaltantrums 22 days ago

      @@Heater-v1.0.0 Thank you for watching! PS: I’m not the author/maintainer of Cranelift.

  • @zandrrlife
    @zandrrlife 22 days ago

    🔥

  • @zandrrlife
    @zandrrlife 1 month ago

    The compiler whisperer. Great content, as per usual.

  • @sekarshreyaaspanchaksharam3679
    @sekarshreyaaspanchaksharam3679 1 month ago

    Hi

  • @minirop
    @minirop 1 month ago

    I've started to fiddle with cranelift recently and this series is a nice breakdown of "what goes where and what each part does".

  • @mrpocock
    @mrpocock 1 month ago

    It will be interesting to see how far they can get to things that can be used in practice.

    • @regionaltantrums
      @regionaltantrums 1 month ago

      @@mrpocock Indeed.

    • @mrpocock
      @mrpocock 1 month ago

      @regionaltantrums Running some small models locally, I've had situations where I'm pinning one CPU at 100% in Python doing the data preprocessing and tokenisation. A Rust ecosystem would fix that.

  • @amidamarurookie
    @amidamarurookie 2 months ago

    are you gonna do something with Cranelift? 👀

    • @regionaltantrums
      @regionaltantrums 2 months ago

      Currently, Cranelift supports only 64-bit backends (aarch64, riscv64, s390x, x86-64). I’m interested in contributing a 32-bit backend to Cranelift.

    • @amidamarurookie
      @amidamarurookie 2 months ago

      @@regionaltantrums awesome, looking forward to you sharing this journey 🥰

  • @perc-ai
    @perc-ai 3 months ago

    This is incredible. Have you taken a look at DARPA's TRACTOR program and how they intend to build a tool that could convert all C to Rust?

    • @regionaltantrums
      @regionaltantrums 3 months ago

      Yes, I've looked into TRACTOR. While it's an interesting project, in my experience, code translation is a non-trivial problem. For instance, mapping C design patterns to idiomatic Rust is anything but straightforward, especially with Rust's evolving specifications. That said, more projects are tackling this challenge. If you're interested, the C2Rust project by Galois and Immunant is an early attempt to address it.

    • @perc-ai
      @perc-ai 3 months ago

      @@regionaltantrums Thanks, I'll look into that project, C2Rust; I haven't read about that one! I wouldn't be surprised if you had a PhD; your idea is quite novel, and solving it at the IR level sounds promising! I'll keep Cranelift in mind and tell my colleagues about it.

    • @regionaltantrums
      @regionaltantrums 3 months ago

      @@perc-ai I don’t have a PhD, just a deep interest in the topic. Thank you for watching, I really appreciate it.

  • @asdf-ik7nc
    @asdf-ik7nc 3 months ago

    Hello sir, you have explained this in quite a lot of depth. Thanks for this wonderful work.

  • @TheNaive
    @TheNaive 3 months ago

    Hello, I commented on a previous video regarding your thoughts on Mojo. You don't have to program in it, but could you explain how it uses MLIR, unlike Rust, which uses LLVM?

    • @regionaltantrums
      @regionaltantrums 3 months ago

      LLVM is a low-level compiler framework that allows multiple languages (such as Rust, Julia, Swift, and C++) to be compiled down to LLVM IR, a form close to assembly language. Once in LLVM IR, we can select an LLVM back-end, such as x86, ARM64, or RISC-V, to generate actual machine code. On the other hand, MLIR is a higher-level framework. It uses dialects (or multiple intermediate representations) to progressively lower a source language (such as Mojo). The goal of multi-level IR lowering is to simplify handling high-level, domain-specific optimizations (like tensor operations in ML/AI). Ultimately, we still need a code generator to produce machine code, so the final step is often lowering to LLVM IR and using an LLVM back-end to generate the executable code.

    • @regionaltantrums
      @regionaltantrums 3 months ago

      Even without MLIR, multi-level IR lowering is already present in most modern languages. For example, Rust follows this progression: Rust → HIR → THIR → MIR → LLVM IR. Similarly, languages like Swift and Julia also undergo multiple stages of lowering. MLIR introduces a framework that formalizes and extends this process, making it more flexible and accessible for a variety of domains. However, MLIR is currently primarily used within the AI/ML community, where it excels at handling domain-specific optimizations, such as tensor operations. Its broader adoption in other areas is still a question (I guess).
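To see those Rust stages concretely, the compiler can pretty-print most of them (the --emit flags are stable rustc; the -Zunpretty dumps need a nightly toolchain):

```rust
// Stable:   rustc --emit=mir double.rs           # MIR
//           rustc --emit=llvm-ir double.rs       # LLVM IR
// Nightly:  rustc -Zunpretty=hir double.rs       # HIR
//           rustc -Zunpretty=thir-tree double.rs # THIR
fn double(x: u32) -> u32 {
    x * 2
}

fn main() {
    println!("{}", double(21));
}
```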

  • @towel9245
    @towel9245 4 months ago

    Thanks for sharing, it was interesting 👍

  • @warriorblood92
    @warriorblood92 5 months ago

    How are you showing the CPU/memory use percentage in the top bar of your Mac? Any extension? Can you let me know? Thank you for the videos, btw. Appreciate it.

    • @regionaltantrums
      @regionaltantrums 5 months ago

      @@warriorblood92 it’s called stats, an open source project - github.com/exelban/stats

  • @SaschaRobitzki
    @SaschaRobitzki 5 months ago

    What VS Code theme are you using?

    • @regionaltantrums
      @regionaltantrums 5 months ago

      @@SaschaRobitzki I believe it’s called Monokai (light/dark)

  • @AyoDamilareMichael
    @AyoDamilareMichael 6 months ago

    Pls what's the VS Code theme? 😍

  • @emvdl
    @emvdl 6 months ago

    Thanks, good job 👍 Do you have a git repo?

    • @regionaltantrums
      @regionaltantrums 6 months ago

      Yes, here is the link - github.com/nihalpasham/optimus

    • @emvdl
      @emvdl 6 months ago

      @@regionaltantrums 👍

  • @ngocanle3037
    @ngocanle3037 7 months ago

    Is there any code on GitHub used in this video series?

    • @regionaltantrums
      @regionaltantrums 7 months ago

      Yes, here is the link - github.com/nihalpasham/optimus

    • @ngocanle3037
      @ngocanle3037 7 months ago

      @@regionaltantrums thank you !!

  • @SJ-ds8lp
    @SJ-ds8lp 7 months ago

    Subscribed!!!

  • @towel9245
    @towel9245 7 months ago

    Thanks for exploring / sharing this concept! I've been wondering about the idea of limiting a type to a certain range, e.g. an int to 1..10, and recently discovered that "refinement types" is the name for it. AFAIK it's not a first-class language feature anywhere, but compile-time asserts with flux already seem super useful. Thanks!
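flux aside, the runtime version of that 1..10 refinement in plain Rust is a checked newtype; what flux adds is proving the bound statically. A small illustration (plain std Rust, not flux syntax):

```rust
/// Invariant: the wrapped value is always in 1..=10.
#[derive(Debug, Clone, Copy)]
pub struct OneToTen(u8);

impl OneToTen {
    pub const fn new(v: u8) -> Option<Self> {
        if v >= 1 && v <= 10 { Some(Self(v)) } else { None }
    }
}

// For constants, the range check runs at compile time:
const SEVEN: OneToTen = match OneToTen::new(7) {
    Some(v) => v,
    None => panic!("out of range"), // a const panic fails the build
};

fn main() {
    println!("{:?}", SEVEN);
}
```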

  • @JReuben111
    @JReuben111 7 months ago

    Does Candle support a CUDA backend ?

    • @regionaltantrums
      @regionaltantrums 7 months ago

      Yes, Candle supports three backends: CPU, CUDA, and Metal. In this stream, I provide a quick walkthrough of Candle’s GPU implementation: ua-cam.com/video/Nhb5HIUeiMM/v-deo.htmlsi=Gf2ZsAI3BQXOkqH6

  • @fishingnightmares
    @fishingnightmares 7 months ago

    Thank you again for your work on this series. I didn't quite understand at the end why you need Dropout to impl Copy, though. I agree it's a cheap copy, but if you want to reuse a Dropout instance, why not declare it at your top level and pass in a reference to it?

    • @regionaltantrums
      @regionaltantrums 7 months ago

      It's been a while, so I can't recollect off the top of my head, but I believe it had something to do with having to refactor the entire implementation. An earlier version of my code used impl traits rather than the current enum-based design. The reason for the switch was the inability to easily Copy the residual layer when iterating through each encoder block in an encoder. So, my issue was the absence of Copyable types in the residual layer and not Dropout itself.

  • @fishingnightmares
    @fishingnightmares 8 months ago

    The way I understand the feed-forward layer is that it's there to capture how some previously calculated features affect others in a non-linear way; in other words, the side effects that some features may have on others, adding a non-linear relationship between them. For example, how the feature "my mood" affects features like "performance at my job" or "willingness to buy something", even when they are not linearly related.

    • @fishingnightmares
      @fishingnightmares 7 months ago

      Ok, after reading more on this, I think I understand now. Dropping it here in case it helps someone:
      - We feed a Tensor (normalized or not), with a size of e.g. 512 dimensions, as input to the FeedForward layer.
      - We chain 3 transformations over this Tensor:
        1. We expand the Tensor to a higher-dimensional size of, say, 2048. To do this we need 512 neurons, each of which has 2048 arbitrarily initialized weights. The linear transformation applied to this Tensor is basically a dot product of the arbitrary weight assigned to each neuron and the initial features, resulting in 2048 features per neuron.
        2. Then, we apply a non-linear transformation (ReLU) to this first linear result. This function sets all negative values to 0, breaking linearity.
        3. Lastly, we compress the ReLU result (2048 dimensions) back to the original embedding size (512) by following the same linear strategy as the first step, but this time we have 2048 neurons, each of which has 512 arbitrarily initialized weights. This yields the FeedForward layer output.
      One thing that still confuses me is how each neuron is able to feed its own result back to itself to correct its arbitrarily initialized weights. My guess is that comes later on, when the whole model is running and is able to apply a loss function to each prediction and "auto-correct" itself.
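That 512 -> 2048 -> ReLU -> 512 shape maps onto two Linear layers. A minimal candle_nn sketch (illustrative; not the series' exact code):

```rust
use candle_core::{Result, Tensor};
use candle_nn::{linear, Linear, Module, VarBuilder};

struct FeedForward {
    up: Linear,   // 512 -> 2048 expansion
    down: Linear, // 2048 -> 512 projection
}

impl FeedForward {
    fn new(vb: VarBuilder, d_model: usize, d_ff: usize) -> Result<Self> {
        Ok(Self {
            up: linear(d_model, d_ff, vb.pp("up"))?,
            down: linear(d_ff, d_model, vb.pp("down"))?,
        })
    }

    fn forward(&self, x: &Tensor) -> Result<Tensor> {
        // expand, break linearity with ReLU, then project back down
        self.down.forward(&self.up.forward(x)?.relu()?)
    }
}

fn main() -> Result<()> {
    let dev = candle_core::Device::Cpu;
    let vb = VarBuilder::zeros(candle_core::DType::F32, &dev);
    let ff = FeedForward::new(vb, 512, 2048)?;
    let x = Tensor::zeros((1, 8, 512), candle_core::DType::F32, &dev)?;
    println!("{:?}", ff.forward(&x)?.dims()); // [1, 8, 512]
    Ok(())
}
```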

  • @fishingnightmares
    @fishingnightmares 8 months ago

    Thank you for your work on this; I'm really looking forward to more episodes. I was actually reading O'Reilly's `Natural Language Processing with Transformers` (Python) but wanted to code in Rust, and found your channel: coming in clutch! 🥇 I have a question regarding the tokenizer part. I see you are explicitly encoding with special tokens, but you expect an array without them (as later on you hardcode a dimension of 8 when forwarding the positional embeddings). Why is that?

    • @regionaltantrums
      @regionaltantrums 8 months ago

      Thank you! Yes, the tokenizer includes special tokens. However, we're not using them yet. Instead, we're using a hard-coded input sequence for unit-testing purposes. We'll use these special tokens when we reach the training part of the project. Here's the plan: github.com/nihalpasham/optimus/issues/2

    • @fishingnightmares
      @fishingnightmares 8 months ago

      @@regionaltantrums Awesome thank you!

  • @shikharmishra8181
    @shikharmishra8181 8 months ago

    Part 9 would be to connect all these pieces that were created and finally complete the architecture, right?

    • @regionaltantrums
      @regionaltantrums 8 months ago

      Yes, I have merged the full Transformer implementation into my GitHub. The video explaining it will follow in some time. Note: this only includes the model's implementation; we still need to obtain the dataset, prepare it, and then do model training and validation.

  • @TheNaive
    @TheNaive 8 months ago

    In your previous video about GPU compute using Rust and WebGPU, the results were contradictory, i.e., GPU compute took longer than the rayon implementation: ua-cam.com/users/live6B-5Jd1l4qA?si=nRZtVEeP3POrpNOV Can you give a reason why that happened?

    • @regionaltantrums
      @regionaltantrums 8 months ago

      I believe the reason for the GPU's poor performance was that our measurements included the time it takes to load data onto the GPU, rather than just the time taken to compute the dot product alone.

  • @TheNaive
    @TheNaive 8 months ago

    Also, what is the difference between tiled GPU and naive GPU matrix multiplication?

    • @regionaltantrums
      @regionaltantrums 8 months ago

      In simple terms, the naive approach is straightforward but can be inefficient on GPUs due to excessive memory accesses and potential thread divergence. On the other hand, the tiled approach optimizes for the GPU architecture by breaking the computation into smaller tiles, leveraging shared memory, and improving data locality and thread efficiency. The tiled approach generally provides better performance than the naive approach for matrix multiplication on GPUs, especially for larger matrices, as it takes advantage of the GPU's parallel processing capabilities and memory architecture more effectively.

  • @TheNaive
    @TheNaive 8 months ago

    When you switch to the drawing window, your voice gets muted.

    • @regionaltantrums
      @regionaltantrums 8 months ago

      Yes, I realized that the audio wasn't captured when I switched over to my iPad after I finished recording. Perhaps I'll redo the stream again sometime when time permits.

  • @wajahatriazmirza
    @wajahatriazmirza 9 months ago

    I recall that you made some material about JTAG. Is there a separate channel, or was it removed?

    • @regionaltantrums
      @regionaltantrums 9 months ago

      I presume you're referring to this one - ua-cam.com/users/liveQeiAl35bAIQ?si=_0D5trP5Mzk0LR2b. If yes, that was a livestream, so it wouldn't appear under the videos section.

  • @imperlish
    @imperlish 10 months ago

    great work, thanks

  • @imperlish
    @imperlish 10 months ago

    Thanks for the video, is this available on GitHub?

  • @ronniechowdhury3082
    @ronniechowdhury3082 11 months ago

    Also think about adding a SIMD version. On 32-bit floats, that will give you near 8x on the CPU with lower latency.
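For reference, a dot product in that style with nightly's portable SIMD (std::simd; crates like `wide` offer similar types on stable):

```rust
#![feature(portable_simd)] // nightly-only as of this writing
use std::simd::prelude::*;

/// 8-lane f32 dot product: with AVX this processes 8 floats per iteration,
/// which is where the "near 8x" figure for 32-bit floats comes from.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let mut acc = f32x8::splat(0.0);
    let chunks = a.len() / 8;
    for i in 0..chunks {
        let va = f32x8::from_slice(&a[i * 8..i * 8 + 8]);
        let vb = f32x8::from_slice(&b[i * 8..i * 8 + 8]);
        acc += va * vb;
    }
    // Horizontal sum of the lanes, then the scalar tail.
    let mut sum = acc.reduce_sum();
    for i in chunks * 8..a.len() {
        sum += a[i] * b[i];
    }
    sum
}

fn main() {
    let a: Vec<f32> = (0..20).map(|i| i as f32).collect();
    let b = vec![2.0f32; 20];
    println!("{}", dot(&a, &b)); // 2 * (0 + 1 + ... + 19) = 380
}
```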

  • @joeymea
    @joeymea 11 months ago

    I followed this pretty closely to what you did, almost exactly the same as gpgpu's documentation, and it trips up when loading the WGSL file. If I go in and use the deprecated [[]] syntax, it begins to work. Not sure what I'm doing wrong?

  • @tanuvishu
    @tanuvishu 1 year ago

    Great talk

  • @JohnnyLin-z8u
    @JohnnyLin-z8u 1 year ago

    Have you considered working on oreboot? Is the goal similar to oreboot's?

  • @veenz50
    @veenz50 1 year ago

    Good work. Good video.

  • @ManojSharma-zd3nw
    @ManojSharma-zd3nw 1 year ago

    nice and detailed video, thanks

  • @amidamarurookie
    @amidamarurookie 1 year ago

    The sound is quite low; hopefully you can increase the volume in the next streams.

    • @regionaltantrums
      @regionaltantrums 1 year ago

      Sorry about that. My audio setup still needs some work. I’m hoping to fix that in the next stream (whenever that is)

  • @amidamarurookie
    @amidamarurookie 1 year ago

    hi @regionaltantrums, are you still interested in this p2p implementation? If yes, it would be nice if you walked through some of the protocols & transports modules there. I would be happy to watch your stream.

    • @regionaltantrums
      @regionaltantrums 1 year ago

      I was hoping to continue this as a series but unfortunately am unable to find time. I have way too many parallel engagements. But hopefully, if I do get the time, I’ll drop a note in advance.

  • @anneallison6402
    @anneallison6402 1 year ago

    Horrible audio

  • @maddelasaikarthik7563
    @maddelasaikarthik7563 1 year ago

    good one

  • @koutheir22
    @koutheir22 1 year ago

    The ".bss" section contains all global variables that are initialized to zero. For that reason, the ".bss" section is usually NOT stored in the binary (storing zeros is useless), and it is the responsibility of the entry point code to allocate it in memory and initialize all its bytes to zero, before starting any actual business logic.

  • @gauravtyagi8485
    @gauravtyagi8485 1 year ago

    I would like detailed videos on each module of p2p. By the way, the video is really good; otherwise it would be very difficult for me to sit for more than an hour and look at a bright screen :P

    • @regionaltantrums
      @regionaltantrums 1 year ago

      Thank you for your comment. My apologies for the delay in responding. I probably lost track of the notifications here. I’ll drop a note in advance when I do the next libp2p stream (hopefully not too far out)

  • @bobbilisarathkumar220
    @bobbilisarathkumar220 1 year ago

    Hi, when you used the load command, you didn't mention which binary file to load. Can you give some info about where you specified that?

  • @michal-opteran
    @michal-opteran 1 year ago

    Nice project, but I think the name "r-boot" would be better for such an ambitious one. The problem I see with the code structure is the lack of separation for pure CPU core architectures; they're all buried in `boards`. It may cause problems with abstracting things at a later date, especially for Cortex-A families or larger RISC-V.

    • @regionaltantrums
      @regionaltantrums 1 year ago

      Thanks. One of the goals of this project is its focus on simplicity, and the project layout is part of that. Unlike something like u-boot or other popular bootloaders, where build (i.e., make), toolchain, and src files are scattered across many directories and sub-directories, rustBoot uses a per-board folder that contains its entire hardware abstraction layer. This makes debugging and board bring-up a whole lot easier, but the trade-off is code redundancy (a small price). PS: I like your suggestion; I may just rename it to rboot :)

    • @michal-opteran
      @michal-opteran 1 year ago

      @@regionaltantrums Code redundancy is actually a huge software engineering mistake that costs a lot in long-term maintenance. It makes it harder for people to start off with their own board if they need one, since they would have to extract the CPU-specific stuff themselves. But that's only what over 20 years in SWE tells me... I'm glad you liked the name suggestion.

  • @bbbaaa9421
    @bbbaaa9421 2 years ago

    Great work, keep enlightening us with your Rust work. I appreciate it!

  • @shanedolan6229
    @shanedolan6229 2 years ago

    Can you share the code?

    • @regionaltantrums
      @regionaltantrums 2 years ago

      github.com/nihalpasham/JWT-based-device-auth

    • @shanedolan6229
      @shanedolan6229 2 years ago

      @@regionaltantrums exceptional doc. Thank you!