walking with Laurie needs to be a weekly thing
Bi-weekly. Too much fresh air can mess with your mind! *puts on aluminum hat*
Most definitely.
taking Laurie for a walk*
I completely agree!
I agree
the realisation that people could be harvesting our encrypted data so it can be decrypted when quantum computers become available is terrifying
who cares. got nothing to hide anyway
This just broke my brain
So basically, if someone was sniffing my wifi while I was talking with my mom on WhatsApp, they might actually brute-force the raw data in the future? Without having any information about the 4-way handshake or anything?
Yeah it's called harvest now, decrypt later
@@a21123 In theory yes
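To make "harvest now, decrypt later" concrete, here is a minimal sketch of the post-quantum key encapsulation meant to defeat it, assuming the open-quantum-safe `oqs` Python bindings (liboqs-python) are installed; the algorithm name varies across liboqs versions ("Kyber768" in older builds, the NIST name "ML-KEM-768" in newer ones):

```python
import oqs

# Algorithm name is version-dependent: older liboqs builds call it "Kyber768",
# newer ones use the NIST name "ML-KEM-768".
ALG = "ML-KEM-768"

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    public_key = receiver.generate_keypair()                  # receiver publishes this
    ciphertext, secret_tx = sender.encap_secret(public_key)   # sender wraps a fresh secret
    secret_rx = receiver.decap_secret(ciphertext)             # receiver unwraps the same secret
    assert secret_tx == secret_rx                             # both sides now share a symmetric key
```

In real deployments (e.g. TLS 1.3 hybrid key exchange) this is combined with a classical exchange like X25519, so security never drops below the status quo even if the PQC scheme turns out to be weak.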
The way you can speak unscripted for extended periods of time with literally zero filler words or pauses is insane
typical woman
I developed the same technique, it is like having a big library with a quite efficient librarian with almost no hiccups between tasks.
I wonder how she can do that
She's just smart bro... I can do that too.
@@omegawii Tbf, it's more than being smart. If you can do this naturally then you're just a really good speaker by nature.
We went from LaurieWired to Laurie in the Wild
Bro missed the opportunity to say LaurieWireless 😔🫥
@@xboneyt485 Wireless networks are not as safe as wired networks.
Laurie Wireless man 😂
Laurie, have you seen the Polish trains controversy? Security researchers had to reverse engineer train firmware to find and bypass malicious DRM that was sabotaging the trains in Poland. It is a fascinating story.
It has been covered at the last two C3 conferences; they got sued, stupidly. Absolute mayhem. Go Polish hackers!
that country is sabotaged by itself
I'm SOOO happy this wasn't a video about the job market. Good video as usual.
I came for the CS, and swooned for the awkward train segue. You're a riot.
TRAIN! plus 10 for the video!
100% Odds that 2025 is the Year of the Linux Desktop. Trust me guys. It's going to happen. This year.
edit: what have I done
So young people in tech who don't have an established career yet would be pretty safe learning Linux? Considering they want to, that is.
@@dovahking6514 😅 my sweet summer child...
If Gaben drops SteamOS that plays nicely with Nvidia out-of-the-box then you'll see a pretty big adoption I reckon.
@@Carhill If all you ever do is game, then sure. Haven't ever met a person like this.
@@upsilondiesbackwards7360 So you've never had roommates huh
05:18 the guy that walks by moves like an AI model 😀
ikr, why does that happen
laurie isn't real she's ai generated
definitely a glitch in the matrix 😄
@@zapz because he saw her recording lol, bro saw an opportunity and took it
He was trolling.
operating systems developer here, just discovered this channel with this video and I'm loving what you do! :D I loved the predictions.
I like this format
a common technocrat viewer, if that's what u can call us, finds this future prospect content very appealing. Sure, catch that train!!!
The prediction I have for 2025 is Laurie will continue to bring us the best content
I have a completely different prediction on C++ memory safety. I think the compiler getting smart enough to match Go and Rust is way more likely than C++ developers migrating to Go and Rust.
unfortunately the ISO C++ 26 committee seems divided on memory safety and the proposals to address it.
it's not gonna happen. More likely C++ devs will migrate to Rust. Because C++ is so clumsy and messy, it's better to ditch it altogether and start from scratch.
C++ is insecure by design, and no one has the desire and capacity to make it as secure as Rust. You have to discard everything, start from scratch, and build the very foundations of the language with security in mind. It can't be done with C++, impossible.
@@VapuR8 Is C/C++ clumsy and messy in all applications? If you're doing linear algebra, there's not really that much to go wrong in either language, and when the same subroutine is called one billion times, it does make a difference how efficiently you implement it. In those situations, while Rust is probably fine, I actually find C a lot more ergonomic, especially when factoring in AVX kernels (which are still in their early days in Rust in some ways). Shaders also get a bit messy in Rust, last I checked.
@@novantha1 Don't put C and C++ together like "C/C++". C is not clumsy and messy; it is an elite foundation language. C++ is indeed clumsy and messy. But I hope for C to be replaced by Rust, because C is legacy and doesn't meet the modern, higher demand for memory safety by design.
Do you think we might see a fully Rust-written Linux kernel in the near future?
not true at all, C++ will get memory safety soon...
Assembly was key to the DeepSeek R1 optimisations, thank you for continuing to spread your love of code.
20:34 Definitely agree with you here. In my free time I use Ghidra to reverse-engineer games in order to create mods. While there are a lot of times where accuracy is key to understanding what is going on, most of the time in Ghidra is wasted just working out, even roughly, what the hundreds of functions I'm sifting through in search of certain functionality actually do. A more legible approximation in these situations would be highly desirable.
Can't fathom the level of autism required
What Mods
It's a double-edged sword. LLMs got me tricked a few times. The excuse could be that they lacked context (I can't give them the whole project, so some otherwise-correct solutions just didn't work). But sometimes they were clearly wrong, making errors in quite fundamental stuff (both general and language-specific).
In conclusion, some might not be ready to have a hallucinating AI even remotely close to their product, because of its non-deterministic nature and all the possible drawbacks.
Same, or prodding data to see what format it's actually stored in. It can help spot patterns sometimes.
Definitely! It could be implemented as a collaborative feature between the user and the LLM. You would select a portion of the intermediate representation, click a button, and receive a suggested completion. If you're not satisfied with the result, you can retry or adjust the temperature to make it more or less "creative." LLMs already outperform most humans in translation tasks, and since reverse-engineering is a form of translation, this tool would clearly be very useful.
The term "legacy PHP code" hurt my soul. I was writing code before PHP existed.
Don't worry, I'm still on Assembly, debating on migrating to C. LOL
I like calling every language I know “COBOL OF THE FUTURE!”
Embrace it, my fellow Fuddy
You wouldn't believe it, but I still have legacy Turbo Pascal, QBasic code somewhere...
It totally made my day to discover that you're secretly a train nerd. If you do a guest show with Adam Something or the Well There's Your Problem team I think I would lose my mind.
I'm waiting for the footage with the head mounted fisheye lens camera
I learned so much from this video. I'll be coming back for more. Thank you!
I watched one of your videos many months ago, but did not get as much out of it as I did from this one. I have always been interested in Reverse Engineering as a way to learn how to build new things by emulating existing or older things, and for resurrecting old technology.
Laurie you are brilliant. You are a great example of why women must continue to go into technology from an early age. Your mind is well-organized and developed.
You did a great job of combining many video cuts into a seamless experience for the viewer.
Keep up the great work !!
Supply chain attacks are also a topic in LLMs, when you try to manipulate the training data that is gonna be used to train your AI models. I did my master's thesis on reverse engineering binaries using transformers. It's still gonna be challenging for obfuscated programs, but I see great potential here too. If interested, I might upload my presentation soon; it's like 2 hours.
will you notify here once you upload please?
Ok we are interested to see where data security is going
Please notify us on upload
interested
Love this video! Nice to see Seattle too! Love all the topics discussed.
I love this walking talking format, I love walking through city scape and thinking, nice to take a walk with Laurie
I like seeing other cities.
Her bf is almost 7 ft tall. You will never date her.
It's super pleasant to listen to your talking. You form nice, understandable and clear sentences about complicated topics - in real time, no script. That's pretty rare.
If you count Alpine as a major Linux distribution then it already officially supports RISC-V as of latest release. Only 2 more distros to go :)
Omg i just found my favorite side of youtube, please, please keep this work going!!
Every topic. I'm surprised other people talk like this
Since you literally asked, here's my unqualified and poorly thought out thoughts:
- I'm too far removed from the embedded space to say anything useful about RISC-V itself. My impression from a distance is that most RISC-V fabs are Chinese. The fact that China has an offensive cybersecurity strategy aimed at 'the West' could make key industries in the USA and EU hesitant to adopt RISC-V in the near future, until fabs in more 'Western'-aligned countries pick it up.
- My expectation is that, at least until 2030, neither Shor's algorithm nor Grover's algorithm will be successfully applied for practical attacks on cryptographic primitives that, in early 2025, are commonly considered "good" for use in any facet of (D)TLS. Meanwhile, since NIST has already selected some PQC primitives and work is underway to build them into TLS libraries, they will be widely in use by 2030, especially in the industries that pay so much lip service to its importance but don't contribute in any way to the open source projects that do the actual work of implementing them.
- I don't expect large open source projects, such as the Linux kernel, or Chromium or Firefox, to make concerted efforts to un-C or un-C++ their codebase. I do expect memory safe programming languages to outpace C and C++ in popularity, to the point that it becomes unsustainably difficult for companies that have large proprietary codebases in memory unsafe languages to find workers to maintain, let alone develop, their product. Those companies will not get rid of their legacy code, as it won't necessarily kill those companies, but it just ends up making things more expensive, which will ultimately be paid for by end users and taxpayers.
- I'd be a bit sad if this would be the fifteen minutes of fame for Rust :') But maybe that gives it some focus time to mature.
- I don't know enough about the state of compression to say anything useful about it.
- I'm entirely too salty about LLMs to say anything useful about it.
- I expect that SBOMs will prove to be unhelpful in the next three Log4j-sized vulnerability crises, due to a combination of a low quality of identification of dependencies and the difficulty of including the use of SBOMs in operational processes. If this turns out this way, the infosec community will ridicule SBOMs to death rather than trying to fix it.
- I have seen papers, like a decade ago, where the researchers used machine learning to do various interesting things with reconstructing code, but more in the Copilot-ish autocomplete sense. The machine learning was a lot simpler than LLMs (it has been a while since I've gotten into the details of it, but it used conditional random fields, I believe?). My hope (not prediction) is that the LLM hype will implode, people will rediscover simpler models that are easier to explain and build guardrails around, which would then result in also making it easier to fit it into a working process where precision is key (such as reverse engineering).
She asked for your engagement to boost the algo. You could've shared anything. A copypasta, your favorite cheesecake recipe, etc. It doesn't matter
Thanks for this. My linkedin will be eating good for the next several years
@@BirdProm but giving an actual answer is much more interesting
People really underestimate how inferior of a language Rust is compared with C++, too. New code will continue being written in C++ for the foreseeable future.
Raspberry Pi already released a hybrid ARM/RISC-V part last year: the Pico 2, featuring the RP2350, an updated RP2040 with more of everything, including ARM TrustZone-powered secure boot and, supposedly (unless locked down in secure mode), a software-selectable option to boot RISC-V cores instead. There was a very fun talk at 38C3 about how well that particular choice turned out, and Raspberry Pi has a long blog post about it too.
So I guess my prediction is that any plans they might have for a mainline hybrid ARM/RISC-V board will be delayed while they take what they learned back to the drawing board.
It's crazy: while at work I remembered that Laurie hadn't posted a video in a while and was wondering when the next one would come out, and here it is!
Definitely think you're spot-on with Zig. Also, while I'm actually not a fan, I think seeing a AAA studio implement AI dialogue will likely happen this year as well, if not next. Not all topics here are exactly in my wheelhouse, being just a game dev, but it's really nice to hear predictions regarding parts of the industry I'm not entirely familiar with.
Love this conversation! I've been learning CS for about the last year, and honestly, this single episode has such solid content, realistic and viable. Thank you Laurie!!
6:08 - It was driving me nuts trying to figure out what city you were walking through, because I could've sworn it looked like Arlington County, VA, but the reflection in that window cleared it up pretty well.
Is that the Seattle space needle? Because the area was really giving me Vancouver vibes, but nothing was recognizable
@@mr.hi_vevo414 Yeah it's Seattle. I think she started out the video in the Belltown area if I was to guess.
Beautiful Seattle background while listening to awesome CS predictions
So, I had no idea what risk 5 was :) Turns out I was also "spelling" it wrong so, if anyone else was wondering:
RISC-V (pronounced "risk-five") is an open-source instruction set architecture (ISA) that is gaining traction in the Linux world. Unlike proprietary ISAs like x86 (Intel/AMD) and ARM, RISC-V is free and open, allowing anyone to develop processors based on it without licensing fees.
Laurie, thank you for not having background music in this video.
Someone needs to make a pure RL training pipeline for an 8B LLM that takes C code, compiles & decompiles it, then has the LLM predict the original code from the decompiled pseudocode, with descriptive symbols. The LLM-generated code could then be compiled, and the intermediate representation compared to the original for symbolic equivalence. (A rough sketch of that reward loop is below this thread.)
Yes please
Use thinking-time-based translation with tool calls, and it's not 8B but 800B, and it costs 10 million
sounds good and straightforward to do. i wonder how good it can get
why decompile it before predicting? Just use the machine code for the prediction.
@@ELYESSS We'd blow up the context window too quickly with pure ASM for it to be useful. Large contexts such as 1M tokens require too much VRAM at the moment to be practical.
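A rough, hedged sketch of the reward half of the pipeline the top comment proposes. Real tooling would drive Ghidra headless for decompilation and use a proper symbolic-equivalence checker; here gcc/objdump stand in, and matching disassembly text is only a crude proxy for equivalence:

```python
import pathlib
import subprocess
import tempfile

def compile_c(source: str, out_path: str) -> None:
    """Compile with optimizations and stripped symbols, mimicking real targets."""
    src = pathlib.Path(tempfile.mkdtemp()) / "unit.c"
    src.write_text(source)
    subprocess.run(["gcc", "-O2", "-s", "-o", out_path, str(src)], check=True)

def disassemble(binary: str) -> str:
    """Stand-in for the decompiler step; a real pipeline would drive Ghidra headless."""
    result = subprocess.run(["objdump", "-d", binary],
                            capture_output=True, text=True, check=True)
    # Drop the header lines that embed the file name, so equal code compares equal.
    return "\n".join(result.stdout.splitlines()[2:])

def reconstruction_reward(original_src: str, model_src: str) -> float:
    """RL reward: the model's C must recompile to code matching the original.
    Identical disassembly text is a crude proxy for the symbolic-equivalence
    check the comment proposes."""
    compile_c(original_src, "/tmp/orig.bin")
    try:
        compile_c(model_src, "/tmp/cand.bin")
    except subprocess.CalledProcessError:
        return 0.0  # candidate doesn't even compile
    return 1.0 if disassemble("/tmp/orig.bin") == disassemble("/tmp/cand.bin") else 0.0
```

The model call itself is deliberately left out: the reward function is the part that makes this RL rather than plain supervised fine-tuning.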
Where has your channel been all my life?! Slay, girly.
Come to Europe, we have trains. Lots of trains. Fast trains!
They are always late
@@Mastercar77 That's OK, if you miss one, the next one is in 4 minutes. 😛
(Yeah, OK, depends on the trains we're talking about, but... I distinctly remember a moment of realizing that I'd _just_ missed (like, saw it leave) an U-Bahn in Berlin, but the next train was due (and arrived) in 4 minutes. Granted, only at certain times of day, but...)
Fast trains that are always late or old trains that are always even more late. I appreciate TGV but a majority of European trains are overrated.
Trains and robbers are mostly all we got.
@Mastercar77 Maybe, but that's less of an issue when you have one every 10 minutes rather than one every 10 days like in the US!
Great video and nice walk. I'd like to thank you for making videos like this, without background music, despite what you say in the generative AI section of the video. ;]
Framework is releasing a RISC-V dev board for their laptop. Would love to get one in my laptop in the next iteration.
thanks for sharing your predictions!
5:17 btw why does this man's movement look like AI-generated vid haha
it looks intentional lol
doesn't look off to me, he just intentionally started walking backwards with her after he passed her, no idea why but it makes me think this whole video is somehow CGI 😂😂😂
yeah, but the person he was with looked at him like a flesh-and-blood example of "what the fuck are you doing?", so I think it's real. :D
He heard an attractive lady say "C++" and it short circuited his brain for a sec
Pretty sure he saw someone filming and thought "wouldn't it be funny to do some Ministry of silly walks and mess with them when they go through it"
You've got your thoughts well organized on these topics. Great job of extemporaneous speaking !! I was only in Seattle for a few hours once and did not get a chance to look around. This is a great way to tour the streets. Thank you !! 8^).
Strong Luke Smith vibes today. Also, very unsurprising that ARM assembly girl is also a train girl.
Luke Smith... I have not heard that name in a loooooong time
Bro tries to autism-shame a girl who is far above his league in all categories that matter, and this fact triggers both his sensitive little heads... .
It was very pleasant to take a walk with you, although it was a little disconcerting walking backwards. :) Good predictions.
Cool as usual.
how'd i not know about this channel...this looks like Seattle, my old town, so now i'm already hooked. 😭
ok it totally is. man.
ah and then we get the space needle! along with what looked like 4 different hairstyles, haha love it
Counter prediction: C++ is not going away in any foreseeable future. Memory safety is not free and when you want to squeeze out the absolute most out of your hardware, you don't pay for what you absolutely don't need. C++ is evolving, new standards offering increasingly "modern" syntax and features. It will be fine.
I hate C++, but there are people who seem pretty confident using it, so if it works for you... What we have in common is that there's always a need to squeeze every CPU cycle out of code, so moving towards modern languages is not the rule.
Yeah, matter of taste
It's too bloated. They need to cut the language down by half. Remove all the old unnecessary stuff. Make it simpler and more straightforward.
@@vectoralphaSec what about C?
@ There is no "unnecessary stuff". Just stuff you haven't learned / had to use yet. This is a low level language. In many cases higher level abstractions are enough, but if you are doing something that C++ is actually required for, you may even go as low as asm with C++.
Probability of an AAA exec trying to push LLMs into a game: 100%
Chances of said game shipping with the feature: 50%; it's down to workplace politics
Chances of characters becoming memorable/notable: 2.5%
Chances of issues because these places will kill the "live service" aspect when they need to reduce costs, stopping people from playing the game in the future: 100%
The name Sam Altman turns me off wanting to read the post you linked
You don't have to like him to learn from his expertise.
@@deusexaethera he doesn’t really have any.
kind of solid predictions or predictive analysis, great video too for a change outta the studio. cool
Inb4 Laurie is the next Ray Kurzweil
such a good, compact and succinct articulation. love it
My favorite part was the train as well.
I participate in CTFs often, and I pretty much always put my Ghidra output into ChatGPT to get an idea of what the code does. It does usually get specifics wrong (for example, it never understands string initializations), but it gives a much better idea than parsing through the local_XXX mess
zig over rust any day
however yeah, I love these predicions!
Thanks for the advice
Have a great year
The guy at 5:18 threw me way off. Laurie is in the matrix.
😂 Thanks for pointing it out. ai slop guy.
My dream is a rusty syntax language with a GC. People are doing UIs with Rust, but it seems like you really need to be able to create pointer/ref cycles to make ergonomic APIs.
She made an outside video just to prove she's real😭
ah, I miss Seattle so much. Thanks for taking a walk around, Laurie.
Super fun video with all ‘My Favorite Things’!
I don’t know, I don’t care, I hate computers I wanna run away naked in the forest
yeah computers are too scary for me, i just wanna crawl up under a rock and hibernate for like 20 years. or i could look for my next developer job, choices amiright
Good news, you can go do that
@orterves Where at without getting arrested?
@@VoiceDisasterNz oh I didn't say you wouldn't be arrested
07:20 train nerd, I love it. Walking with Laurie is my new favorite format
Her 7 ft bf does not approve
Was that guy eavesdropping a little at 5:18?
lol yeah that was weird 🤣
He's a giga chad
he heard 'C++' and was satisfied :D
Very fun video to watch. I also really loved the tour of the city, I haven't been there in years and I'm debating heading back there
what's your camera?
Keep the videos coming, Laurie.
To me, most of these feel more like Software Engineering Predictions than CS Predictions, since most of this isn't a very foundational shift.
Colloquial use of "CS". Also, does her channel focus on CS or SE more?
Eh. Software engineering is just applied computer science.
Really enjoyed the video, please bring more of it
LaurieWireless isn't real.
She isn't real.
😭
Excellent and thought provoking video. You just earned a new sub.
RISC-V will take off not just because of pre-compiled binary support becoming greater, but that a large player decides to make a "killer app" product that incorporates it. Right now, and for a while moving forward, it's nothing but a tinkerer's paradise.
A hardware specific killer app? What does that even look like for a general purpose CPU?
like the good ol days
So maybe it's good that it doesn't take off?
@@Sven_Dongle The point isn't that the product is special, it's that it launches mass production and more adoption. Monkey see, monkey do. More businesses will see the viability and the ecosystem will become useful in the commercial space. It's the same with ARM64 in the server space. Now that Google and Amazon are using it, it's going to make more sense to have ARM precedence in the data center. You will see AMD releasing ARM cores in their x86-64 CPUs by the end of the decade.
SBOMs only get you part of the picture, but they're the first step. You also need to apply configurations, such as a Linux kernel .config, and take build flags into consideration.
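To make that gap concrete, here is a minimal sketch of what SBOM matching can and cannot do, assuming a CycloneDX-style JSON file and a hand-maintained vulnerable-versions set (a stand-in for a real feed such as OSV):

```python
import json

# Hand-maintained stand-in for a real vulnerability feed (Log4Shell-era releases).
VULNERABLE = {("log4j-core", "2.14.1"), ("log4j-core", "2.15.0")}

def flag_components(sbom_path: str) -> list[str]:
    """Flag SBOM components matching known-bad name/version pairs.

    What this cannot see is exactly the gap the comment above points out:
    kernel .config, build flags, or whether the vulnerable code is reachable.
    """
    with open(sbom_path) as f:
        sbom = json.load(f)  # CycloneDX JSON keeps dependencies under "components"
    return [
        f'{c.get("name")}@{c.get("version")}'
        for c in sbom.get("components", [])
        if (c.get("name"), c.get("version")) in VULNERABLE
    ]
```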
8:51 I don't think your compression prediction (AV1 will be used by 2 major streaming platforms) will hold. The reason is hardware decoding. While there might be some AV1 hardware decoders in newer hardware, the vast majority of devices would need to use CPU decoding. I could maybe see that for low-quality video, but anything 720p and up is probably going to stay on h264 until all the old phones, laptops, office computers, etc. have been replaced in sufficient numbers. Wasn't there another licence-free video standard made for the web that compresses better than h264? What happened to that one?
VP9 and you are using it right now to watch this video LOL
@@leito1996 My GPU (GTX 1070) doesn't support either AV1 or VP9, AFAIK. Neither do most AMD GPUs. And as a Linux user, let me tell you, software decoding is not the same. AFAIK YouTube still offers h264 for all videos for compatibility reasons. While this video seems to have actually been served to me in VP9, others still show avc1 (h264).
@@Maxjoker98 Sure, on desktop with older cards it might not be supported, but that is niche. Take any mobile device or smart TV (even 5+ years old) and it will stream VP9 on YouTube, Netflix, and most other services. And it will stay that way until AV1 takes over. H264 will be dead in a few years, hopefully. So, asking what happened to VP9: it succeeded on the market :)
@@Maxjoker98 YouTube only uses AVC if VP9 downloading fails
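For anyone who wants to verify claims like these on a downloaded file, ffprobe reports which codec a stream actually uses; this is a minimal sketch using standard ffprobe options, with the file name being a hypothetical example:

```python
import subprocess

def video_codec(path: str) -> str:
    """Return the codec of the first video stream, e.g. 'h264', 'vp9', or 'av1'."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

print(video_codec("downloaded_video.webm"))  # hypothetical local file
```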
Thanks for the walk in thought and city. A good mind-opener :) Need some time to reflect, though. Will be interesting to look back on 😊. Have a lovely day
Not going to front. I clicked for the pretty face. I’m human. But I did not expect to hear half an hour of highly intelligent talk about a variety of topics I’m interested in and familiar with (as a layman).
This video ended up being one of the biggest surprises I’ve had in like 2 years. Fun to watch all the way through.
This is a great video. I appreciate the honesty and insight.
Interesting talk. Between this girl and Peter Zeihan on geopolitics, I feel like I get a teeny bit smarter with every video.
But why? She doesn't even base her statements on facts that rigidly lead to her predictions.
I think C and C++ will also continue to be relevant for programming new codebases for microcontrollers and embedded systems for a long time. Anything where the developer has total control over what processes will be running on the target system. Especially when there's just one process, and it's the one they're writing.
So, developing dedicated devices like synthesizers and samplers, battery controllers, controllers in peripherals, etc.
You should be a fashion designer instead of a security researcher
why not both?
I have been using LLMs to evaluate other LLMs based on their outputs.
I've been able to identify file architecture as well as library vulnerabilities using this method.
Nice work! I like your thoughts on LLMs and method naming. They'll need to get a lot better than they currently are, for sure, but that's why it's a prediction! Spot on with asymmetric crypto. The factorization problem is going to be much easier to solve.
How did it take me so long to find this channel
Can this be a more regular 'thing'? Loving the relaxed dialogue while seeing the sights and sounds (or wind) of Seattle 👍🏻
Can't wait for the end of the year to see the actual outcomes and compare them with the predictions.
It’s cool that she has the knowledge she has including her fore knowledge of Assembly Language. I feel it’s easier to trust predictions from developers who use harder skill sets that are in higher demand.
They see something that we’re missing because they had to work under a different scope.
Excited to see you update your weights & heuristics post-back-propagation, Laurie:)
She is excited to spend time with her 7 ft bf
@MrSoy_ Appreciate you looking out for me, MrSoy bro! Thank you😊
I have no idea what she is talking about, but I really appreciate her personality
What a nice voice 🥰
But ye, those are actually pretty well thought out points.
You're a real inspiration Laurie, I hope you know that! 😊
I really like this format, please do this more often, for us who don't touch grass it kinda helps watching it lol
When u were talking about cryptography and supply chain attacks, I could see ur smile widening, and I was like 'she's just like me fr'. I learned some stuff, I agree with you on ai music, supply chain attacks and LLMs explaining code functionality.
@10:20 Microsoft Edge A.I. upscaling of video graphics: I just tried it, and then read about it, since my laptop showed no change when I turned the enhance-video setting on.
"The device has one of the following graphics cards (GPUs): Nvidia RTX 20/30/40 series (with Nvidia driver > 528.24) or AMD RX5700-RX7900 series GPUs.
The video is played at less or equal to 1080p resolution.
Both the height and width of the video are greater than 192 pixels.
The video is not protected with Digital Rights Management technologies like PlayReady or Widevine. Frames from these protected videos are not accessible to the browser for processing."
Love your informative video. Thanks a lot.
very good video! thanks laurie
This is great, thanks Laurie. I like that individuals do this, as I've seen predictions done on podcasts like @JupiterBroadcasting Coder Radio, Linux Unplugged, @LateNightLinux, and even @BadvoltageOrg, but yours is the vlog version of the podcast ones. Let's see how many you hit on the mark!
21:19 - a lot of information, a lot of topics, and a lot of memories you prompted by wandering around in places I used to wander around in... thank you for the trip down memory lane. :) And the predictions! I look forward to a video in a year where you reflect on how you did! (You'll be doing such a thing, right? That's my hopeful prediction, anyway. :) )
There are a few small games using LLMs already, but I'm not sure if we're going to see a lot more games use this because of the context window and cost of customizing the model into a character. There are ways to play around these limitations. For example only giving the character a very short time on screen, giving the character a memory loss condition, or the character being an AI in the game as well.
I don't know how to explain it, but your voice feels like the most pleasant human speaking voice I have ever heard
I think server-hosted LLMs for NPC dialogue generation in video games would only work for subscription-based titles, given the running costs of continuously generating new text. This potentially means more game-as-a-service type of titles and maybe an increase in subscription costs. Alternatively, running something like Phi-4 locally under the hood would not be too taxing on modern GPUs and avoids the issue of added costs for the developer/publisher that are then passed onto players.
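A minimal sketch of the "memory loss" workaround the two comments above describe, with the model call left as a hypothetical hook into whatever local runtime (llama.cpp, etc.) you would use; the deque cap is what keeps the prompt, and therefore the cost, bounded:

```python
from collections import deque

def local_generate(prompt: str) -> str:
    """Hypothetical hook into a locally hosted small model (llama.cpp, etc.)."""
    raise NotImplementedError

class NPC:
    """NPC whose 'memory loss' is literal: only the persona plus the last few
    exchanges ever reach the model, keeping the prompt short and cheap."""

    def __init__(self, persona: str, memory_turns: int = 4):
        self.persona = persona
        # Each turn stores two lines (player + NPC); older lines simply fall off.
        self.memory = deque(maxlen=2 * memory_turns)

    def say(self, player_line: str) -> str:
        history = "\n".join(self.memory)
        reply = local_generate(
            f"{self.persona}\n{history}\nPlayer: {player_line}\nNPC:")
        self.memory.append(f"Player: {player_line}")
        self.memory.append(f"NPC: {reply}")
        return reply
```

The same bounded-window trick works whether the model runs locally or server-side; it just matters more when every token is billed.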
this is cool, you should do this more often. 👍😁