Apple: OK, we have the new iPhone.
Analyst: That's the same one. The internet will make fun of us if we don't claim something is new and revolutionary.
Apple: But we already have the manufacturing set up.
Marketing: How about we add AI!
Interesting. tbh I don't get the verification part. Can the hardware really be verified? If Apple decides to replace the isolation hardware, would that measured-boot thingy notice? And what if I add hardware after boot? But say what you will about Apple, for a few years they have really been advancing user privacy on their devices, and not solely for marketing (more for "selling" privacy). I think the Apple ecosystem is currently one of the best and easiest-to-use options if you don't want to go into tinfoil-hat risk-profile territory.
As security becomes more global and complex, the authority of those capable of reaching through it increases. If the only one who can decrypt the message between the general and the king is the messenger, then the messenger is the true king.
Chatbots are cancer. Nothing is secure, especially when it lives on hardware you don't own or have physical access to. This is just another gimmick we don't need.
Am I the only one who smells this? A model running locally only means the computation is done on your device. They still have the option to grab the processed data afterwards. Win-win for them.
They can, but it's very likely that it would be far less profitable for them to do so. Apple is the only major company with an at least somewhat solid security and privacy track record, which gives them a pretty good market advantage. Also, they'd probably get sued into oblivion.
I will be a little controversial with this comment, but the invention of wheelchairs would be considered enshittification by that metric (2:10): yes, it's not doing something you couldn't already do, but it's helping you do it more efficiently, especially for those who struggle with memory. In my case I have ADHD, and it's not even that I have bad memory; it's that my memory is selective AF and needs triggers to pull data. I'm constantly getting to things by leveraging "anchors" in my memory, like "I was texting xxx at the time, let me review the conversation quickly to remember what I intended to do". That said, I'm sure there are going to be tons of useless features in general, but I wouldn't always dismiss them as useless; they are paving the way for the things that are going to be extremely useful.
Loved the video. But I find it interesting that these privacy assumptions are usually only brought up when discussing Apple products. Feels like we're just buying into their marketing schemes without any verification.
It's almost as if people continuously take stories from dystopian books and movies as a model instead of the warning they were meant to be. These are dark times. These are dark times indeed.
Wouldn't encrypting/decrypting an entire app/process and its associated memory/storage make things slow, because you'd be encrypting and decrypting everything in real time? Or is it a one-time encryption when you enter the "Realm" and a one-time decryption when you exit? Is this a known tradeoff between speed and security?
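If you want a rough feel for that cost, it's easy to measure the software version yourself. A minimal benchmark sketch using AES-GCM from the third-party `cryptography` package; treat it as an upper bound, since hardware memory encryption runs in dedicated circuits in the memory controller and, as far as I understand, costs only a few percent:

```python
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

aes = AESGCM(AESGCM.generate_key(bit_length=128))
page = os.urandom(64 * 1024)   # pretend this is a 64 KiB memory region
nonce = os.urandom(12)         # nonce reuse is acceptable ONLY in a
                               # throwaway benchmark, never in a protocol

t0 = time.perf_counter()
for _ in range(1000):          # ~62.5 MiB encrypted and decrypted
    ct = aes.encrypt(nonce, page, None)
    aes.decrypt(nonce, ct, None)
elapsed = time.perf_counter() - t0

print(f"{62.5 / elapsed:.1f} MiB/s encrypt+decrypt round trip")
```

So the answer to "one-time or continuous?" is, as far as public TEE designs go, closer to continuous: every memory transaction is transformed, but by hardware designed to keep up with DRAM.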
So... what's stopping the software from reporting what it is operating on? What's stopping Apple from decrypting the network data before it gets to the machine? Without more info, this is just one layer of security when we need another ten layers defined and explained.
As a bloke who had undiagnosed sleep apnea for over 10 years (fun fact: you don't need to be fat to have it), a bunch of new Apple Watch owners are in for a surprise.
Except for hardware bugs, which various ARM manufacturers fell victim to just like Intel and AMD did a while ago. Any hardware exploit means you have no security. Thanks, but I think I'd rather run things locally, where you'd have to have my device, than share CPUs with other users and apps.
you know what's REALLY intelligent? learning to code at lowlevel.academy (20% off btw) :D
Do you really believe Apple would do something like that? Sounds too complicated to market to normies, and also very unprofitable if they can't get any money from their AI.
Why wouldn't they just use homomorphic encryption?
I will become a member when I get money later this year.
no, not really, no
You know what's more intelligent? Learning to code on your own without paying anyone. I taught myself Assembly, BASIC, PASCAL, C, C++, and Visual C before I entered university. I paid for nothing. In my day we used books, magazines, and online material; when the Web was developed we had even more material for learning. Now you have videos, access to pirated books, etc. I recently picked up Python and MicroPython.
Today most people will do well with just shell script or PowerShell. Seriously, don't pay for anything, at least not until you have an idea of what you're doing. Join groups for programmers.
imagine debugging this
I will not and do not want to.
I had plenty of debugging to do designing a virtual filesystem, let alone something like this.
imagine hoarding zero-days for this
@@zeus000.00 real
Damn, didn’t even think of that. No way in hell am I trying to debug that without a debug vm (that could bypass the whole concept while no one is the wiser)
it's the end of 'hotfixes' for sure. I'm sure the dev environment would not be set up like this... making this a devops problem ultimately.
Pretty sure the "memory is encrypted while it runs and at rest" part at 6:41 or so is a bit of an inaccuracy. The memory is isolated when the application runs, which is not the same thing. This isolation protects against things like cache snooping, but the actual execution is done on unencrypted data.
Put another way, if you wanted to execute on encrypted data, then you would have to have something like a generic framework for on-the-fly application of fully homomorphic encryption and it would come at a significant performance penalty.
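To make "execute on encrypted data" concrete, here is a toy of the homomorphic idea using Paillier, which is only additively homomorphic (full FHE also supports multiplication and is orders of magnitude slower, hence the performance penalty). The tiny hardcoded primes are for illustration only; real keys are 2048+ bits:

```python
import math
import random

p, q = 17, 19
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard generator choice
lam = math.lcm(p - 1, q - 1)    # private
mu = pow(lam, -1, n)            # private (this simple form works for g = n+1)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2          # addition performed ON CIPHERTEXTS
assert decrypt(c_sum) == 42     # 20 + 22, computed without ever decrypting
```

Even this toy hints at the cost: a single addition turns into modular arithmetic over n², and real FHE circuits are vastly heavier still.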
And that would have other problems like reuse.
@@Tristam29 This was my takeaway as well; there would need to be hardware acceleration for handling encrypted data for all computation. I do remember such architectures being proposed, but I don't think any have actually been taped out.
@@quantum5768 Yeah homomorphic encryption is still too slow to do anything with
Thanks for the clarification. I was already wondering about the feasibility of computing on encrypted data, given how expensive it is.
@@quantum5768 Yeah, you're right. Right now, it's not really feasible. We’d need more than just light and electrons to handle encrypted data efficiently because of basic limits like the speed of light. Even with fancy encryption methods, keeping performance up to par is a huge challenge with our current tech.
As an AI researcher, I can say with confidence that the real benefits of AI are advances most people will never hear about. It's definitely not what the big tech companies are hyping up and shoving in every application.
What’s ur opinion on dev replacement
As a developer, I've yet to find a use for AI that actually makes me interested in it. It's far too unreliable, even in basic tasks that it's supposedly good at. Even the best "help you code" options fail to generate basic boilerplate code.
I've seen ChatGPT generate some pretty good bits of code, but it fails at anything larger in scope, where it needs to consider the broader design of a system. The more concerning thing is how many developers rely on it; take it away and they are completely useless.
Do you have any specifics in mind?
@@Daktyl198 I've found it decent enough for learning things I don't already know, since you can ask it to be verbose or to explain things, but once it's about something I already know, it's much faster to just write it myself from the beginning.
Realm has a LOT of layers of "just trust me bro" for me to trust that it's completely private, especially when it comes to encryption.
@@zimmerderek chatGPT was just storing user prompts in plaintext on Mac like 2 months ago 😂 this is a great start
If that's your stance on this, then what could anyone possibly do to earn your trust?
@@mflboys Open source all of it.
@jeremy-bahadirli open source is the only way to go forward with ai. Closed source ai will have biases and security concerns and privacy issues.
@@alwaysquestionyouropinions1119 open source ai also has biases, mostly the biases of the average tech bro. Which looking at the "source" won't tell you about, anyway.
I don't understand what prevents the server from just giving back a fake code signature and then running other code that exports data. Given that this is not open-source software, isn't the only piece of code on the client that this whole infrastructure rests on essentially server.checksum == magicstring?
I don't think the infrastructure the server code runs on was ever the privacy issue; the issue was (and still is) the server code itself.
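Nothing, if the client really just compares a string the server reports about itself; that check is trivially spoofable. What remote attestation is supposed to add is that the measurement is signed by a key fused into the silicon, which no software on the server (rogue admin included) can read. A minimal sketch of the difference, with made-up names and an HMAC standing in for the real asymmetric certificate chain:

```python
import hashlib
import hmac
import os

# Naive scheme: the server self-reports its checksum.
def malicious_server_checksum(expected: str) -> str:
    return expected  # "server.checksum == magicstring" passes every time

# Attestation scheme: the measurement is signed with a key only the
# hardware holds. (HMAC for brevity; real chips use a private key whose
# public half the vendor publishes, so anyone can verify.)
FUSED_KEY = os.urandom(32)   # burned into silicon, unreadable by the OS

def hardware_attest(running_code: bytes) -> tuple[str, str]:
    measurement = hashlib.sha256(running_code).hexdigest()
    tag = hmac.new(FUSED_KEY, measurement.encode(), "sha256").hexdigest()
    return measurement, tag  # a lying server can't forge `tag`
```

Your second point survives this, though: attestation proves WHICH binary is running, not that the binary is benign. If the code itself is closed, you've only authenticated a black box.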
@@eliamaggioni8002 it's probably similar to private and public keys.
If it was using FHE, that would be secure I think, but FHE is still much too slow to be feasible for this kind of thing.
If one knows that some hardware was constructed and initialized as advertised, such that the only way to get code and data into it is by encrypting it with a public key whose private key lives inside the hardware and cannot be retrieved, and the only way to get data out is if the code inside sends it out, and the code you send only emits messages encrypted with your public key,
then you wouldn't need to continually check that it isn't being tampered with?
But actually knowing that the machine was constructed and initialized as promised... I don't know, that seems like it would be hard?
Also, ensuring that the hardware is actually secure, such that the private key cannot be extracted..
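For what that "encrypt to a key that only exists inside the hardware" flow looks like in miniature (RSA-OAEP from the third-party `cryptography` package; in practice you'd wrap a symmetric session key this way rather than the payload, and the keypair would be generated inside the secure element, not in Python):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Inside the sealed hardware: the private key is generated and never leaves.
enclave_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
enclave_pub = enclave_key.public_key()   # the only part the world sees

# Client side: anything sent in is readable only inside the enclave.
request = b"summarize my notifications"
ciphertext = enclave_pub.encrypt(request, oaep)

# Enclave side: the one place the plaintext can exist.
assert enclave_key.decrypt(ciphertext, oaep) == request
```

Which is exactly why, as you say, everything reduces to whether the hardware was really built and initialized as promised.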
Not claiming to have an exact answer, but the whole thing smells like zero knowledge proof to me. Which is a concept that at least to me, is still a bit difficult to grasp.
@@Mireneye Well zero knowledge proof isn't really a proof, it's just saying "trust me bro" while using intellectual property as an excuse for not revealing the details.
You know nothing about digital signatures. They're perfectly secure as far as we currently know, and will be for the foreseeable future unless quantum computing makes a massive leap. Not defending Apple here, but just read a Wikipedia article or two if you want to learn how code signing works and how it prevents a device from running arbitrary code that wasn't signed with the secret private key. It's really the most fundamental concept in cybersecurity.
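For anyone who wants to poke at the primitive itself, the mechanism fits in a few lines (Ed25519 via the third-party `cryptography` package; this demonstrates code signing in general, not Apple's actual chain):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

vendor_key = ed25519.Ed25519PrivateKey.generate()  # kept secret by vendor
device_pub = vendor_key.public_key()               # shipped in every device

firmware = b"bootloader v1.2"
signature = vendor_key.sign(firmware)

device_pub.verify(signature, firmware)             # passes silently

try:
    device_pub.verify(signature, b"bootloader v1.2 + backdoor")
except InvalidSignature:
    print("modified code: refuse to run it")       # forgery is infeasible
```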
Some judge: Open it!
Just order NSA to provide the data instead 😂
It's a reasonable request in a lot of cases. It's just that "we have made it so we *can't*" is a much better answer than "we could, but we'd prefer not to because it would mess up our sales".
@@ImpactWench Apple Qaeda?
@@harounhajem7972 The EU just ruled Apple needs to pay €13 billion to Ireland in tax, ha. Apple is a tax-dodging fugitive.
@@sirrobinofloxley7156 Good. Enough technology in our lives is an impenetrable black box, and this one is going to replace the content of your previews and notifications, which might be important by themselves.
Open everything; let the community see how Apple decides what we are fed.
> 2:31 *speaks French* the "pièce de résistance"
> I don't know man I don't speak greek
that is what we call the joke my friend
@@AnonymousUA-camr-m8g or, living under a rock.
@@MarcusPHagen same thing
It's a joke crafted to elicit a response from people with an underdeveloped sense of sarcasm.
@@AnonymousUA-camr-m8g You unironically think Americans can identify other languages. Cute.
Perhaps Spanish, given the proximity to former Spanish colonies, but that's probably it.
ARMv9 CCA doesn't encrypt realm memory; it just prevents the hypervisor from accessing it via a bounce buffer, which is a cheaper alternative to AMD SEV-SNP/Intel TDX, which actually encrypt memory. The implication is significant. The main point of these hardware TEEs is to prevent rogue-admin attack vectors. CCA leaves a huge backdoor for DMA and physical memory analysis: people can just spray the memory with coolant, pull the modules out, attach them to a memory analyzer, and read all the content, which is a well-documented attack against hibernated laptops with FDE.
heh, "just"
Sure, but that would only enable really targeted attacks by someone with a lot of power/money. You can't just willy-nilly snorkel up all the data that runs through the cloud. You would have to kill the process at the right moment and take the system offline, meaning you would need to know exactly which hardware this specific job runs on, etc., etc. Of course nothing is 100% secure, but to me this looks like it reduces the usual state-sponsored attack vectors by A LOT.
@@EinTypOhneHandle since CCA is exclusively used for VMs, it's a solved problem to transparently migrate the target VMs and seize the target host without you knowing!
Assuming there are really no hardware-level attacks (big assumption), it seems like PKE is the only reliable way to ensure data gets into the realms securely. But then again, that means someone is holding a key. And when that key is compelled or stolen, why wouldn't this allow a MITM?
It sure as hell makes things more complicated, but I don't see how this architecture makes things so much more secure yet. It would be interesting to hear more about this.
if there's access to the hardware the memory is on, there are always attack vectors
The recipient public keys are managed by the Secure Enclave, so the keys cannot be easily duplicated or extracted.
How can we be sure that the intermediate keys are (and stay) secure? Look what happened with DigiNotar.
No amount of software can change the fact that if you control the hardware you control the software running on it. The rest is just marketing in the end
Then we accept futility when it comes to guaranteed privacy on internet-connected technology and adjust accordingly. If op-sec is deeply important (which it isn't for 99% of people), then go live in a Faraday cage or a hut in the woods (or even just don't buy the device). This tech makes it HARD to mine deeply personal information, which is good messaging for Apple branding and provides a counterbalance to Microsoft and Google.
@@HenryKlausEsq.Replying to criticism with "go live in the woods" is beyond reductive.
@@thesenamesaretaken How does the Faraday cage rank?
@@monacolulu sir, if this were true then you could not trust Secure Enclave or google’s Titan M security. However, I believe you can trust those technologies.
Yeah. Apple pushed the whole "we can't see what you're doing on iCloud Private Relay because the exit nodes are operated by Cloudflare, Akamai, Fastly etc" but to get the endpoint URLs for said exit node servers you had to go to an iCloud server and say "hey i'm John Doe, what exit nodes can I use?"
If it's Apple Silicon, how do we know Apple isn't just going to fake the attestation signatures? If it was Apple software running on an Intel server, it would take BOTH Apple and Intel colluding to fake the attestation signatures.
the NSA sees all “Realms”
Apple's (or any proprietary vendor's) SEV-SNP/TDX/CCA implementations can only be security-through-obscurity theatre hiding behind a complex attestation process if the workload is not open source, i.e., there is no way for end users to find out whether the encrypted workload does anything funny. The fundamental premise of confidential computing is that the owner of the workload knows exactly what's running inside a TEE through cryptography. Unless Apple open-sources the workload and lets everyone inspect it and hash it to verify its integrity, their PCC is ultimately BS. I hope Apple does the right thing, as it would have a profound impact on the entire SaaS ecosystem and dramatically boost the open source community. AGPLv3 for the win!
"a computer cannot be held accountable, so a computer must never make management decisions" -- IBM, 1970s.
Relevance?
@@drdca8263 7:51 where he points out "the Monitor that is making sure that the Realm Manager is managing correctly", which is (part of) a computer making management decisions
Corps will no longer have to pay large fines for breaches of trust/privacy every year; "it was the computer, not us".
Funnily enough, there's an instance of this right in Apple's advert! A message says something like 'feed cat 1 tbsp of XYZ', but the summary goes 'feed cat XYZ'. If you had a legitimate reason to assume a default amount, but this cat needed a different diet, the summary might get the cat killed.
I like chocolate
_Execution Level Zero_ sounds like a movie featuring Michael Scarn.
maybe even a sequel...Execution Level Zero: The Revenge of Goldenface
Apple: we have new cool insane features.
People: what features?
Apple: we don't know.
They are going to rewrite your email summaries in your notifications using AI, because reasons…
inaccurate comment
Now throw away your fully working phone 📱🤳🏼 and buy this overpriced phone that looks the same and sounds the same. Thanks, see you next year for another event 😂
@@jacobdalamb yeah, it feels like a bunch of AI bots ironically post their Apple memes on any video about Apple. This video is mostly about some cool new ARM tech that we should be more excited about, given how much of our life is forced to be in the cloud.
@@neebuandsocanyou7557 well i feel like ai is kinda dead already... Its kinda useless
I do not trust this one byte. The encryption, the verification, the implementation, none of it
At least they’re miles better than Microsoft with this (so far ig, also the bar is in hell)
Most people do though
@@rooodis456 they also had data breaches. Journalists don't love talking about these things because it's Apple, but they had them.
😂
It's secure, it's very secure.
Trust us. You have no choice.
I'm gonna go back under this rock I've been living, never heard about "apple intelligence" s**t there.
@@zxuiji too late, you’ve been freed from the rock.
@@MutantNinjaDonut nah, it's quite quiet under here :)
Did he just say I was living under a rock?
That's kinda rude...
:(
For the last couple of days, he added.
I was on the beach this weekend. And I don't use Apple products.
It’s a pet peeve of mine. Not everyone inhabits the same media ecosystem, so it’s very easy not to hear about a thing that for someone else is ubiquitous.
@@scaredyfish Exactly. The more I hear about new tech buzzwords, the less I care.
The keynote happened like an hour ago haha and is not really that important, kind of a weird thing to say in the video.
It's normal for people that live in a bubble to think the whole world revolves around their small social media echo chamber.
I agree about the step in the right direction and that no matter how secure something seems there can still be a hole, but the entire time you were explaining the architecture I was just thinking "What if they tell you all your data is going to this secure place, but actually send it to a regular old server?"
Whether it's the self-destructive desire of governments to have backdoors in software, or cutting costs (since regular servers will probably be in greater supply for a long time), or any other reason, I can imagine this happening. Is it ultimately just the company's word where this data is going?
This! All they need is plausible deniability and a well-polished excuse/explanation if/when stuff leaks.
I think they would get in trouble for that pretty big time. That's a little more than false advertising. It's even more than selling you a service and then refusing to provide it. It's selling you a service, pretending to provide it, refusing to acknowledge that you didn't provide it, knowing the consequences of not providing it, and refusing to accept the damages. The problem is whether the law is at all interested in prosecuting this, since that might go against the interests of the government, let's just say.
While I do appreciate the option to do cloud computing privately, my reason for wanting local-first (in my own computing, at least) is agency. I want to have the most possible control over my computing, so in the cases where I would actually use "AI" (translation, voice-to-text, etc.) I want the software running locally. I've seen some discussion about "CRDTs" for local-first document collaboration, and that seems intriguing.
True. But it's a good solution as we transition. We don't have the hardware for it at the moment, especially on mobile devices where battery life is crucial.
If the data isn't encrypted client-side with a trustworthy open-source system and processed in the cloud without being decrypted ("fully homomorphic encryption" is the term, IIRC), then this is still putting too much trust in the company that turned out not to delete your private photos when you told it to, and even accidentally sent them to other people...
But then how do you trust the monitor… How do I even trust the Realm, unless it just computes whatever I tell it to (which sounds like a vulnerability)? How do I know that the computer with the special design isn't also designed to automatically peek into the inner workings like he said would be difficult…
I can imagine cryptography with special properties to combat this stuff, but the fact that you need to verify the RMM means you need to trust something on this system in the first place.
I agree with you, but this is the problem with trust in the first place: ultimately you need a trust anchor you cannot additionally verify, unless he's, like, your uncle or something.
That is why FOSS lives matter
@@LowLevelTV It's good to know I'm not missing something, but I got the impression that the point is to avoid a trust anchor (with the server owners, at least). I suppose it moves the part you're trusting to a deeper level?
@@-ism8153 The implicitly trusted thing in this case is the CPU BootROM and the public key burned into eFuses, which the BootROM uses to verify the bootloader. Neither of these can physically be altered. This is what's called a "hardware root of trust".
Note that if the private key leaks (almost impossible in the case of Apple, but it has happened in other embedded systems) or the BootROM has some kind of arbitrary code execution vulnerability (highly unlikely, but very much possible; see the Checkm8 exploit), the whole security model completely falls apart and the device can be considered permanently insecure.
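A sketch of that chain in Python (fuse contents, key bytes, and the `signature_ok` callback are stand-ins; a real BootROM does RSA/ECDSA verification in mask ROM):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Factory time: eFuses are write-once and tiny, so they typically store
# a hash of the vendor's public key rather than the key itself.
VENDOR_PUBKEY = b"...vendor public key bytes..."   # stand-in value
EFUSES = sha256(VENDOR_PUBKEY)

# Every power-on, the immutable BootROM runs logic like this:
def bootrom(shipped_pubkey: bytes, bootloader: bytes, signature_ok) -> None:
    # 1. The full key ships alongside the bootloader; check it against fuses.
    if sha256(shipped_pubkey) != EFUSES:
        raise SystemExit("pubkey doesn't match fuses, refusing to boot")
    # 2. Verify the bootloader image was signed by that key.
    if not signature_ok(shipped_pubkey, bootloader):
        raise SystemExit("bad signature, refusing to boot")
    # 3. Jump to the bootloader, which repeats this pattern for the next
    #    stage: a chain of trust rooted entirely in ROM plus fuses.

# Toy run with a permissive verifier, for demonstration only:
bootrom(VENDOR_PUBKEY, b"bootloader image", lambda key, image: True)
```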
Can you really trust this model? I find it hard to trust that CPUs aren't just harnessed to dump their contents. At some point the data has to be decrypted, and at that point it could just be read out. Is there a way to secure against DMA or the CPU equivalent? Feels like there always has to be a "trust me bro" at some level.
You can't trust any computer that you don't control. It's just simple fact. It's always about risk management. Can you entrust this with things like translating a news article from Finnish to English? Absolutely.
if the memory is en-/decrypted by the memory controller on the CPU, any DMA would require the same decrypting HW and keys
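A toy model of what inline memory encryption buys against DMA and cold-boot reads (AES-GCM from the third-party `cryptography` package; addresses and the "physical" view are simulated):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EncryptingMemoryController:
    """Model: every write is encrypted with a key held only inside the
    CPU package, so the DRAM chips only ever store ciphertext."""

    def __init__(self) -> None:
        self._cpu_key = AESGCM(AESGCM.generate_key(bit_length=128))
        self.dram: dict[int, bytes] = {}   # what's physically in the chips

    def write(self, addr: int, data: bytes) -> None:
        nonce = os.urandom(12)
        self.dram[addr] = nonce + self._cpu_key.encrypt(nonce, data, None)

    def read(self, addr: int) -> bytes:    # the CPU path decrypts
        blob = self.dram[addr]
        return self._cpu_key.decrypt(blob[:12], blob[12:], None)

mc = EncryptingMemoryController()
mc.write(0x1000, b"secret realm data")
assert mc.read(0x1000) == b"secret realm data"  # CPU sees plaintext
print(mc.dram[0x1000])   # DMA / cold-boot view: ciphertext only
```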
All well and good unless three-letter agencies have copies of a root CA private key. Given what we know from Snowden's revelations, tell me this isn't the obvious path for them (which they have likely already taken).
True, but at least Apple has a history of fighting that too.
@@misarthim6538 Perhaps, but consider Eric Weinstein's commentary in his most recent interview on Modern Wisdom: sophisticated adversaries count on us dismissing anything that seems beyond the pale.
Only one successful clandestine op to secure a root CA key is needed. Unimaginable to some, but as a US submarine officer veteran, I'd say all bets are off in an era where someone like Tulsi Gabbard is mysteriously put on a terror watchlist the same month she speaks truth to power.
NSA backdoors all over the place 😂
If a nation-state wants your information, no system is safe. They'll obtain what they want remotely, physically through clever air-gap-breaching tech (e.g. the Natanz nuclear facility, Iran), through legislation, or with a rubber hose (i.e. torture). Security is there to stop lesser/common entities and to protect the brand from irredeemable damage.
Name a major tech company who's willing to say they're NSA-proof (Signal is not a major tech company).
@@HenryKlausEsq. wanted to say signal damn
@@cursedsucc9015 Their comms tech is solid, but they're still subject to every weakness of the device Signal is installed on. You can't stop nation-states, but you can provide a consistent level of security against basic law enforcement and lawbreakers. We need to be honest about this.
@@HenryKlausEsq. Sadly spot on. Signal is an exception only because it's obscure. If it ever becomes "mainstream" I bet it'll be under the government's leash in no time.
@@HenryKlausEsq. The problem is that it isn't lesser/common entities that would like to jail me if I'm against a supreme court justice or endorse the opposition candidate, as happens in some countries. The government is the most dangerous entity I want my data protected from.
I am not an Apple fan, as such, but I do trust them more than Google, Microsoft, and all the other sleazebag companies that do business with your data. I appreciate that Apple's business model is straight-up old-fashioned: they want you to buy your next device from Apple too. And I appreciate all they do with private fingerprint scanners, private face detection, and now private AI. I will continue to buy Apple devices as long as they do business in this manner.
You can assume any data that is on your phone is public data. Even if everything is secure now, we do not know what types of attacks will be possible in the future. All an attacker has to do is save your secured data and wait until a new attack is available.
I wonder to what degree they will verify _who_ is making the request though. I can imagine some might try to tap into this potentially free resource.
Ultimately a big problem with cloud computing is that the user doesn't have the same legal rights to their data in a cloud environment. If the server was in your house your local government would need a warrant targeted at you specifically. In a cloud environment you can get caught up in the net of large scale surveillance operations.
But you're totally right that cloud computing is going to remain a huge part of life and may get bigger.
Every time I see words like “smart”, “advanced”, “intelligent”… in a product, I feel more dumb the more I use it.
All I ask is a way to block it from accessing their servers for AI functions. On-device is fine; if it can't work on-device, then give me a way to disable the AI entirely.
You can do that. Actually, you can do it per request, or completely disable the cloud thing.
Yes, Apple is a champion of giving their users control over their hardware
@@paulorodriguez6288 i disagree with you on that there are many instances where that isnt true
@@Tesseract745 such as
@@paulorodriguez6288 Yeah, it's not made for tinkering. Why would you buy an Apple device and then expect to be able to tinker with it?
It will, unlike every other major company, give you stock E2E encryption though.
the PCC white paper is pretty interesting, and goes into detail about how everything works
No need to live under a rock to not give a single F about Apple's announcements anymore.
I think AI itself can be used for a lot of good things, especially when it comes to accessibility. Like Firefox is experimenting with automatically creating descriptions for images for people who use screen readers. That's great. And I heard once about a service or an idea for a service where if you got a scam call, you could redirect it to an AI that's just trying to waste their time. Also great. But as for what the big companies like MS are doing... I don't see much value in it, tbh. Not everything needs AI in it.
For Apple, I hope they manage to make it private, so from a technical point of view I'm curious about this. Not going to use it, but the privacy pov is interesting.
1:33 I don't believe (re)generative AI will pave the way for big advancements in human society. Right now it's driven by the stock market, not by actual usefulness. It will be a while before we get over this sugar rush, and move toward something substantive.
closed source = possible backdoor = all of these are ineffective because we can't see any of them
"Apple intelligence" reminds me off the good ol' "military intelligence, isn't"
two words combined that can't make sense
@@DanLisMusic oxymoronic
alibaba intelligence is the real AI
@@DanLisMusic Possibly I've seen too much
I think it's a big problem that you have to assume what happens. Not a failing on your side but on Apple's, because they should make everything open and transparent so nobody has to assume anything.
Hey!!! Good explanation of this new tech!!
Only one gotcha: at times I almost gave up, just almost, due to the cuts.
Not sure if you use silence-cutting software that is a bit subpar, but you did have a lot of cuts that were less than seamless.
Apart from that, I loved it, especially your enthusiasm !!!
Interesting idea. Sounds extremely expensive to run.
They said exactly the same thing on like the iPhone 3 or something: they said it needed a new CPU to use the first Siri, so you needed a new phone. Then someone reverse-engineered it and found it was just making a web request, and they could do it from Android as long as they had a device ID from an iPhone.
Security as always is a balance. How much does the user value whatever "features" they get versus keeping total control over their data.
Apple is for privacy in the sense that they do not share the information they collect, but they do collect as much as they can, which they use for their own purposes.
tbh i expected to see some prompt injection in apple intelligence
its already happened lel
Your still frames when opening the vids, before hitting play, are as always amazing. :D
Definitely going to have to read up on the realms thing. I guess ARM does the work to make sure it's all theoretically possible, and then probably teams up with some hardware vendor to make proof of concept implementations to show that it can actually be done? Because IIRC ARM doesn't make any actual hardware themselves right? Whoever is doing that first implementation, that would be a super interesting place to work I think.
Assuming the results of whatever you're asking for still have to be passed back to the OS to view on your screen, etc.
Realms seem like the perfect place to run malware, etc.
How can the Monitor know that the Manager is managing correctly if a Realm runs for a prolonged period of time? A single Realm could either be stuck in an infinite loop (should terminate) or just be performing an extremely lengthy, computationally challenging task (should be left running). How can a computational system (the Monitor) know whether or not it should terminate the (potentially misbehaving) Realm if it can't see inside? It's the Halting Problem all over again.
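Right, and in the general case that's undecidable, so real monitors don't try to decide it; they enforce budgets. A toy sketch of the usual compromise (the five-second budget is a made-up number):

```python
import multiprocessing as mp
import time

def realm_workload() -> None:
    while True:       # stuck, or just busy? the monitor cannot know
        time.sleep(1)

BUDGET_SECONDS = 5.0  # policy, not a halting-problem verdict

if __name__ == "__main__":
    realm = mp.Process(target=realm_workload)
    realm.start()
    realm.join(timeout=BUDGET_SECONDS)
    if realm.is_alive():
        realm.terminate()   # no judgment about WHY it ran long
        print("realm exceeded its budget and was killed")
```

The monitor never answers "is it misbehaving?", only "did it exceed the resources its owner paid for?", which is decidable.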
You are, in my book, one of the smartest backend guys I have ever listened to, and I love your channel and all the information you provide. I stopped the video when you said Apple does not sell data... LOL. The caveat I'll give you is that it is probably not in your best interest to piss off a company as large as Apple, and you're saying you have no proof either way... so you get a pass on that. But Apple either sells data or allows data to be mined from applications that run on their system. FB, for example, will run ads for an object that my wife and I TALK about but have done NO internet searches for. So..... But I digress, continue with your video please. :)
I think AMD on x86 has a TEE (SEV), which I have been using with Kata Containers and QEMU to provide confidentiality for end-user application data.
@@ultrasive Very cool! I've been looking through the Kata docs and can't find a reference. Can you please point me to the right docs, blog posts, or source code for private containers, with Kata or otherwise?
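Not the OP, but for what it's worth: the knob I remember is in Kata's configuration.toml under the QEMU hypervisor section. The flag name is from memory, so treat it as an assumption and double-check the official Kata docs; a tiny sketch, wrapped in Python only to keep it runnable:

```python
import tomllib  # Python 3.11+

# Hypothetical excerpt of Kata's configuration.toml -- the key name is
# from memory; verify it against the Kata docs before relying on it.
cfg = tomllib.loads("""
[hypervisor.qemu]
confidential_guest = true  # ask for a TEE-backed (e.g. SEV) guest VM
""")
print(cfg["hypervisor"]["qemu"]["confidential_guest"])  # -> True
```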
How does this compare to the (broken) Intel TXT? It is just a matter of time until a flaw will be discovered or some Three-Letter-Agency will sneak in a backdoor.
04:37 the GNU part of GNU+Linux is in user mode
I am under a rock and I like it here. I want a phone to first and foremost be a reliable and fast phone with perfect audio and reception. Next comes text quality, followed by an e-mail client. I want total privacy too. The interface must be fast, preferably with very high-quality mechanical buttons.
Whenever information is transferred from one "module" to another, it can be intercepted and "hacked".
Hello, I have a question. I want to know whether the Zig language can be better than Rust or not, why Rust has been getting so much hate for some time now, and what your opinion is on Zig versus Rust for creating very bad malware?
What I think is that Apple Intelligence is just an on-demand VM dedicated to a single user, for example a KubeVirt VM with a vGPU.
Man I’ve been looking for an explanation of this since they released the paper back in June. Still don’t fully understand but at least I know a little bit more now. Thanks!
Skynet coming up, please. Just didn’t know it would be in the shape of an apple.
I guess Microsoft is more likely to cause it
this was such a clean explanation
It is exactly the opposite: if a hacker gets access to a Realm, you will get owned and you will never be able to prove it.
While I struggle to see how this can solve the problem of having to trust at least some layers, I welcome the idea of being able to fence off VMs even further, directly in hardware.
It is evolving, but backwards. I don't like living in this world in the slightest.
Cuck😊
Well...
Nembutal here I come :D
Just more to go wrong, and more of a headache for any repair shop. Cool for the watch, even though I hate Apple. I guess $150 for earpods versus $1,000 for hearing aids. But to use any of this you have to own an Apple product like an iPhone or iPad, which I never will.
I believe Intel's Software Guard Extensions (SGX) does the equivalent on the x86 side. They just call them enclaves instead of realms or balls or whatever.
Thank you for this awesome video. I am the proud owner of several shares of ARM holdings. I think we will see continued adoption of ARM, and an overall shift away from x86 architecture and with it, the eventual end of Intel as a company.
So if I understand this correctly - an iPhone would essentially be sending over an entire "Docker image" (just an example) to the server?
The Realm management security domain thing can send *any* signature to the client, pretend to run that code, and run something different... how can you trust that?
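The way I understand it (happy to be corrected): the signature isn't produced by the code being attested or by the manager; it's produced with a key fused into the silicon that software can't read, over a hash (measurement) of what was actually loaded, and the client checks both the signature and the measurement. A self-contained Python sketch that simulates both sides; the keys, strings, and hash choice are illustrative, not Apple's or ARM's actual scheme:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- hardware side (simulated): a key the realm manager never sees ---
fused_key = Ed25519PrivateKey.generate()
vendor_pubkey = fused_key.public_key()

loaded_image = b"the binary the node actually booted"
measurement = hashlib.sha256(loaded_image).digest()
signature = fused_key.sign(measurement)  # only the silicon can produce this

# --- client side: what your phone would check before sending data ---
EXPECTED = hashlib.sha256(b"the binary the node actually booted").digest()
try:
    vendor_pubkey.verify(signature, measurement)  # signature genuine?
    assert measurement == EXPECTED                # and the code we expect?
    print("attestation ok, send the request")
except (InvalidSignature, AssertionError):
    print("attestation failed, send nothing")
```

So a lying manager can report whatever it wants, but it can't forge the hardware signature over a fake measurement. Real designs add nonces for freshness and a certificate chain, and of course you're still trusting the silicon vendor.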
AWS Nitro Enclaves is a very similar idea already on the market. Check it out if you want to get familiar with the concept.
More reasons Apple will use to excuse the killing of right to repair, I bet. So, not so rosy.
Oh Apple, the guardian angel in the realm of villains. While all the companies are thinking about abusing their power, we have a company that's taking care of all of us and our privacy ....... I really wish I could believe this story!
You're still missing the critical point. The program-and-data package getting sent from your phone/device WILL BE CACHED. If it was EVER in their datacenter, it's going to be saved, which means they will be able to find out what your data is. :\ What's worse, even if it were encrypted and somehow obfuscated, they would still have access to it, because the result sent back will end up being cached too; so they will have the input and the output to train whatever dataset they want on you anyway. :\ There's no solution to this problem.
Yup
Depends. If it is encrypted locally on your device before being sent, it should not be an issue. The output is then sent back to your device encrypted as well and decrypted locally. Any cache then doesn't matter, as it is encrypted.
Still, I find it a stupid idea, as I can never verify what happens to my data once it's in Apple's possession. I cannot verify whether their infrastructure works the way they claim it does. So I'd much rather run AI locally on the device, without data being sent back and forth. I already run PrivateGPT successfully on an M1 chip and fed it some data for local use. They'd better invest time in developing that idea further, as it is clearly technically possible.
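The transport part of that is straightforward to sketch (Fernet below is just a stand-in; the real thing would be a key exchange bound to the attestation). The catch, which I think is the point above, is that the node still has to decrypt inside its isolated environment to compute anything:

```python
from cryptography.fernet import Fernet

# Session key agreed with the (attested) compute node -- a stand-in
# for a real handshake; everything here is illustrative.
key = Fernet.generate_key()
channel = Fernet(key)

# Device side: encrypt before anything leaves the phone.
request = channel.encrypt(b"summarize my calendar for next week")
# Anything cached along the way is only this opaque blob.

# Node side: it MUST decrypt to run the model, so "encrypted in their
# datacenter" only helps if the plaintext never leaves the enclave.
plaintext = channel.decrypt(request)
response = channel.encrypt(b"(model output for: " + plaintext + b")")

# Device side again: decrypt the result locally.
print(channel.decrypt(response))
```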
Apple: ok we have the new iphone.
Analyst: that's the same one; the internet will make fun of us if we don't claim something is new and revolutionary.
Apple: but we already have the manufacturing setup
Marketing: how about we add AI!
Interesting. TBH I don't get the verification part. Can the hardware really be verified? If Apple decided to replace the isolation hardware, would that measured-boot thingy notice? And what if I add hardware after boot? But say what you will about Apple: for a few years now they have really been advancing user privacy on their devices, and not solely for marketing (more for "selling" privacy). I think the Apple ecosystem is currently one of the best and easiest-to-use options if you don't want to go into tinfoil-hat risk-profile territory.
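On the measured-boot question, as far as I understand it (a hedged sketch, not Apple's actual scheme): each boot stage hashes the next component into a running register before handing control over, so swapping any measured stage changes the final value and the attestation stops matching. Roughly:

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    # TPM-style "extend": new = H(old || H(component)).
    # You can only append to the chain, never rewrite it.
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

register = b"\x00" * 32  # reset value at power-on
for stage in (b"boot rom", b"bootloader", b"kernel", b"isolation firmware"):
    register = extend(register, stage)

print(register.hex())  # any swapped stage yields a different final hash
```

Hardware added after boot is exactly the weak spot you'd worry about: it only gets caught if something re-measures it or the IOMMU fences off its DMA, which is a design decision, not a given.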
0:03 I'm living under a rock
As a French guy, I feel so triggered by your "I don't speak Greek"
I don't
His Belgian is perfect
As security becomes more global and complex, the authority of those capable of reaching through it increases. If the only one who can decrypt the message between the General and the King is the messenger, then the Messenger is the true King.
Real Apple intelligence would be being too intelligent to purchase Apple products.
Chatbots are cancer. Nothing is secure, especially when it lives on hardware you don't own or have physical access to. This is just another gimmick we don't need.
Basically it now allows companies to mine bitcoin on your machine without you ever finding out.
man that is so smart
My initial thought about realms is, wow wouldn't that be a great place to store a malware loader
Am I the only one who witnesses this smell?
A model running locally only means the computation is done on your device. They still have the option to grab the processed data afterwards. Win-win for them.
They can, but it's very likely it would be far less profitable for them to do so. Apple is the only major company with an at least somewhat solid security and privacy track record. It gives them a pretty good market advantage. Also, they'd probably get sued into oblivion.
I'll be a little controversial with this comment, but by that metric the invention of the wheelchair would be considered enshittification... (2:10) Yes, it's not letting you do something you couldn't already do, but it's helping you do it more efficiently, especially for those who struggle with memory. In my case I have ADHD, and it's not even that I have a bad memory; it's that it's selective AF and needs triggers to pull data. I'm constantly getting to things by leveraging "anchors" in my memory, like "I was texting xxx at that time, let me review the conversation quickly to remember what I intended to do." That said, I'm sure there are going to be tons of useless features in general, but I wouldn't always dismiss them as useless; they're paving the way for the things that are going to be extremely useful.
So, if I summarize: we are relying on the signature at the end, with the whole chain of trust that it implies.
Their "private cloud" is actually GCP
Loved the video. But I find it interesting that these privacy assumptions are usually only brought up when discussing Apple products. Feels like we're just buying into their marketing schemes without any verification.
They said external people could check the security of the system. Could you do it?
Get this man some aquaphor for his lips! Great video as always
It's almost as if people continuously take stories from dystopian books and movies as a model instead of the warning they were meant to be.
These are dark times. These are dark times indeed.
I wonder how Apple will use this concept to prevent users from doing basic things like replacing parts of the phone.
Wouldn't encrypting/decrypting an entire app/process and its associated memory/storage make things slow, because you'd be encrypting and decrypting everything in real time?
Or is it a one-time encryption when you enter the "Realm" and a one-time decryption when you exit it?
Is this a known tradeoff between speed and security?
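From what I've read it's neither of those: a hardware AES engine sitting on the memory path encrypts and decrypts per cache line, transparently, so there's no one-shot pass and no software loop. Even in software you can get a feel for why the overhead is tolerable; rough, machine-dependent numbers, and the buffer is just standing in for "memory":

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

buf = os.urandom(64 * 1024 * 1024)  # 64 MiB standing in for RAM traffic
enc = Cipher(algorithms.AES(os.urandom(32)), modes.CTR(os.urandom(16))).encryptor()

t0 = time.perf_counter()
enc.update(buf)  # encrypt the whole buffer once
dt = time.perf_counter() - t0
print(f"{len(buf) / dt / 1e9:.2f} GB/s (software AES via OpenSSL)")
```

Dedicated memory-controller crypto runs at line rate, which is reportedly why the observed penalty on things like SEV is a few percent rather than a blow-up. So yes, the tradeoff exists, but it's mostly paid in silicon, not wall-clock time.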
OpenELM was the turning point for on-device LLM tasks.
This system seems similar to Intel SGX in part. Which IIRC fell to a side channel attack? Hopefully Apple learned from their foray into this.
well that's a new thing to learn about
So... what's stopping the software from reporting what it's operating on? What's stopping Apple from decrypting the network data before it gets to the machine? Without more info, this is just one layer of security when we need another ten layers defined and explained.
cool. I just assumed it was their way of saying "our server is more secure trust us".
Microsoft wants this as well; they call it Confidential VMs.
As a bloke who had undiagnosed sleep apnea for over 10 years (fun fact, you don't need to be fat to have it), a bunch of new apple watch owners are in for a surprise.
Except for hardware bugs, which various ARM manufacturers fell for just like Intel and AMD did a while ago. Any hardware exploit means you have no security. Thanks, but I think I'd rather run things locally, where you'd have to have my device, rather than sharing CPUs with other users and apps.