📌 The demo Unity project featured today is available via Patreon: www.patreon.com/dilmerv
👉 For additional documentation about MRUK take a look at: developer.oculus.com/documentation/unity/unity-mr-utility-kit-overview (this was a great resource to get an overall idea of what’s available)
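If you're just getting started with MRUK, here is a minimal sketch of hooking into the scene model once it loads and iterating its anchors. It assumes the MRUK prefab is in your scene and that your MRUK version exposes MRUK.Instance.RegisterSceneLoadedCallback and an Anchors collection on MRUKRoom; verify the exact names against the documentation linked above, as they can differ between SDK versions.

```csharp
using Meta.XR.MRUtilityKit;
using UnityEngine;

// Minimal sketch: logs every scene anchor once MRUK has loaded the room.
// Assumes the MRUK prefab exists in the scene and that this MRUK version
// exposes RegisterSceneLoadedCallback and MRUKRoom.Anchors (verify names
// against the official documentation for your SDK version).
public class SceneModelLogger : MonoBehaviour
{
    private void Start()
    {
        MRUK.Instance.RegisterSceneLoadedCallback(OnSceneLoaded);
    }

    private void OnSceneLoaded()
    {
        MRUKRoom room = MRUK.Instance.GetCurrentRoom();
        if (room == null)
        {
            Debug.LogWarning("No scene model found. Run Space Setup on the headset first.");
            return;
        }

        foreach (MRUKAnchor anchor in room.Anchors)
        {
            Debug.Log($"Scene anchor: {anchor.name} at {anchor.transform.position}");
        }
    }
}
```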
Thank you for sharing/making this video!! I’ve been missing these features! So glad they are available now. Can’t wait to try it out.
Thanks for your feedback, and if you have any questions as you integrate it, let me know 😉 have fun!
Awesome video Dilmer!
Thank you Jeff, I appreciate your feedback man!
Good day! I have a question: Is there a tool similar to Vuforia for creating image targets, so that depending on which image is detected, a 3D object appears?
Hey, thanks for your great question! Currently, Meta doesn’t provide image tracking support or access to the cameras; I believe it is coming next year based on what they announced during Meta Connect 2024.
Thanks, Dilmer! Anything for spatial mapping or image tracking?
Thank you for your feedback. Meta currently only supports spatial mapping done prior to entering the app (the room/space setup), but image tracking is definitely not supported yet. Are you also considering other XR devices?
Right now, I'm just trying to map an experience to a specific room. Maybe there's a way to export the room scans without having to build a 3D model of the space and "fit it" to a prefab? Thanks, Dilmer!
@0:53 - Surface Projected Passthrough - NICE - allows them to be mostly immersed, but also see the real world when needed.
That’s a great feature I agree! Thanks for your feedback.
@dilmerv - I was wondering about this just yesterday. If developing a VR app for a restaurant, for instance - everyone plays together in a VR room, but they still want to be able to see the world around them - I was thinking of a switch view (switch to passthrough to view the world, then back to the game). So this is much nicer: you still get to play the game and see the world around you.
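To make the restaurant-table idea concrete, here is a minimal sketch of surface-projected passthrough, assuming the Meta XR Core SDK's OVRPassthroughLayer component with its projection surface type set to User Defined in the inspector; the component names and setup steps are assumptions to verify against the SDK docs.

```csharp
using UnityEngine;

// Sketch of the "see the table, stay in the game" idea: project passthrough
// onto one real-world surface (e.g. the table) instead of toggling the whole
// view. Assumes an OVRPassthroughLayer in the scene with its projection
// surface type set to User Defined, and a mesh that roughly matches the
// table (from the scene model or placed by hand).
public class TablePassthroughWindow : MonoBehaviour
{
    [SerializeField] private OVRPassthroughLayer passthroughLayer;
    [SerializeField] private GameObject tableSurfaceMesh; // needs a MeshFilter

    private bool visible;

    public void TogglePassthroughWindow()
    {
        if (visible)
        {
            passthroughLayer.RemoveSurfaceGeometry(tableSurfaceMesh);
        }
        else
        {
            // updateTransform keeps the projection in sync if the mesh moves.
            passthroughLayer.AddSurfaceGeometry(tableSurfaceMesh, updateTransform: true);
        }
        visible = !visible;
    }
}
```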
Hello, Dilmer. I'm from Korea, and I really enjoy watching your videos. Thank you for sharing so much valuable information. I have a question for you: is it possible to do image tracking with Meta Quest 3’s passthrough, like how you can do image tracking with AR Foundation?
That’s a great question, and thank you for all your support! As for image tracking, the Quest 3 currently doesn’t support it. You are correct that AR Foundation offers it, but the underlying Meta OS doesn’t expose it, at least not yet. Thank you, and let me know if you have further questions.
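For reference, this is roughly what image tracking looks like on platforms where AR Foundation does expose it (ARKit/ARCore phones); none of it runs on Quest 3 today, for the reason above. The sketch assumes AR Foundation 5.x event names and an ARTrackedImageManager with a reference image library already configured.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// For comparison only: AR Foundation image tracking on platforms that expose
// it (ARKit/ARCore). This does NOT run on Quest 3, since the Meta OS doesn't
// currently expose camera-based image tracking. Assumes an ARTrackedImageManager
// with a reference image library set up, and AR Foundation 5.x event names.
public class ImageTargetSpawner : MonoBehaviour
{
    [SerializeField] private ARTrackedImageManager trackedImageManager;
    [SerializeField] private GameObject contentPrefab;

    private void OnEnable() => trackedImageManager.trackedImagesChanged += OnChanged;
    private void OnDisable() => trackedImageManager.trackedImagesChanged -= OnChanged;

    private void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (ARTrackedImage image in args.added)
        {
            // Parent the content to the tracked image so it follows the target.
            Instantiate(contentPrefab, image.transform);
            Debug.Log($"Detected image target: {image.referenceImage.name}");
        }
    }
}
```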
Hello, thanks for this video.
I was wondering if I can save that scene model and develop interactions on it. I'm working on VR and XR training at my work: I scan a part of my workplace, save it, and in editor mode I add features and many other things. Then, when I export my application to the Quest 3, I want the headset to recognize this environment and keep all the interactions inside it, so that when I walk around, everything is laid out exactly as I scanned it. I can't find a solution to this.
What type of interactions are you trying to develop?
Simple interactions like grabbing and snapping objects, and I also want to make an interaction as if I were smearing something onto a surface. For now, as a workaround, I use particles to simulate it, but it isn't completely correct. Thank you very much :)
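This isn't an official answer, just one way to structure the particle-based "smear" workaround described above: raycast from the controller to the surface and emit particles at the hit point while the trigger is held. The field names are placeholders, not part of any Meta or MRUK API.

```csharp
using UnityEngine;

// One way to structure the particle-based "smear" workaround described above:
// raycast from the controller to the surface and emit particles at the hit
// point while the trigger is held. Field names are placeholders.
public class SurfaceSmear : MonoBehaviour
{
    [SerializeField] private Transform rayOrigin;            // e.g. right controller anchor
    [SerializeField] private ParticleSystem smearParticles;  // world-space particle system
    [SerializeField] private float maxDistance = 2f;
    [SerializeField] private LayerMask surfaceMask;          // layer of the scanned surfaces

    private void Update()
    {
        // OVRInput is part of the Meta XR Core SDK; swap for your own input system if needed.
        if (!OVRInput.Get(OVRInput.Button.PrimaryIndexTrigger)) return;

        if (Physics.Raycast(rayOrigin.position, rayOrigin.forward, out RaycastHit hit,
                            maxDistance, surfaceMask))
        {
            // Emit a few particles right on the surface, aligned to its normal.
            var emitParams = new ParticleSystem.EmitParams
            {
                position = hit.point,
                rotation3D = Quaternion.LookRotation(hit.normal).eulerAngles
            };
            smearParticles.Emit(emitParams, 3);
        }
    }
}
```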
Thank you for the great videos, but I have a question. I'm trying to figure out how to do room scanning on Quest 3 (like in the demo app "First Encounters", not the basic plane detection that sets up walls from your room setup). How can I make a "scan room" button? Do you know anything about this?
My understanding is that each app checks for a scene model at startup; you can’t launch the scan yourself (with a button) since this is all handled at the OS level. Also, if you need a mesh (the global mesh), there is a label you can turn on within the MRUK prefab that enables the captured mesh, which gives you more flexibility when it comes to raycasting.
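To make the global-mesh part concrete: once the GlobalMesh label is enabled on the MRUK prefab and colliders are generated for it (for example via the EffectMesh component), the captured room becomes a regular collider, so a plain Physics.Raycast works against it. A minimal sketch, assuming you place the generated mesh on a layer you create yourself; "GlobalMesh" below is a placeholder layer name, not an SDK constant.

```csharp
using UnityEngine;

// Sketch of raycasting against the captured room once the GlobalMesh label is
// enabled on the MRUK prefab and colliders are generated for it (e.g. via the
// EffectMesh component). "GlobalMesh" is a placeholder layer name you assign
// yourself, not an SDK constant.
public class GlobalMeshRaycaster : MonoBehaviour
{
    [SerializeField] private Transform rayOrigin; // e.g. controller or head

    private void Update()
    {
        int mask = LayerMask.GetMask("GlobalMesh");

        if (Physics.Raycast(rayOrigin.position, rayOrigin.forward, out RaycastHit hit, 10f, mask))
        {
            // hit.point / hit.normal now describe a spot on the real scanned room,
            // which you can use to place content, draw decals, etc.
            Debug.DrawLine(rayOrigin.position, hit.point, Color.green);
        }
    }
}
```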
Can we integrate Luma AI into Unity? For example, I want to scan an object (a car) on Quest 3, convert it into a 3D object, and then control the 3D car like a remote-controlled car. Is it possible? Everything should happen on the Quest 3.