📣 All of the assets used in the video are currently priced at 50% off. There are many other assets available here assetstore.unity.com/dilmer-valecillos?aid=1101l7LXo (affiliate link) that I also recommend year after year for XR or game development in general.
Dilmer, I wanted to recommend something in this field.
A guide of sorts, perhaps you could sell it. The guide would have different use cases for Mixed Reality, and what approach you would take - not the details, just the high level approach.
Here are a few use case examples:
1) A restaurant - you blow out the ceiling and replace it with a 3D scene that you're interacting with (a game that renders a scene to a plane placed just below the surface of the ceiling in the restaurant). Render Textures, I'm assuming, could handle this, though I haven't gotten into the details yet (see the sketch after this list).
A similar thing would be essentially blowing out the ocean completely with a plane that gets some game rendered onto it, which you could play anywhere along the beach.
2) A putt-putt golf course - you want to add some characters to the course. Perhaps they could be involved in the game in one way or another.
What would be your approach for getting those characters to display properly, sitting or standing on top of parts of the miniature golf course?
All the tracking and everything - what approach would you take?
3) A tunnel - you render to the tunnel's surface. Say, for instance, the fake tunnel surrounds a real-world street, but you're not rendering a static tunnel; you're rendering a 3D scene onto it, of fish swimming above you or something like that.
-- So, I'd like to know your approach for each of these scenarios now if you're able to share it, but in general, I think it would be valuable to have a guide that shows developers what types of gigs they can go out and do and what high-level approach is likely going to work out best.
-- It could be a guide of 30 of these different types of games and the approach you would take for each.
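For the restaurant-ceiling idea in example 1, a minimal Unity sketch of the render-texture approach might look like the following. The skySceneCamera, ceilingQuad, and how the real ceiling plane is found (e.g. Meta's Scene API or AR plane detection) are all assumptions here, not a definitive setup:

```csharp
using UnityEngine;

// Hypothetical sketch: a secondary camera films the virtual "sky" scene into a
// RenderTexture, which is shown on a quad aligned with the real ceiling.
// Detecting/placing the ceiling quad itself is assumed to happen elsewhere.
public class CeilingPortal : MonoBehaviour
{
    [SerializeField] private Camera skySceneCamera;  // camera looking at the virtual scene
    [SerializeField] private Renderer ceilingQuad;   // quad placed just below the real ceiling

    private RenderTexture portalTexture;

    private void Start()
    {
        // Create a render texture and point the secondary camera at it.
        portalTexture = new RenderTexture(1920, 1080, 24);
        skySceneCamera.targetTexture = portalTexture;

        // Display whatever that camera renders on the ceiling-aligned quad.
        ceilingQuad.material.mainTexture = portalTexture;
    }

    private void OnDestroy()
    {
        if (portalTexture != null)
        {
            portalTexture.Release();
        }
    }
}
```

The same pattern could apply to the ocean and tunnel examples: detect or author the real-world surface, then feed it a texture rendered by a secondary camera looking at the virtual scene.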
Is it possible to develop a mixed-reality game that superimposes an image onto the phone screen (without the use of a headset)?
I'm not sure of the difficulty involved, but I'd like to make an Android app that can find components on a circuit board. For example, if you have a sheet of paper with a grid of squares, say 50x50: if A4 is said/entered, could AR use fiducials on the paper to calculate precisely how far into the grid it needs to place an appropriately scaled marker (scaling it based on the angle and how close the camera is to the sheet)?
Can an image be overlaid onto the camera screen with this level of precision? Would such an application be realistic for a beginner Unity programmer?
Currently, you won't be able to do it with a headset, mainly because there isn't any kind of image tracking or object tracking available on Meta Quest devices yet. Vision Pro is pretty similar unless you use their enterprise solution.
With phones (Android), yes, that could be possible: you could use image tracking for your reference grid of squares along with some type of marker, and then, based on the dimensions, positions, and rotations, you could do your calculations.
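As a rough sketch of that image-tracking idea, assuming Unity's AR Foundation (ARTrackedImageManager), a known physical cell size, and a hypothetical marker prefab, the placement could look something like this:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Hypothetical sketch: the tracked reference image acts as the fiducial, and a
// marker is offset from it to the requested grid cell. The cell size, the
// example cell (A4 = column 0, row 3), and markerPrefab are all assumptions.
public class GridMarkerPlacer : MonoBehaviour
{
    [SerializeField] private ARTrackedImageManager trackedImageManager;
    [SerializeField] private GameObject markerPrefab;
    [SerializeField] private float cellSizeMeters = 0.005f; // physical size of one grid square

    private GameObject markerInstance;

    private void OnEnable() => trackedImageManager.trackedImagesChanged += OnTrackedImagesChanged;
    private void OnDisable() => trackedImageManager.trackedImagesChanged -= OnTrackedImagesChanged;

    private void OnTrackedImagesChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var trackedImage in args.updated)
        {
            if (trackedImage.trackingState != TrackingState.Tracking) continue;
            // Example: place the marker at cell "A4" (column 0, row 3),
            // measured from the tracked image's local origin.
            PlaceMarker(trackedImage.transform, column: 0, row: 3);
        }
    }

    private void PlaceMarker(Transform fiducial, int column, int row)
    {
        if (markerInstance == null)
            markerInstance = Instantiate(markerPrefab);

        // Offset within the fiducial's local plane; because the tracked image
        // pose carries position and rotation, perspective and apparent scale
        // follow the camera angle and distance automatically.
        Vector3 localOffset = new Vector3(column * cellSizeMeters, 0f, row * cellSizeMeters);
        markerInstance.transform.position = fiducial.TransformPoint(localOffset);
        markerInstance.transform.rotation = fiducial.rotation;
    }
}
```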
Also, take a look at something like MediaPipe from Google: github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/objectron.md
Best, and great question!
@@dilmerv I think he was asking the opposite (which is what I was wondering too). It looks like the Meta SDK is an OpenXR extension, so shouldn't it be possible to build an Android/iPhone app that utilizes the Meta SDK's scene understanding? ChatGPT tells me it's possible, and theoretically it feels like it should be, but when I try to set it up I either get OpenXR runtime errors or just don't see how it can be done, since the Meta SDK uses the HeadsetManager class (I forget what it's actually called - the one where you pick which Quest devices to support). Would love to get your opinion since you seem familiar with the Meta SDK.
For @bennguyen1313, I think looking into Unity Sentis and training a single neural network to recognize circuit boards would be the easiest option. There are lots of Colab guides on how to train a network to segment something with computer vision through something like TensorFlow; the network can then be converted to ONNX, which is what I believe Sentis uses (at least Barracuda did when that was Unity's AI option). Then you could overlay the grid onto the segmented board like you're talking about.
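A very rough sketch of the Sentis side of that suggestion, assuming an ONNX segmentation model imported as a ModelAsset and the Sentis 1.x-style WorkerFactory API (the API has changed across Sentis versions), might look like this:

```csharp
using Unity.Sentis;
using UnityEngine;

// Hypothetical sketch of running an ONNX segmentation model through Unity Sentis.
// The model asset, 224x224 input size, and how the output mask maps onto a grid
// overlay are all assumptions, not a definitive implementation.
public class BoardSegmenter : MonoBehaviour
{
    [SerializeField] private ModelAsset boardModel; // ONNX model imported as a Sentis asset
    private IWorker worker;

    private void Start()
    {
        var model = ModelLoader.Load(boardModel);
        worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
    }

    public void Segment(Texture cameraFrame)
    {
        // Convert the camera frame to a tensor and run the network.
        using TensorFloat input = TextureConverter.ToTensor(cameraFrame, width: 224, height: 224, channels: 3);
        worker.Execute(input);
        TensorFloat mask = worker.PeekOutput() as TensorFloat;
        // Read the mask back on the CPU (the exact readback call varies by Sentis
        // version) and use the segmented board region to position the grid overlay.
    }

    private void OnDestroy() => worker?.Dispose();
}
```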
Also Vision Pro?
I have been making many prototypes for Quest 3, which I plan to also cover with Vision Pro - thanks for your suggestion!