Commenting to give a boost. UA-cam randomly recommends small videos like this one to me, and I work with pixel art, so this is extremely helpful!
Thanks I appreciate it!
Cool experiment. Love it. Thx for source!
Cool solution for the pixel-perfection in 3D! Regarding the spatial audio, I wonder if the issue might be that the camera is orthogonal, because then it's effectively at an infinite distance from the world space (not exactly, but kinda). You should be able to solve this by creating an "AudioListener3D" node and placing it "in world space" (for example, by projecting a raycast from the camera to the ground plane and finding the intersection). I hope this helps!
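Something like this, roughly (an untested Godot 4 sketch, assuming a flat ground plane at y = 0; the node setup is just an example):

```gdscript
# Untested sketch: keeps an AudioListener3D at the point the camera is looking
# at, by intersecting the camera's centre ray with a flat ground plane at y = 0.
# Node names here are just examples.
extends Node3D

@onready var listener: AudioListener3D = $AudioListener3D  # child of this node

func _ready() -> void:
    listener.make_current()

func _physics_process(_delta: float) -> void:
    var camera := get_viewport().get_camera_3d()
    if camera == null:
        return

    # Ray through the centre of the screen (works for orthogonal cameras too).
    var centre := get_viewport().get_visible_rect().size / 2.0
    var origin := camera.project_ray_origin(centre)
    var direction := camera.project_ray_normal(centre)

    # Intersect with an infinite ground plane instead of doing a physics raycast.
    var hit = Plane(Vector3.UP, 0.0).intersects_ray(origin, direction)
    if hit != null:
        listener.global_position = hit
```

The idea is just to keep the listener roughly under whatever the camera is framing, so panning and distance attenuation behave sensibly.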
Thanks! I am using a SubViewport to render the main scene into a TextureRect. I don't know if that's also messing with things, but I tried adding a listener just in the main scene, unconnected to the camera, and I still don't hear anything. I should probably make a new empty project and see if 3D audio is working at all for me. Thanks for the tips.
really cool!
Can you explain how the code works, or where to get started to make 3D pixel art like this?
The simplest way to understand it is that it's a screen space effect.
1: In the shader, if we just use the pixel's screen-space location (SCREEN_UV) we get a masking effect, which looks right until we need to move the camera and then you can see the texture is tied to the screen. I have seen some kids' cartoons that use an effect similar to this.
2: If we offset the pixel's screen position by the screen position of the object being rendered, then the textures will move with the camera.
That's about it for what the code is doing at its core (there's a rough sketch at the bottom of this comment).
I couldn't find many examples but I know there are more out there like this: ua-cam.com/video/CJAS42vNdFg/v-deo.html
The textures being used in the anime are cutouts placed in the correct location rather than being projected into 3D space.
In the shader most of the trickiness comes from keeping the texture size correct along with adding the outline.
Then for the 3D modelling it's the same principle: each face on the 3D model will become a mask for a texture in screen space. For example, the face of a cube drawn in orthographic screen space becomes some sort of skewed rectangle (parallelogram), which then has a repeating texture applied. Modelling with those things in mind is pretty limiting, as you can't really control where exactly on the repeating texture certain features might appear, or at least it's quite tricky to get it to line up.
I hope that sort of makes sense. In the GitHub code under materials/isometric_basic.gdshader is the simplest example. github.com/astrellon/godot-3d-pixel-art/blob/main/materials/isometric_basic.gdshader
The vertex shader calculates where the screen position of the rendered object should be based on its 3D position, and then the fragment shader takes the screen position of the pixel and offsets it by that object position. Then it just needs to adjust by the texture size and the screen size, and that's it.
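Stripped down to the core idea, it looks roughly like this (a sketch, not the exact file from the repo; the uniform names are made up and the outline handling is left out):

```gdshader
shader_type spatial;
render_mode unshaded;

// Sketch only -- the uniform names are made up and the outline pass is left out.
uniform sampler2D face_texture : source_color, filter_nearest, repeat_enable;
uniform float texture_size_px = 64.0; // screen pixels covered by one texture repeat

varying vec2 object_screen_pos; // 0..1 screen position of the object's origin

void vertex() {
	// Project the object's origin (not the vertex) into clip space, then
	// convert from -1..1 NDC to 0..1 screen UVs. Same value for every vertex.
	vec4 clip = PROJECTION_MATRIX * VIEW_MATRIX * MODEL_MATRIX * vec4(0.0, 0.0, 0.0, 1.0);
	object_screen_pos = (clip.xy / clip.w) * 0.5 + 0.5;
}

void fragment() {
	// SCREEN_UV on its own gives the "masking" look (texture glued to the screen).
	// Subtracting the object's screen position makes the texture follow the object.
	vec2 uv = SCREEN_UV - object_screen_pos;

	// Scale from 0..1 screen UVs into texture repeats so the texture stays the
	// same size regardless of resolution. Y may need flipping depending on the renderer.
	uv *= (1.0 / SCREEN_PIXEL_SIZE) / texture_size_px;

	ALBEDO = texture(face_texture, uv).rgb;
}
```

The important bit is that the vertex function projects the object's origin rather than each vertex, so every pixel of a face uses the same offset and the repeating texture just slides around in screen space as the camera moves.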