Jason Doucette
United States
Joined May 30, 2006
Hello. I'm Jason. I'm a retro indie game developer. Co-founder of Xona Games, "Empower the Player" philosophy -- with my twin brother, Matthew. My games: Score Rush, Decimation X, X2, X3, Duality ZF. Some available on Xbox and PlayStation.
I make retro games with a focus on Gameplay & Empowerment. I make my own custom game engines from scratch, some written in assembly and machine language. My first game (written when I was 7 years old) used hardware sprites with v-sync auto-motion.
My innovations & technologies have resulted in 5 patents. My best inventions are not patented. I have had two world records in number theory. I solve Rubik's cubes (15 seconds), sometimes race cars (go-karts, R/C, road, formula). I occasionally work at high tech companies (Amazon, Oracle) making cell phones, AAA games, and clouds.
Neon Flow, 3D Spline, Nvidia 4090, 50M points
2024-12-25. Point cloud generated using a 3D parameterization of sine waves as a foundation. From this base, a circle split into 10 segments is oriented to follow the same spline. However, the circle's rotation is deliberately out of sync, so it rotates more quickly, as though it were following the spline ahead of schedule. The circle's radius also oscillates.
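A minimal sketch of that construction. The base spline, the phase speed, and the radius oscillation here are illustrative stand-ins, not the actual parameters from the video, and the ring is placed in a fixed plane rather than oriented along the spline tangent:

```python
import math

def spline_point(t):
    # Hypothetical base spline: a simple 3D parameterization of sine waves.
    return (math.sin(t), math.sin(1.7 * t + 1.0), math.sin(2.3 * t + 2.0))

def ring_points(t, segments=10, phase_speed=3.0):
    """Place a ring of points around the spline at parameter t.

    The ring's rotation phase advances faster than t (out of sync),
    and its radius oscillates with t.
    """
    cx, cy, cz = spline_point(t)
    radius = 0.5 + 0.25 * math.sin(5.0 * t)  # oscillating radius
    pts = []
    for i in range(segments):
        a = 2.0 * math.pi * i / segments + phase_speed * t  # out-of-sync rotation
        pts.append((cx + radius * math.cos(a),
                    cy + radius * math.sin(a),
                    cz))
    return pts
```

Sweeping t over the curve and emitting one ring per step yields the tube-like point cloud.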
RENDER
The point cloud is rendered as connected lines to ensure the gaps are filled when the curves are close to the camera, though with 50,000,000 points this is hardly necessary -- gaps only appear as bright dots when the curve moves very quickly and is very close.
Points further from the camera are rendered darker. They are rendered with a Z buffer, so they do not combine with transparency; this keeps the colors from brightening to white and losing the chromatic information.
Post-processing takes the individual pixels and enlarges them into a 15x15 pixel circle. During this enlargement, additive blending is applied. The darkened colors in the distance lower their impact on the final color, so they neither overwhelm the scene nor push the final color to white.
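A single-channel CPU sketch of that enlargement pass (the real pass runs on the GPU over RGB color plus a depth buffer; 15x15 corresponds to radius 7 here):

```python
def enlarge_additive(src, w, h, radius=7):
    """Enlarge each nonzero source pixel into a disc, additively blended.

    src is a flat list of brightness values (one channel for simplicity).
    """
    dst = [0.0] * (w * h)
    for y in range(h):
        for x in range(w):
            v = src[y * w + x]
            if v == 0.0:
                continue
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if dx * dx + dy * dy > radius * radius:
                        continue  # keep a circular footprint
                    px, py = x + dx, y + dy
                    if 0 <= px < w and 0 <= py < h:
                        dst[py * w + px] += v  # additive blend
    return dst
```

Because the blend is purely additive, darkened distant points contribute proportionally less to the sum, which is exactly how the distance darkening keeps the result from saturating to white.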
The final target is 1920x1080, and it is upscaled to 4K before upload.
SPLINE
It is a simple 3D parameterization using multiple octaves of sine waves. Borrowing the concept from 3D value noise, I set base values, and compute the other octaves using constant frequency and lacunarity values. There are 3 octaves for both the foundational spline and the out-of-sync spline the circle segments "follow" for their orientation.
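A sketch of the octave idea. The base frequency, lacunarity, gain, and per-axis phase offsets are illustrative values, not the ones used in the video:

```python
import math

def octave_spline(t, octaves=3, base_freq=1.0, lacunarity=2.0, gain=0.5):
    """Sum several sine octaves per axis, like value-noise octaves."""
    point = [0.0, 0.0, 0.0]
    for axis in range(3):
        freq, amp = base_freq, 1.0
        for _ in range(octaves):
            point[axis] += amp * math.sin(freq * t + axis)  # phase offset per axis
            freq *= lacunarity  # each octave oscillates faster...
            amp *= gain         # ...and contributes less
    return tuple(point)
```

Each octave adds finer wiggles on top of the broad sweep of the first, which is what gives the spline its "personality."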
FRAME RATE
This is the Nvidia 4090 showing its full strength. While the final video is 60 Hz, the rendering generally runs over 600 Hz with 20,000,000 points, and from 250 to 400 Hz with 50,000,000 points, depending on what is visible.
I develop this on my Alienware m18 R2 with a 480 Hz monitor, which is 8x the frame rate of 60 Hz. When these ultra smooth lines zip past the camera, that feeling is lost when you lose 7 out of every 8 frames. For the viewer of the 60 Hz recording to experience the same thing, it minimally needs motion blur post-processing, or actual motion blur from combining 8 frames into 1.
OPTIMIZATION
There is no optimization code to reject points outside the view frustum. Also, the 15x15 post-processing takes 225 color and 225 depth samples per pixel; at 1920x1080, that's 466,560,000 samples for each buffer, or 933,120,000 samples in total, every frame. The Nvidia 4090 is not slow. This type of sampling is quite similar to Gaussian blur optimizations, and could be heavily optimized.
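The arithmetic above can be checked directly:

```python
# Per-frame sample count for the naive 15x15 gather at 1920x1080.
pixels = 1920 * 1080            # 2,073,600 pixels
taps = 15 * 15                  # 225 taps per pixel
color_samples = pixels * taps   # 466,560,000 color samples
total = color_samples * 2       # color + depth buffers: 933,120,000
```

For reference, the Gaussian-blur-style optimization mentioned would split the gather into two separable passes (15 horizontal taps plus 15 vertical taps, i.e. 30 instead of 225), at the cost of a square rather than circular footprint.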
COLOR
Distance along the spline changes the hue of the color, keeping the value and saturation at 100%.
CAMERA
The camera movement is linked to the spline.
FONT
The HUD was updated from past videos to be more minimal, to not distract from the visuals. The font is Berlin Sans FB.
HARDWARE
Alienware m18 R2 with NVidia 4090 GPU, bought day one availability from Dell: January 23, 2024.
- NVIDIA GeForce RTX 4090 16GB GDDR6 (laptop version)
- 14th Gen Intel Core i9 14900HX (24-Core, 36MB L3 Cache, up to 5.8GHz Max Turbo Frequency) (16 E-Cores, and 8 P-Cores hyperthreaded = 32 V-Cores total)
- 64 GB: 2 x 32 GB, DDR5, 5200 MT/s, non-ECC, dual-channel
- 18" FHD+ (1920 x 1200) 480Hz, 3ms, ComfortView Plus, NVIDIA G-SYNC + DDS, 100% DCI-P3, FHD IR Camera
- AlienFX RGB backlit Alienware CherryMX ultra low-profile mechanical keyboard
- 4 TB, M.2, PCIe NVMe, SSD
PLAYLISTS
- Xona System 8: ua-cam.com/video/PqFQv60p-0E/v-deo.html
- Voxel: ua-cam.com/video/uadGU-stF-w/v-deo.html
- Nvidia 4090: ua-cam.com/video/6GXPp2gnl54/v-deo.html
- Road: ua-cam.com/video/rA4g4VX7ys8/v-deo.html
- Wave Function: ua-cam.com/video/ngctVd9VK8I/v-deo.html
- Graph-All: ua-cam.com/video/kLSc7bZW2Bs/v-deo.html
- Ray Cast: ua-cam.com/video/SkaPYZOKPQg/v-deo.html
- Scroll Shmup: ua-cam.com/video/l9bIYkZepPo/v-deo.html
- Arena Shmup: ua-cam.com/video/VKjiuq437t0/v-deo.html
- 3D Polygon: ua-cam.com/video/0Qq_euAMP48/v-deo.html
- GW-BASIC: ua-cam.com/video/QMQJ7o8e-GI/v-deo.html
WEBSITES
- GitHub: github.com/JDoucette
- Blog: thefirstpixel.com
- Studio: xona.com
MUSIC
The Voyage by Audionautix
Creative Commons Attribution 4.0 license
creativecommons.org/licenses/by/4.0
Views: 79
Videos
Nvidia GeForce 4090, Restricted Chaos Game Cube, 100 Million points
Views: 199 · 1 day ago
2024-12-17. Playing the "Restricted Chaos Game" using a cube of 8 vertices, disallowing the same random vertex to be chosen twice. The result is this incredible fractal. CHAOS GAME The chaos game involves starting with a point, and then moving partway to one of the random vertices. In this case, we move 50% of the way. Repeat ad infinitum. RESTRICTED CHAOS GAME One restriction was imposed for t...
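The rule described above (move halfway toward a random cube vertex, never the same vertex twice in a row) can be sketched like this; the vertex layout, start point, and seed are arbitrary:

```python
import random

def restricted_chaos_game(n, fraction=0.5, seed=1):
    """Chaos game on the 8 cube vertices, disallowing the same
    random vertex from being chosen twice in a row."""
    rng = random.Random(seed)
    verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    p = (0.5, 0.5, 0.5)
    last = None
    pts = []
    for _ in range(n):
        i = rng.randrange(len(verts))
        while i == last:  # the "restriction": re-roll repeats
            i = rng.randrange(len(verts))
        last = i
        v = verts[i]
        p = tuple(p[k] + (v[k] - p[k]) * fraction for k in range(3))
        pts.append(p)
    return pts
```

Without the restriction this fills the whole cube uniformly; forbidding repeats is what carves out the fractal structure.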
Nvidia GeForce 4090, 3D Sierpiński Carpet, 125 Million points
Views: 1.2K · 21 days ago
2024-12-02. 3D Sierpiński Carpet, also known as the Menger Sponge. GENERATION Chaos game is a method of fractal generation that creates order from chaos. You plot a 3D dot based on the position of the last plotted dot. Given a cube defined by 8 corners, pick one of the corners, or edges, at random. There are 20 total positions. Move 1/3rd the way to the corner. That's it. FRACTAL This process c...
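A sketch of that generation. It assumes the standard Menger-sponge IFS contraction ratio of 1/3 about the chosen point (in "move toward" terms, that is moving 2/3 of the distance; the description's "move 1/3rd the way" phrase is read as keeping 1/3 of the distance):

```python
import random

def menger_chaos_game(n, seed=1):
    """Chaos game for the Menger sponge: 8 cube corners plus 12 edge
    midpoints (20 targets), contracting toward the chosen target."""
    rng = random.Random(seed)
    corners = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    # Edge midpoints: pairs of corners differing in exactly one axis.
    edges = []
    for a in corners:
        for b in corners:
            if a < b and sum(x != y for x, y in zip(a, b)) == 1:
                edges.append(tuple((x + y) / 2 for x, y in zip(a, b)))
    targets = corners + edges  # 8 + 12 = 20 positions
    p = (0.5, 0.5, 0.5)
    pts = []
    for _ in range(n):
        v = rng.choice(targets)
        # Keep 1/3 of the remaining distance to the target.
        p = tuple(v[k] + (p[k] - v[k]) / 3 for k in range(3))
        pts.append(p)
    return pts
```

Face centers are deliberately excluded from the 20 targets, which is what leaves the sponge's square holes empty.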
Nvidia GeForce 4090, Butterfly Effect - Chaos Theory, Strange Attractor
Views: 457 · 1 month ago
2024-11-10. The butterfly effect is the concept that small changes in initial conditions can lead to vastly unpredictable variations in future outcomes. Its origins are from chaos theory. While a system may be deterministic, you cannot predict the outcome. The name arrives from the idea that a butterfly’s wings flapping in one part of the world could set off a chain of events leading to a torna...
Nvidia GeForce 4090, Chaos Theory, Lorenz Strange Attractor, 50 Million Dots
Views: 257 · 2 months ago
2024-10-27. Chaos theory. Solution in the Lorenz attractor using: ρ (rho) = 28, σ (sigma) = 10, and β (beta) = 8/3. 50,000,000 time step iterations, each drawn as a dot, with a time step of 0.0000125. CHAOS THEORY Edward Lorenz attempted to recreate a simulation of weather patterns by modeling 12 variables (temperature, wind speed, etc.). He failed to recreate the simulation due to rounding of...
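Those parameters and time step plug into a simple forward-Euler loop, one dot per iteration; the initial condition here is an arbitrary stand-in:

```python
def lorenz_points(n, dt=0.0000125, rho=28.0, sigma=10.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler, one dot per step."""
    x, y, z = 1.0, 1.0, 1.0  # illustrative initial condition
    pts = []
    for _ in range(n):
        dx = sigma * (y - x)        # dx/dt
        dy = x * (rho - z) - y      # dy/dt
        dz = x * y - beta * z       # dz/dt
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        pts.append((x, y, z))
    return pts
```

With the video's tiny dt of 0.0000125, consecutive dots are so close that the 50,000,000-point trail reads as a continuous curve.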
Nvidia GeForce 4090, Icosahedron Crystal, 125 Million points
Views: 492 · 2 months ago
2024-10-20. The regular icosahedron is one of the Platonic solids. It has 20 faces, each an equilateral triangle, made from 12 vertices. This is a rendering of a point cloud based on these vertices. Each vertex has a unique color that was procedurally generated using evolution theory to find maximum human perception difference. The point cloud is made by planes defined by 3 random vertices, LE...
Nvidia GeForce 4090: Chaos Game: Dodecahedron, 125 Million points
Views: 447 · 2 months ago
2024-10-12. Fractal generation using the Chaos Game algorithm, using a 20-point Dodecahedron as the native control points. CHAOS GAME Chaos Game starts with a random location, and iteratively picks a random control point, and moves towards it. Typically, the motion is 50% of the distance from the current location to the control point. In this case, it moves 70% of the way. Also, an additional c...
#8 Amazing Morphing Asymptotic Sine Graphs
Views: 242 · 4 months ago
August 14, 2024. Rendering at 1920x1080, which makes the lines rather hard to see, even with the upscaling to 4K. Next time, I'll stick with larger pixels, or try rendering thicker lines (which is non-trivial to make look "right" in all directions). I was in the middle of creating a bunch of interesting sine wave graphs along with other trigonometric functions, and found a video: "Sine graphs but th...
Wall Running, Map View, DOOM + DOOM II, E1M2 (Rerelease, Aug 9, 2024)
Views: 133 · 4 months ago
Aug 9, 2024. Another example of the glitch that allows wall-running in DOOM + DOOM II for the PC Steam version I downloaded August 9, 2024. Showcased is Doom 1, E1M2. It is especially noticeable in overhead map view. Auto-run is turned on by default, and I am not executing a run input. With auto-run, you become accustomed to this running speed. You will then notice that you can "wall run" even fa...
Wall Running in DOOM + DOOM II, E1M9 (Rerelease, Aug 9, 2024)
Views: 157 · 4 months ago
Aug 9, 2024. I found a glitch in the physics engine that allows wall-running in DOOM + DOOM II, in the PC Steam version I downloaded August 9, 2024. I am playing Doom 1, E1M9 (Episode 1, Mission 9, the hidden level whose exit is found in E1M3). The game has auto-run by default, which is nice. Though game purists will notice that it's for power players only (most of us today, since ...
#7 Amazing Morphing Wave Functions: Sine, Grid, 3D, Perspective, Rotate
Views: 1.1K · 4 months ago
August 3, 2024. This video showcases the same equations as the prior, except the result is not rendered as a graph where the solution is zero. Instead, the actual result is converted into a prismatic color spectrum approximately covering all values from -1 to 1, where it fades to black beyond that range. While it's harder to see the beauty in the equation solutions, it does show a nice color gr...
#6 Amazing Morphing Equations: 3D Perspective, Grid, Rotate, Complex Sine
Views: 560 · 4 months ago
July 28, 2024. Showcasing an array of fascinating equations, each morphing into the next. I lined up the equations to be mostly incremental changes from the prior, to produce a flow and also make interesting morphing patterns. The finale is my invention of the 3D formula which makes all 3D graphics on a 2D screen possible -- the Holy Grail of 3D graphics changed into a 2D equation. Many of the equation...
Nvidia GeForce 4090, X-Fractal (Vicsek), 100 Million points
Views: 428 · 6 months ago
2024-06-16. Nvidia GeForce RTX 4090 Laptop GPU running on my Alienware m18 R2, to showcase the number of dots it can draw in real-time -- a classic demoscene effect from the 1990s VGA days. MILLIONS OF DOTS 105,000,000 dots uploaded to the GPU with a vertex buffer, colors included. (I know bandwidth would be better if I ignored the color, but then it's not very attractive and harder to discern.) I only up...
Pseudo 3D Road - VGA - Full Tilt - play-through, 2 crashes
Views: 725 · 8 months ago
Jason playing through the game in full using DOSBox. UPDATED in MonoGame (XNA) in 2021: youtu.be/watch?v=ck5ALX11YU4&list=PLjnbT4UISq0bnfd1RC3M4PgTgkmhlkikV My playlists: - Voxel: youtu.be/watch?v=XCVWEuhCCDM&list=PLjnbT4UISq0bQF1g85tE9jTrKfEtdRYlY - Road: youtu.be/watch?v=ck5ALX11YU4&list=PLjnbT4UISq0bnfd1RC3M4PgTgkmhlkikV - Ray Casting 3D: youtu.be/watch?v=zjswXUTMP2o&list=PLjnbT4UISq0YcFtRFj...
Pseudo 3D Road - VGA - Full Tilt - engine test #2
Views: 451 · 8 months ago
Testing out the Full Tilt engine. UPDATED in MonoGame (XNA) in 2021: youtu.be/watch?v=ck5ALX11YU4&list=PLjnbT4UISq0bnfd1RC3M4PgTgkmhlkikV My playlists: - Voxel: youtu.be/watch?v=XCVWEuhCCDM&list=PLjnbT4UISq0bQF1g85tE9jTrKfEtdRYlY - Road: youtu.be/watch?v=ck5ALX11YU4&list=PLjnbT4UISq0bnfd1RC3M4PgTgkmhlkikV - Ray Casting 3D: youtu.be/watch?v=zjswXUTMP2o&list=PLjnbT4UISq0YcFtRFjFQqK0g6ONNCtrvY - S...
Pseudo 3D Road - VGA - Full Tilt - engine test #1
Views: 474 · 8 months ago
Pseudo 3D Road - VGA - Full Tilt - engine test #1
Game Dev Engine #16. Elementary Cellular Automaton.
Views: 588 · 8 months ago
Game Dev Engine #16. Elementary Cellular Automaton.
Game Dev Engine #15. Munch Man. Sprite Scalar.
Views: 605 · 9 months ago
Game Dev Engine #15. Munch Man. Sprite Scalar.
Game Dev Engine #14. X-Fractal Recursion.
Views: 701 · 1 year ago
Game Dev Engine #14. X-Fractal Recursion.
Game Dev Engine #13. Fractal: Iterated Function System.
Views: 621 · 1 year ago
Game Dev Engine #13. Fractal: Iterated Function System.
Game Dev Engine #12. Order from Chaos: Sierpiński Triangle.
Views: 696 · 1 year ago
Game Dev Engine #12. Order from Chaos: Sierpiński Triangle.
Game Dev Engine #10. Dynamic Pixel Zoom.
Views: 889 · 1 year ago
Game Dev Engine #10. Dynamic Pixel Zoom.
Game Dev Engine #4. Window Refresh Bug.
Views: 804 · 1 year ago
Game Dev Engine #4. Window Refresh Bug.
Game Dev Engine #3. Window Input System.
Views: 816 · 1 year ago
Game Dev Engine #3. Window Input System.
really good!!!
Thank you! Very glad you enjoyed. I have another one coming soon. :)
❤
How do you make the camera follow the spline?
I compute the entire point cloud up front. The circle is an offset from the main spline, so even though the foundation spline is not part of the display, I store it in an array of points, and access it with basic linear interpolation. There are so many points that it's already a smooth curve; I could probably even round off to the nearest point. The camera follows this. I have a function to produce a camera view matrix from a given camera position and target position. I just update the camera position, and set the target to be one step ahead, and compute a new view matrix each time. It assumes no roll, though it may be neat if it tilted to go around turns.
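The view-matrix construction described here (a camera position plus a target one step ahead along the spline, with no roll) is the standard look-at form; a self-contained sketch, with an assumed fixed world-up vector:

```python
import math

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Right-handed look-at view matrix (list of rows), assuming no roll."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a):
        l = math.sqrt(dot(a, a))
        return tuple(x / l for x in a)
    f = norm(sub(target, eye))  # forward: toward the point one step ahead
    s = norm(cross(f, up))      # right
    u = cross(s, f)             # true up (orthogonal; no roll)
    return [
        [s[0], s[1], s[2], -dot(s, eye)],
        [u[0], u[1], u[2], -dot(u, eye)],
        [-f[0], -f[1], -f[2], dot(f, eye)],
        [0.0, 0.0, 0.0, 1.0],
    ]
```

Each frame, only eye and target move along the stored spline points; the matrix is simply rebuilt.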
@@JDoucette Oh, I see. Both the camera and the point cloud are based off this main spline.
Nice one! This is quite a departure.
Thanks Vinny. It's actually still a point cloud, just made with parameterization to make a 3D spline, where the camera automatically follows the spline, and the total size is enlarged to be larger than the view frustum (it fades out to black quickly, and you can only see the local points). I stumbled upon this solution iteratively when trying to improve the visuals of the entire spline (as I do with the other point cloud fractals) -- as I solved each concern, it got closer to this, including the neon glow, the additive rendering, and even the octaves to give it personality!
Superbe
Thank you my friend. Heh, it wants to translate Superbe into Stunning. I'll take both! :)
Beautiful
Thank you. Glad you enjoyed.
❤
Crazy that 1. Displaying 100,000,000 points at a silky smooth framerate in real time on a *mobile* GPU! 2. At that density it looks solid. ;-)
I agree! I wonder if I can speed it up if I remove the color from the vertex data. Currently it has 4-byte color and 12-byte (3 x 4-byte) position. It would improve the bandwidth. I know I could implement view frustum culling and level of detail, but I was hoping to not convert a dot-drawer into a full-fledged 3D engine. :P
The density! At 100,000,000 points in 1920 x 1080 resolution, which is only about 2,000,000 pixels, that's an average overdraw of about 50x per pixel. The video compression ruins individual pixels, even when upscaled to 4K. So each pixel is post-processed enlarged to be a 5x5 circle (3:5:5:5:3 shape), which helps fill the density as well -- though even with single pixels, it would appear solid at a distance.
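As a quick check of that ratio (the ~50x figure above is this, rounded up):

```python
points = 100_000_000
screen_pixels = 1920 * 1080          # 2,073,600 pixels
overdraw = points / screen_pixels    # about 48x average overdraw per pixel
```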
@@JDoucette Not to be rude but aren't points one of the easiest things to render? People have gotten away with much more using voxels. Something about manipulating texture maps.
@durs_co Not rude at all! They sure are easy to render. Have some kind of computation to produce them (restricted chaos game, in this video's example), put them into a vertex buffer (color and position, in this example), upload to the GPU, and tell it to render as points (not lines or polygons). The GPU even depth sorts it for you.
@@durs_co I do a little bit more. I only upload the points once, and then just update the world, view, projection matrix for the perspective transform to 3D perspective every frame -- just to see how many I can render each frame. Then there's a lot of post-processing to make it visible in a video recording, as single pixels don't show well -- this includes generating my own depth buffer, then post-processing the color & depth buffer to get what you see in this video.
Can buildings be made in voxel space engine?
There are three ways I have considered: 1. with sprites, as you see here. 2. with voxels, caveat being that a single voxel being the height of a building wouldn't allow any detail (being just a single color), which could be resolved with adding "voxel detail" to voxels that extend higher than a single voxel. 3. with voxels, via more height maps, allowing overhangs, and caves. The number of height maps is unlimited, but increases rendering time, and the number of indents is limited by the number of height maps. Perhaps they could be dynamic?
❤
Beautiful! It reminded me of a video here called "Shadertoy Ray Marching Fractals | 'Alien Structures' by Chris Webb." I didn’t post the link because it might get banned. Have you seen it?
I just looked it up! Thanks for the example. I've seen many ray marchers on ShaderToy that are fractals. My engine was just supposed to see how many dots I could draw for the purposes of showing off GPU performance for silly things (though each dot is a vertex, so there is serious horsepower going on in the background), and also for point clouds (though video compression makes that challenging to show off). However, 125,000,000 dots is 60x the number of screen pixels -- so you can imagine putting all of that work into each screen pixel instead. This is basically what a ray marcher is doing. Then you can hack in all sorts of things, like glows, shadow effects, etc. It's amazing what they are doing on ShaderToy. I have not tried it myself yet.
@@JDoucette I knew about shaders, but I wasn't aware of ShaderToy. I'll have to check it out-it seems pretty fun!
@@adrikriptok7225 You'll lose a day looking at shadertoy. Make sure your browser is using your main powerful GPU and not the integrated chip that is usually slower. You'll see what kind of processing power some of these shaders require.
> Dot enlargement size does not vary according to distance -- that implementation is coming soon. I like that in the current implementation, it looks like a solid object from afar, but reveals itself to be mostly empty space up close. Might be good for a game of cat and mouse where you don't know what areas can be traversed till you get closer? Anyway, really cool demo!
The GPU can render enough dots to make it appear solid even if not enlarged. It's quite amazing. There are some mathematical structures (3D grid of dots) that look great due to Moiré patterns from various solid & space mixtures. But video compression ruins it, so I had to enlarge the pixels. I guess the original dots were all 1x1 sized, so this current implementation is fine too.
Yeah -- the feeling of solid, then feeling of empty space -- sort of reminds me of atoms. I have a plan to render the electron probability cloud around various energy states. MinutePhysics has a great video on this called "A Better Way To Picture Atoms", but mine would be different: no motion, but perhaps travelling through the cloud. If I adjust the pixel enlarge size according to distance, and used the depth to determine visual priority, then I could re-introduce the translucent pixels again, to give the feeling of a point cloud fog, and not have distant dots overload the view of a single nearby dot (my recent videos show this artifact).
really great video keep it up!
@@monke2220 Thank you! I have lots of ideas for this style of data visualization.
Very cool demonstration.
Thanks! I have lots more ways to show off these strange attractors... I have to figure out what would be best for visualization and interest.
Great work.
Thanks for the kind words.
❤
lol you inspired me to make a voxel space engine of my own
Yes!! Please start one and share! I'd love to see your progress! :) Do you have any other graphics projects?
Wow, I was browsing old YouTube (before 2007) and I found this; it's the most impressive thing I've found yet :)
That's awesome! Thanks for the compliment, and welcome to yesteryear! I think I uploaded this not too long after UA-cam was a thing. I guess you were looking to see some of the first videos ever uploaded... crazy thing is that this tech is from 1995, so even 2006, when I uploaded it, was a decade later.
thank you for the asteroid example, really put the idea into perspective, crazy!
Glad you enjoyed it. I think it's the easiest example to consider. Once you grok that, you can start to consider more complex examples. I still think the butterfly example is far fetched, since it requires many, many "just at the edge of impact" cases -- like those comedy movies where one funny act leads to another, leading to another -- all of which are "winning the lottery" chances.
A reasonably complex case is our Solar System: the planets are chaotic. It's possible that some of them, and some of the moons, including our own, may be ejected from the Solar System even before the Sun runs out of energy. It's only known to be stable for at least 100 million years -- but not necessarily for the remaining 5 billion years of the Sun's life.
Very very cool visualization.
I will try another one with shorter particle lengths. I could probably show more particles and have it still be readable.
I'd like to also show what happens if the particles start in any area of the 3D cube -- though it may be harder to view. The parameters themselves - sigma, rho, beta - could change, which may be interesting to explore.
These are pretty great, Jason. Does it look interesting for the particles to leave shorter trails, or does it make it hard to appreciate the resulting pattern?
That's good insight, Vince. I suspect it will look nicer when shorter. They started very long, and I continually shortened them. But I didn't go shorter until I felt it was too far -- so I will try that in the next video.
🤯
amazing engine
Thank you. This terrain is actually my favourite out of the whole bunch for showcasing.
Better keep a close eye on textbook publishers to make sure they don't steal your video to use as cover art.
Ha ha, thank you my friend, I will keep an eye out!
Nice work. ❤
Nice work! ❤
Wow, this is amazing. You can explore these "formulas" and I have not seen anything like this before.
I know -- I randomly came across a Veritasium video on the butterfly effect, and it shows this strange attractor, and even explains it quite quickly (a 2D slice of fluid dynamics, where 3 variables are being tracked, which you can plot in 3D), and how it came about it (trying to replicate a simulation, which failed to replicate, so he thought the computer was broken, and then simplified the math up until this simple 2D slice while trying to find the "bug" -- having found chaos theory instead). Anyways, the video shows a few angles, and even explains a few variations well. But I've never seen anyone show the equations in detail, up close, like this before.
LOVE the showing of the 3 principal axes as planes with dots! Q. Is there a reason one of the plane axes isn't fully showing? i.e. Right side @2:55
@MichaelPohoreski Thanks! It gives far more context, especially when spinning and zooming around an abstract (though based on physical) graph. A. Are you watching in low resolution? The planes are always visible for me. I only draw 3 of the 6 sided cube. I suppose I could draw all 6?
@MichaelPohoreski Or I could draw the planes and axes where they actually exist... the ranges of X, Y, and Z are between -20..20, -30..30, and 0..50 respectively.
@@JDoucette Nope, 4K. Ah, it is just a bad perspective. I rewound a few seconds and counted the actual dots. They are all there. False alarm, user error. =P
@@JDoucette Not sure if drawing all 6 edges of the cube would help? I don't believe so. Worth a shot to test if it adds or subtracts from the main image.
@@MichaelPohoreski I bet I know what happened re: bad perspective and dot counting. I have the FOV very wide. When you have even a large FOV like Doom 1993 at 90°, the dot pattern of squares will make an alignment at 45°, which is not normally seen, so you're not used to processing it. Go wider FOV, and more patterns arise. It's similar to the Golden Ratio being the most irrational number, and the "Tree Gaps and Orchard Problem" (search Numberphile for a great video). I think too many of these patterns at such a wide FOV is hard to process when you're not used to them, and also had no context that the FOV was wide to begin with, so you didn't expect to see them.
This could be the next 3Blue1Brown ... just need to add some explanation in a follow up video ;-)
Well now. That's quite the compliment!! Does this mean I have to actually go out and learn what this is actually doing? :P I have a detailed description heavily based on other sources, which I reworded to suit my own understanding, but found that I had to leave it largely as-is. I think I need to make a few more of these simulations, and read a bit more, to truly grok it.
For this simulation -- just as 3B1B would likely dive into explanations -- I would love to show the limits and how this works. Starting from different parameters, and seeing which ones are chaotic, and which ones settle down. I am not sure how to convey that much information -- but it could likely be done. Perhaps even showing only those that become chaotic.
This demo shows 50,000,000 dots. If you pause at the right time, you can see how close they are. I'm not even drawing lines between the dots! This could be rendered as lines with 1/100th or 1/1,000th the amount of vertices and still have the same fidelity. So I could really draw a lot of content at the same time.
@@JDoucette Yes. :-)
Nice! I subscribed because of my brief interest in old racing games, and now here you are, sharing my long-time interest in math and physics.
I will return to the retro pixel pseudo 3D racing game engine eventually! :) I am happy you are enjoying these videos as well!
Beautiful!
Thank you!
This is awesome.
I have a bunch of additional ideas for this same shape. Using a Sierpiński triangle between the 3 nodes for each plane. Or using the point cloud I have here, but biasing it towards the edges of the triangles in each plane. That could be done in 3D, not just planes, as well, with 4 points per sub-object.
It's very pretty
Why thank you! :)
Absolutely beautiful! Ignoring the Z-buffer is a nice touch. Reminds me of Monte Carlo approximations.
Thank you! I felt it is Monte Carlo to some degree, since the triangular slices through the solid are not rendered as planes, but as a point cloud of possibilities. Though there is no progression of each point to the next -- every point is independently generated -- so it may not fit the true definition.
Make this a screensaver
That, my friend, is a great idea. This kind of road scenery can be generated forever. The artwork, not so much, so that would repeat a lot. But the road twists and turns could last hours even manually created, not even procedurally.
POINT DEPTH SORTING: I saw a comment (since deleted), so I am sure others had similar thoughts: "... there is a point depth sorting issue ... points that should be in front seem to be displayed in back ... could just be a YouTube video compression issue." Not YouTube's fault! I am rendering them this way, as single pixels, depth-sorted. Then a post-processor enlarges them into circles. So what happens to depth sorting? The depth data is not present in the rendered texture, so it just combines the colors. Points grouped together in the distance are more dense (on the 2D display), thus consuming more of the color combination -- thus points farther away seem more "powerful" in visibility. The effect is kind of cool, so I wanted to capture it before I do away with it. I plan to add depth-buffer storage to use during post-processing in my next video. :)
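The planned depth-aware enlargement could be sketched as follows. This is a minimal illustrative sketch, not the actual renderer: it assumes each rendered pixel keeps a (color, depth) pair, and resolves overlaps during the circle-splat by keeping the nearest depth.

```python
# Sketch: enlarge single-pixel points into discs while respecting depth.
# Assumes each rendered pixel stores (color, depth); all names illustrative.

W, H, R = 8, 8, 1  # tiny buffer and splat radius, just for illustration

# framebuffer: None = empty, else (color, depth); smaller depth = closer
fb = [[None] * W for _ in range(H)]
fb[4][4] = ("near", 1.0)
fb[4][5] = ("far", 9.0)

out = [[None] * W for _ in range(H)]
for y in range(H):
    for x in range(W):
        px = fb[y][x]
        if px is None:
            continue
        color, depth = px
        # splat a (2R+1) x (2R+1) square disc; keep the closest sample
        for dy in range(-R, R + 1):
            for dx in range(-R, R + 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    if out[ny][nx] is None or depth < out[ny][nx][1]:
                        out[ny][nx] = (color, depth)

# where the two enlarged points overlap, the near point now wins,
# instead of the colors simply being combined
```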
where do I go to try it out?
I don't have it available as a demo. I've considered releasing the source code, but it's not a user friendly app at this time, and it's still in development. I wonder if I could release it on the web. It would need some work to be usable.
Cool! Thanks for the detailed description too, very interesting technically!
Thanks! The classic chaos game example is the Sierpiński triangle, which is 2D, 3 control points, and you move halfway to the randomly chosen point each iteration. It's a surprise to get beautiful order from chaos. But once you think about it, it makes sense: if you can peer into the future by starting with the entire answer/result (Sierpiński triangle), take all points, and then move them all halfway to any of the 3 source points, you can imagine that you'll end up with the 3 sections that build the whole. It's not so simple to imagine once you start messing with the parameters and adding restrictions. :)
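The classic rule described here is tiny to sketch in Python (the starting point and iteration count are arbitrary choices):

```python
import random

# Chaos game sketch: pick a random corner, move halfway toward it, repeat.
random.seed(1)
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
x, y = 0.25, 0.25          # any starting point inside the triangle works
points = []
for i in range(10000):
    cx, cy = random.choice(corners)
    x, y = (x + cx) / 2, (y + cy) / 2
    if i > 20:             # skip the first few iterations (transient)
        points.append((x, y))

# `points` now traces the Sierpinski triangle; the central inverted
# triangle (the "hole") receives no points
```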
im no expert but dig the technique/look, i love the flower fields
Thanks for the note. I appreciate that. And this is with my terrible pixel art drawings (first time drawing flowers!) and all of the flickering aliasing (I want to add mip-maps, and control when they kick in, but the GPU seems to want to handle that itself). So it can be much, much improved. In any case, I agree with the sentiment -- these infinite fields of flowers are amazing, and I've wanted to see it done ever since Out Run pulled it off back in 1986 in the arcades...
@@JDoucette are you talking about the grainyness of the flowers in the background when they come/move forward with the image? I guess that could be nicer but it doesn't matter, the overal crispyness and smoothness/speed of the image coupled with the vast sea of flowers is what gives it charm
@@tnmrvc Yes, that's exactly what I meant. I also think I could "fill" the voids more on very steep hills (where you can see gaps in the flower lines). But, agreed, this is nit-picking, and the general aesthetic is what I love as well. I really have got to make a game out of this... :)
I still think the pseudo 3d effect looks DECENT even today personally.
Yes, for sure. I think pixel density has to be respected to really pull it off, but sprites could look flat even in 320x200 resolution, so the way it's presented matters a lot.
Is there a top level? Does the game have an end?
There is no top level. I played the game until the score wrapped, and noticed I was getting more 1UPs than I was losing, so I considered the game beaten. If you search "Xona Parsec" you'll find a great page of amazing facts about this game. This caused fans from around the world to write in with details I didn't know. See the section at the bottom "Fan Feedback: Parsec Cannot Be Wrapped Indefinitely?" -- there appear to be two reasons for a Kill Screen: 1. sprite automotion using a 1-byte -128..+127 value causes the speed to wrap so enemies fly in the wrong direction, and 2. the larger enemies appear closer and closer on each level, so perhaps eventually they are too close (though you can position your fighter in front of them before they appear, but that's unwieldy due to the slow acceleration of the starship).
Someone should make a ROM hack that allows you to start on any level, and see what happens. I suspect all data is stored in bytes, except for the score (though the last two digits are always 0, so they are not stored). This would mean the levels will wrap after 0..255, or there would be some oddity if it were a signed byte, -128..+127. Either way, it should reach 0 then 1 again. Outside the unlikelihood of surviving enemies that fly across the screen too quickly, I wonder if there are any actual game-crashing bugs for a true kill screen.
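The signed-byte wraparound behind both hypotheses is easy to sketch. This assumes the speed really is a two's-complement byte that keeps incrementing (an assumption about the original ROM, not verified against it):

```python
def to_signed_byte(n):
    """Interpret an integer as an 8-bit two's-complement value (-128..127)."""
    b = n & 0xFF
    return b - 256 if b > 127 else b

# If the level logic keeps adding speed, a signed byte eventually wraps:
speed = 120
for _ in range(3):
    speed = to_signed_byte(speed + 5)
# 120 -> 125 -> -126 (wrap!) -> -121: the enemy now flies the wrong way
```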
One thing you could do with these is try weighting the different tiles; giving them similar frequencies to the rate they show up in the original might make the result look closer to the original, compared to how examples like at 7:27 and 9:12 tend to get dominated by the crisscrossing lines and have very little blank space.
Thanks for the idea. I actually have this coded already, several months ago, but not showcased due to the artifacts it causes ... but I should showcase it anyway, just to show off what happens in a trivial attempt to maintain density. The attempt to reach the desired density causes the natural flow of art-generation "spreading" to mix with anxiety (yes, I'm going full A.I. in my description) that it may miss the mark, so it immediately rectifies it. This is bad. It causes a lot of activity mapped to the dynamic growth of the art -- what does that mean? Imagine that the resultant image's density hasn't been decided yet (because it hasn't), and it only has the simple view of the next few pixels (because that's how it works), so it tries to solve any discrepancies with those pixels -- right then and there. This means as soon as you get too much white, the very next pixel has to be black -- it just has to be (unless blocked by lack of a pattern, but that only stops it for a few pixels). Now finally we have enough black. Oops, too much black! Now the next pixel has to be white. It just has to be. Back and forth. If algorithms could feel, this one would be looking for a vacation.
@@JDoucette That sounds like you’re weighting things in terms of the total number of them generated. You don’t need to do that; you can just have the chances of each one being picked during a collapse be weighted by its original frequency. For example, if there are twice as many blocks of feature A as feature B in the source, and a collapse can be either A or B, pick A 2/3 of the time and B 1/3 (like by putting two As and a B in a group and picking one at random). That should more or less get the distribution you need without falling afoul of the gambler’s fallacy. It may get a little messier with the various orientations a tile can be in (where different tiles can map to one feature), but it looks like you have that handled already.
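The frequency-weighted pick suggested here can be sketched in a few lines; `source_counts` and the tile names are illustrative stand-ins for tallies taken from the real source image:

```python
import random
from collections import Counter

# Sketch: weight each allowed tile by its frequency in the source image.
source_counts = {"A": 200, "B": 100, "C": 50}   # tallied from the source

def pick_tile(allowed, rng):
    """Pick among the currently-allowed tiles, weighted by source frequency."""
    weights = [source_counts[t] for t in allowed]
    return rng.choices(allowed, weights=weights, k=1)[0]

rng = random.Random(42)
picks = Counter(pick_tile(["A", "B"], rng) for _ in range(30000))
# A should be chosen roughly twice as often as B, matching the source,
# without forcing any running tally toward the target (no gambler's fallacy)
```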
@@KnakuanaRka Yes, you are quite right -- me trying to meet the goal, but trying to meet it too quickly, is the problem with my algorithm. Your suggestion on the chances of being picked is nice, because it's more like true randomness, vs. even forced distribution -- which are surprisingly different. However, this also has issues -- since it's a dynamic system, it can get into modes where there are simply no other choices (or few choices) ... so you just never really get a chance to choose the colors you need. It's like a 6-sided dice, for the 6 colors you expect to see, each 1/6th of the time: suddenly a 3 is rolled, which only allows a 4 or 5 from there. Even if 4 and 5 both then allow all 6 colors, that 3 will cause you to get the 4 and 5 more often than the rest. Yet, I still think this may be better than my original algorithm. I will make a note of this to try.
@@KnakuanaRka Following my thoughts above -- I think I can get out of this "oh no, I have to pick 4 or 5 AGAIN" mess by another means: The choice of the location of next pixels I should draw to. There are 2 steps: 1. pick the spot that has the least options (sort of like, "oh crap, choices are limited here, let's settle it before something else happens and reduces the choices to zero"), and 2. decide what pattern to collapse to in that spot. We were focused on step 2 in our discussion. How about focusing on step 1 as well: I don't HAVE to pick the worst case (least option location). I could pick from the bottom 10% of least option locations... then poke around at a few of them, which opens the options up, and then do your "pick A 2/3 of the time and B 1/3" with much more freedom.
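Step 1 above -- relaxing "always collapse the single most-constrained spot" into "pick among the bottom fraction" -- might look like this minimal sketch. The `options` map (cell -> remaining pattern count) is an illustrative stand-in for the real constraint data, and the tiny example uses a 30% fraction so more than one candidate survives:

```python
import random

# Sketch: instead of always collapsing the single least-options cell,
# pick randomly among the bottom fraction of undecided cells.

def pick_cell(options, rng, fraction=0.10):
    """Choose a cell to collapse from the most-constrained `fraction`."""
    undecided = [c for c, n in options.items() if n > 1]
    undecided.sort(key=lambda c: options[c])        # fewest options first
    k = max(1, int(len(undecided) * fraction))
    return rng.choice(undecided[:k])

# 10 cells with illustrative remaining-option counts
options = {(x, 0): n for x, n in enumerate([5, 2, 9, 2, 7, 3, 8, 2, 6, 4])}
rng = random.Random(0)
cell = pick_cell(options, rng, fraction=0.30)
# the chosen cell is one of the three most constrained (2 options each)
```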
@@JDoucette Yeah, sounds like that might help in terms of ensuring you can get closer to the wanted proportions. Or weight them depending on the amount generated like you said, but make it a smaller factor combined with the expected proportions I mentioned (to make it less sensitive), and see if you can find different ways to measure the proportions (like see what the difference in proportions would be compared to expected if all the remaining features were filled in proportionally to expected) so that it doesn't veer hard to one side over a tiny difference.
This video really inspired me about a week ago right before i should've gone to bed, I'm glad i stayed up though as i drew some sprites, which eventually turned into a new breakout game on my website, with a level editor and a bunch of level packs! written in rust of course haha. thank you very much for the inspiration. i'm gonna record a quick little showcase video and put it on my channel now
One of the coolest things about making games/demos and sharing is seeing the inspiration it creates for other people. Thanks for letting me know about it, since it really adds value to the work I do.
I just checked out your video (I'll post a comment there shortly) and your game demo. That is so cool. So yeah, you made enough of a GUI to get things moving! :) -- very nice that you have an editor. Most indies, including myself, don't take this step first. The sprites and graphics are very reminiscent of my own minimalistic single-color pixel-art game. I'm glad that I am not alone in liking this very minimal art style -- as I've been wanting to make some more demos out of my Xona System 8 engine like this.
@@JDoucette I'm really glad you liked it! inspiration is a great thing and I'm very glad you could inspire me to make it. I was very inspired by many things from your game as they were just such good ideas, like the powerups and general overall aesthetic. im really excited to make more projects, maybe a nice gui system one day! haha
@@jumbledfox2098 Apologies, have been busy. A few things stand out that make me happy about this. One is that this very simple art style actually has promise, which means I can throw together quick demos that focus on gameplay and see if they are fun, while literally making 1 color art. Second, is that the power up systems here are really just a focus on gameplay and cool-factor -- again possible since the game comes together quickly with a simple backend engine and no real art. Working from first principles, you can make even a pong / breakout / arkanoid clone into a fun game. You may not have noticed, but the ball speed always gradually increases, even if you get the slowdown. This may also be a fun 2-player game.
@@jumbledfox2098 And yes, make that GUI !!
We could make a demo out of this. I could do the music.
A demo based on amazing transitions.
I agree. Demos were always about interesting effects and the transitions between them. And obviously with the music.
@@Xonatron It reminds me of the blog post "The importance of transitions" by Shawn Hargreaves -- where he quotes Johnny Christmas -- "the secret to making a game feel polished and professional lies in the transitions".
@@JDoucette 100%.
out of curiosity, how do you transition between each graph? the effect is really neat
Thank you. You may want to look at my other videos for similar examples. I morph using a smooth-step interpolation (similar to a sine wave) between the two graphs. Each graph is weighted based on the interpolation. If I clicked through more graphs before the morph is finished, it also morphs those graphs at the same time (I do this at the very end, where I run through 4 equations at once to return to the 3D graph before fade out -- which gives the very strange morph effect where you can still sense the y=x line from 3 or 4 equations back). The individual equations are multiplied by the percentage morph value (0..1) and added. I don't even ensure the sum of the weights is 1 (it always is between 2 graphs, but not if a 3rd or 4th is involved), since math is math, and it naturally has... solutions. :)
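The weighted smooth-step morph described above can be sketched like this; the two graph functions are illustrative stand-ins, and the two-graph case shown is the one where the weights naturally sum to 1:

```python
# Sketch: morph between graphs with smoothstep-weighted blending.

def smoothstep(t):
    """Ease-in/ease-out curve on 0..1, similar in shape to a sine segment."""
    t = max(0.0, min(1.0, t))
    return t * t * (3 - 2 * t)

def f_line(x):   return x        # illustrative graph 1: y = x
def f_square(x): return x * x    # illustrative graph 2: y = x^2

def morph(x, t):
    """Blend the two graphs; t runs 0 -> 1 over the transition."""
    w = smoothstep(t)
    return (1 - w) * f_line(x) + w * f_square(x)

# at t=0 it is purely y=x, at t=1 purely y=x^2, and halfway the average
```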
Great stuff!
Thanks Vinny! Just trying things out! :)
it should be possible to port the renderer to the gpu, right ? that way you'll probably be able to crank up the resolution of these graphs to at least 1080p (even tho the pixelated look has a charm to it)
Indeed, you are correct sir. I have been pondering that. This video is 1920 x 1080, whereas my others are not (because I love the charm of the pixelated look, and also because it helps with video compression). Thus, this was a test. I upscale to 4K (pixel perfect) to help with compression artifacts.
GPU would help in numerous ways: even a 32-core CPU beast is no match for thousands of GPU cores. It could render higher resolutions, render the lines more accurately (more sample points to avoid missing a solution), and render the lines more aesthetically (more sample points to thicken the line, and to do so in all directions). It has one issue: it's not as abstract as my CPU model, which allows me to render/morph 2+ equations at once. There are workarounds for this obviously, but it sort of has to be hacked in.
@@JDoucette it depends on how you're approaching the algorithm, but it does sound like something that'd be a perfect fit for a fragment shader, it sounds very doable (maybe even a shadertoy for that matter !)
@@dottedboxguy Yes, absolutely! I already have a basic proof of concept running in ShaderToy I made a while back, for one of my older Graph-All videos ("#5 Multi-Core Complex Equation Real-Time Render"). Let's see if I can link to it here: shadertoy.com/view/cscXRr It is under my account, JasonD.
It has the IDKFA soundtrack.
@Xonatron You can choose between the original and this remix. It's pretty cool. Every song is remixed making it interesting to play through all of the levels again.
So yeah. I found this and thought it was a bug in the new Doom I + II ReRelease -- but apparently "wall running" is a known thing from the original game, and it's called "wallrunning". Welcome to 1993. I guess there were so many motion-stopping wall glitches that I just never got near them when I played the crap out of this game back in the 90's, so I had no idea.
i've written a few comments on your videos now, i'm excited to see your responses to them haha, but GOD your work is just so inspiring!! the amazing aesthetic, the whole thing, it's just brilliant. it's really making me want to make something similar, so thank you <3 another point - how do you handle window creation and UI elements? i'm used to making immediate mode UIs as, well, they're just so simple, one function called button() or whatever in the main loop and it just works, however they're rather difficult for making nice layouts unless you get clever (which i certainly do not haha). I'm just wondering how each one stores its maze canvas and all that stuff. i'd love to hear more
You probably noticed the aesthetic improving each video -- a simple backdrop to the text to make it stand out, and the shadows on the windows are oh so easy, and recommended by viewers, and it was on my list, but it's just a matter of getting to it. When I make an improvement, it stays for all time; so it's nice to always be improving a little thing here and there, as you get on with the main work.
As mentioned in another video comment, I just wanted to align text and images for presentation, so I made some rectangle struct, and then spaced them out next to each other -- and I thought these should just be windows. Then the stuff inside the window can align itself to its window, and windows can align themselves. It's about ownership. But once you have a window and it can draw itself, then its X,Y coordinate can change. Each window has its own render target (texture) so even the components that draw to the window just believe it's the entire universe. The window manager will draw them, and ask them to update, but each window just does its own thing.
So yeah -- no immediate mode UI. Just a list/array of Windows, and the manager is in the main update/draw loop in the game system. Each frame, components are asked to update, then draw. The window manager is the only thing the game system touches (in this context). It hands off calls for update/draw to each window that was created. Each window has its own render target, so when each window calls its own things (see the example games I've made) to update/draw, like simple Sprites, it does so to the render target already set by the window manager. Thus these little games are just basic sprite movement/draw code, and they have no idea they exist in a window (other than its size info being made visible).
This is also why it was trivial to have windows own other windows. In my case, I did this just like Win32 did, to allow for status bars, title bars, etc. -- since I didn't want each window to have to maintain its own view style, and font. Therefore, you can generate a window that has 2 windows inside of it: the title and the canvas. The games live in the canvas window.
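The ownership model described over these comments can be sketched minimally, with a draw-call list standing in for each window's render target; all class and method names here are illustrative, not the engine's actual API:

```python
# Sketch of the window-manager idea: each window owns its own render
# target, components draw in window-local coordinates, and the manager
# composites targets to screen space. Names are illustrative.

class Window:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.target = []        # stand-in for a per-window render texture
        self.children = []      # windows can own windows (title, canvas)

    def draw_sprite(self, sx, sy, name):
        # components draw relative to the window, as if it were the world
        self.target.append((sx, sy, name))

class WindowManager:
    def __init__(self):
        self.windows = []

    def composite(self):
        """Blit each window's target to screen space at the window's X,Y."""
        screen = []
        for win in self.windows:
            for sx, sy, name in win.target:
                screen.append((win.x + sx, win.y + sy, name))
        return screen

wm = WindowManager()
canvas = Window(100, 50, 320, 200)
wm.windows.append(canvas)
canvas.draw_sprite(10, 20, "ball")  # the game thinks (10, 20) is the world
# the manager places it at (110, 70) on screen; moving the window later
# changes nothing inside the game's own code
```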
@@JDoucette aaah, that's very wise and nicely done! i'm excited to make another UI system now haha