Even if this doesn't work, even if it runs into huge hurdles related to lighting (for example), it is still great to see someone sit down and codify/imagine a new way of solving the 3d graphics problem. Lots of nerds have said, "I only have X pixels... why do you have to work so hard to show my X pixels?" These nerds asked the question and set about solving the problem. Well done.
It's cool going back to old videos making outrageous claims after a decade; it's almost like a case study in learning to notice unsubstantiated hype in the present. Smooth, disinterested talk explaining how everyone else is wrong and this one simple idea will blow everything else out of the water.
Yeah, listening today, I was surprised how obviously hucksterish he came off at times with that 'hard sell' talk, all the artifice of being on a time limit, etc. Granted, I'm 14 years older now, but back then I was genuinely excited and believed this would be a revolution in graphics.
"There was always a problem when you use point cloud data (we say that because voxels have such a bad name, poor little things): if you get close to an object, the points either separate or they turn into 2D rectangles to stay joined together. We have had great success this week with a new system that combines the points based on the points around them and smooths them off, thus keeping a nice, real-looking image rather than blocky pixels."
@GrandSirThebus This is called "culling" and it is already in use with polygons. Essentially, anything that is not visible to the camera is not drawn. A tree behind a wall will not be rendered in the engine unless the player can see over, or through, the wall. The computer checks to see if a specific polygon falls within the viewing range, and then decides whether or not to draw it. It's the same principle applied to a point cloud in this case.
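The culling idea described above can be sketched as a toy view-cone test. This is a rough stand-in for real six-plane frustum culling, and all the names and numbers here are illustrative, not anyone's actual engine code:

```python
import math

def in_view(camera, forward, fov_deg, point):
    """Rough view-cone test: keep a point only if the angle between the
    camera's forward vector and the direction to the point is inside the
    field of view. Real engines test against six frustum planes instead."""
    dx = [p - c for p, c in zip(point, camera)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0:
        return True  # the camera's own position is trivially "visible"
    dot = sum(d * f for d, f in zip(dx, forward)) / dist
    return dot >= math.cos(math.radians(fov_deg / 2))

# Three points: one ahead of the camera, one behind it, one far off to the side.
points = [(0, 0, 5), (0, 0, -5), (10, 0, 1)]
visible = [p for p in points if in_view((0, 0, 0), (0, 0, 1), 90, p)]
```

Only the point straight ahead survives; the other two are skipped before any drawing happens, which is the whole trick.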
I love everything about this video. It just seems like everything about it was a chore. This dude sounds so uninterested, like this was a project his boss gave him and he'd been putting it off for three months.
What impresses me most is that Bruce has game fans but, on paper, has zero interest in gameplay and has never made anything playable. John Gatt recently did an interview about the games in 2015: "Bruce Dell: Yes, the first is an adventure game with a sword, solid-scan forests, and a lot of alien-type monsters. The second is a cute, clay-scanned adventure where you ride giraffes. Can't say more than that, I'm afraid."
I think it was a chore because, as much as I admire Bruce Dell, he has little patience for trolls and naysayers. I get the feeling that Dell is tired of explaining his company's technology to people who don't understand it, yet claim it is a fraud. Dell often comes across as elitist and superior in his vocal intonations, but I believe it comes from his irritation at having to explain what Unlimited Detail/Solid Scan is on numerous occasions. I get the feeling that Dell is also fed up with people not giving his company credit for ending the geometry race. He's been told, indirectly through the media, that his technology is either outdated or impossible. Notch of Minecraft says it's outdated, and John Carmack of id Software says it's impossible on current hardware, which is technically true.

An open-world game like Fallout or The Elder Scrolls would need terabytes of drive space to be stored as 3D points. Or a game company could keep a single copy of the game on a cloud server, and gamers would stream the game to their computers or consoles, but you'd need a fast internet connection to get a frame rate comparable to 1080p or 4K at 60 fps. Euclideon's Geoverse software automatically scales geometric detail to your internet speed, but the model, even if it's low-res, always loads instantly. They've abolished loading, but gamers want hi-res, photo-realistic graphics on every frame, every moment that they're playing. Since I won't have a petabyte hard drive for some time, nor a petabyte optical disc drive for several years (though they are coming), I also don't want my open-world game to play at low-res and then gradually build up to 1080p or 4K as the data streams from Bethesda's servers.

To my understanding, Euclideon is working on collision detection and improving the animation system, which started as skeletal. Euclideon, unless they've hired more people, is a team of nine.
That's it, nine passionate people working to revolutionize the gaming industry. If they were 100 people the gaming industry would have already converted to 3D point cloud games, and graphics cards would optimize voxels instead of polygons. Dell says in another video that their technology would pair well with Atomontage and foresees a future for the two IPs. Lastly, I am most excited when I hear Dell say in another video that a leader in the games industry said something to him like, "we had to build that tree four times!" With Euclideon's technology, you build your tree one time and the software scales it automatically, with no model swapping. Games will be made in months rather than years, since artists will build their objects ONE TIME. Looking forward to the Elder Scrolls VII in about ten years using Euclideon's tech.
Tom Hedlund At the same time, Holoverse looks outdated, there's no interest from the industry or media, nobody wants to work at the small company, and Euclideon has a bad reputation. Congratulations!
Congratulations on what, I don't understand. It reads like sarcasm, so I assume it is, but I fail to see your point in relation to mine. Euclideon is nine people in Brisbane, Australia. It's going to take them years to accomplish what Crytek can do in months. Bruce Dell says that they are programmers, not artists, but knows that artists will use Unlimited Detail/Solid Scan to create amazing environments in a fraction of the time. Holoverse does look outdated, Dell even says that it looks like early World of Warcraft, but again, they are not artists, they're programmers. Everyone is so busy performing fellatio on John Carmack that they forget to give Bruce Dell and Euclideon some affection. Holoverse is trillions of points flying by with no model swapping. Does it look like a cartoon rather than Call of Duty Modern Warfare? Yes, it does. But in the early 2020s Euclideon will have advanced beyond skeletal animation and will have proper collision detection. We'll need petabyte hard drives or optical readers or fast internet connections, but it's all on the horizon. Hey, gaming industry, give John Carmack's penis a rest.
Tom Hedlund You can write a book of text, but you still don't understand why the world doesn't care about Euclideon and carries on as if they didn't exist. No computer or game conferences through all these years have said that Euclideon is doing something important or successful. Why does a bragging company hide in Australia and not hire people when they got millions in grants? So for Euclideon to be successful we just need better hardware and bigger drives tomorrow; yeah, they are so revolutionary.
@TheCubasy I think the idea is that storage space is rapidly becoming so cheap and plentiful that it's not much of an issue. As explained in the video, the engine uses a search algorithm to find and display only one point for each pixel on your screen. My understanding is that the search algorithm can sift through a near-infinite amount of data quickly. It is not unlimited by the technical definition, but in practical terms, if we are no longer limited by polygon count, it is.
This is pathetic. First off, your "unlimited detail" demonstrations have some of the worst frame rates I've ever seen. Second, the main reason games still use polygons is that they are very efficient, and by that I mean they take up less hard drive space. If you were to have "billions" of points per level, 10 levels, and several real-time cutscenes, you would be using up maybe more than a hundred gigabytes of data. I'm pretty sure most people don't have unlimited-space hard drives. Aside from that, voxel technology has existed for quite some time now, so don't say it's new. If you were to have a game level with even a million voxels, it would take up too much RAM for most people. It would have to be loaded directly from the hard drive, which would make loading times unbearable.

In all these videos there are so many instanced models it's just sad. You are obviously missing a core function to actually rotate the model. I'm no expert, but I'm pretty sure armatures don't work on voxels, so you'd have to use polys for that anyway. Good luck making f**king grass animate without using polygons. I could really only see this working for static terrain in the distance, so LOD still looks nice but doesn't need to be high-res. One thing to know about voxels is that they look really bad up close unless there are a lot of them at a high resolution.

Last, stop trying to show people how s**tty polys look when you're showing games from a long time ago. How about you get Halo 5, Infinite Warfare, or any good modern game, and try counting all those polygons. Bet you can't, if your counting skills are as good as your persuasion skills. I know this comment will be deleted, but I just hope that the people who delete it learn something from it. Don't act like nobody knows what you're doing; you just want someone to buy up your company because you'd all get rich.
That's probably why you haven't released your game engine yet: because you're afraid of all your believers learning the truth.
That was this Sega Saturn game called "Amok", a third-person action game where you piloted a mech tank through voxel-based terrain. Given the limited technology at the time, the game looked pretty weak. However, "Amok" showed a middle ground between voxels/points and polygons: voxels/points are extremely good for vast, detailed, often stationary objects like landscapes and buildings, and also for calculating volumes for physical effects like destruction, while polygons are used for character animation.
I really hope this becomes popular. You won't need to buy the most advanced graphics cards ever created, or buy a new one every third year or so. Good luck, guys!
To give you an example: searching for every prime number 8 digits long and containing at least one 7 might be faster if you have a list of all prime numbers (up to 8 digits, at least), but the applicability is limited by the size of that list. While calculating the numbers in question directly might take more processing power, keep in mind that the algorithm to calculate primes is incredibly small, AND that the original list used in the 'search' method had to be calculated too.
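The search-versus-compute trade-off described above fits in a few lines. This toy uses 2-digit primes instead of 8-digit ones so it runs instantly; the shape of the comparison is the same:

```python
def is_prime(n):
    """Tiny trial-division primality test; the whole 'compute' algorithm."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# 'Search' approach: a precomputed table of primes, then filter for a 7.
# Note the table itself had to be computed at some point and takes space.
prime_table = [n for n in range(10, 100) if is_prime(n)]
found_by_search = [p for p in prime_table if '7' in str(p)]

# 'Compute' approach: test candidates directly, no table to store at all.
found_directly = [n for n in range(10, 100) if '7' in str(n) and is_prime(n)]
```

Both routes give the identical answer; the difference is purely where you spend memory versus processing, which is the point the comment is making.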
@altgeeky1 - your second point is already widely implemented. it's called culling. And the thing he describes in the video with different models depending on how close you are to them, it's called "level of detail". both these things have been around for a good long while.
@TheCubasy "The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen; it doesn't touch any unneeded points..." It searches for points to display, not pictures to display. Perhaps I am misunderstanding you, but it seems like you think this graphics engine uses a lot of still pictures. I just read the description, and it uses point data, not pictures. You are correct about the massive space, though.
I thought the video was going to end without him explaining how it works. This is a very interesting idea. It seems with the right tweaking, it might actually be faster to draw a frame using this method than normal model drawing and clipping.
@Eladril Not exactly. Currently graphics cards upgrade to improve the number of polygons they can render, but they also upgrade for a number of other reasons such as how much memory they have on them, and how quickly they can handle shaders, which would still be very useful with this new processing method. They also would have to upgrade for higher resolutions, since the new method's speed is based on how many pixels need to be displayed.... so that would still keep going up...
@Jasper2428 The effects you describe can be achieved with deferred rendering, displaced polygons going into z-buffers. These things are convergent. The benefit of a polygon is that you can animate it. Animation is the best way to add life to an interactive scene; animated characters have all sorts of deformation going on. Ideal hardware would be able to handle a range of techniques. Voxels are the be-all and end-all of LOD, I know that, but tessellate a dynamic surface and you have micropolygons.
The geometry race he's referring to is one of hardware's ability to render geometry in real time. If this technology actually bears fruit, that will no longer be an issue, because the ability to render only the points that need to be displayed will make polygons obsolete. Of course you're right that the level of detail is still going to be variable, but it'll be based on the time and effort the artists decide to commit to it, rather than the limitations of hardware.
This makes sense and seems perfectly possible with today's computers' large amounts of RAM, but animating these points would be a pipe dream for all but the most ingenious teams of programmers.
@Quipster99 This looks too good to be true. How big would an average game be in terms of gigabytes (bits? I never know which)? If almost every prop is made in ZBrush, at around a million polys each, I'm sure the files for each model would be absolutely huge compared to the average model of today. Wouldn't you end up with game installation folders taking up entire hard drives?
@999newaccount it is 100% possible for something like this to be displayed, but it would take up lots of space on your hard drive. it would also need a lot of processing power for your computer to search through the "document" of points and their colors, because it would need to calculate what the scene looks like according to camera angles, light sources, shading, reflections, etc.
Wow! What a breakthrough. One of those "why didn't I think of that?" moments. Hopefully the next xbox or PlayStation will be able to use this. The narrator is a very good speaker, too.
This is... genius. If my understanding is sound, this technique essentially produces real-time, updated 2D images on your screen that appear as though you are looking into a 3D simulation, rather than actually producing 3D models.
One of the advantages of polygons is that they are relatively easy to animate. I've never worked with big point-based objects, but I imagine animating a big 'unlimited' point-cloud object would be significantly harder, and would require a lot more power.
Glad to see another image-order rendering technique in addition to ray-tracing. Hope your algorithm is nice and parallelizable, so that it will work on existing GPGPU/SPU hardware. A lot easier to introduce a new technology when it works with the existing infrastructure.
The one thing all of these have in common is that the number of different objects is very small. Unlimited Detail eats up memory, so if you can reference the same 3 objects, it works fine. If you want to make a complex world, you're in trouble.
I like the concept of using a search algorithm, and I make no claims to my own mathematical abilities. However, this video relates few benefits over the systems that work with current technologies. I've seen gorgeous ray-tracing demos that gave me more of an idea as to how that process relates to lighting and whatnot. This demo seems to show how to display detail that things like tessellation and normal/bump maps are currently in charge of, yet the focus is on those "polygon fatcats."
I agree with setting up a hybrid system of polys and point clouds. point clouds for terrain/structures and polys for models/animation or something. That sounds like the best bet to get the big guys listening.
@TheCubasy In the end you still only have the resolution of your screen. They optimised it so much that you only need a small number of atoms per pixel. So you never have to work out an average colour/material per section.
@Prometheus722 There is a backdrop skybox, so only the base uses point cloud data. Motion blur and similar effects are render-to-texture effects, taking the image on screen and duplicating it as free 2D pixels for manipulation. This may be the only time polygons come into use, perhaps even for particles. After that, you can make a poly model of 1M polys and simply convert it. Even easier is to scan natural objects for photorealism. Meaning, not much more effort is needed aside from "filtering" to a point cloud.
It's been a long time since I paid any attention to developments in physics engines, but the last time I checked, most physics engines used crude shapes ('hitboxes' and so forth) to define collision boundaries. That wouldn't change if the model's graphics were rendered differently, since the graphics and physics are separate. As for lighting, the textures and light-maps shouldn't be altered by this process either, only the model mesh.
@NinjaSeg Actually, this is not the same as voxel technology. Voxels are volume elements, represented in three-dimensional space, which is why, in their most basic form, they are usually represented as blocks.
A question: wouldn't all the vertex information be incredibly heavy? I'm not talking about the rendering, but sheer memory. I'd imagine a simple world in this would be very heavy, or very complex, since the vertices could be generated, but each kind of object and surface needs a separate algorithm for calculating vertex positions in order to simulate the surface's unlimited detail. It seems like a LOT of work to make a world run properly without taking astronomical amounts of memory.
As a Texture Artist, I think this is awesome. The one thing that raises my eyebrow, however, is the choice of the sunset colour scheme in the demo when comparing UD to poly-based games. It's not a knock against you, but honestly speaking, my eyes are telling me "boy, this looks cartoony, just like WoW..." I'd love to hear your thoughts.
Animating these is the rub... once you decide to animate every leaf with a little wind, you have to recreate a significant portion, if not all, of the "point cloud data" so that it can be searched with their algorithm. They have proven that finding and rendering from an unchanging data set can be made fast... but who wants to play games in an unmoving, lifeless world? Unlimited Detail, I want to see a high-res video with as much animation as Crysis before I will believe in the unlimited you claim.
Very impressive! Kind of reminds me of the Infinity universe project. I know your project is still in an early phase, but could you answer some questions: Are animations possible (perhaps procedural, like Maya's fluid FX)? Is there something equivalent to shaders (for reflections, lighting, etc.)? Can it be mixed with traditional rendering for skeletal animation and so on? Anyway, good luck with your project! Hopefully you find some artists to help you out with your tech demo ;-)
@msqrt I saw this demoed in its early stages at my University. Went to the same university as the guy who developed the initial concept. Very few people really get it, and as much as I understand I never fully grasped it. The initial development concept was to allow for high quality games on mobile systems. They had it running on a Nokia N-Gage and it was amazing. As the technology develops further you can bet this will be the future of graphics.
@Saob1337 Voxels are 3D pixels, and the blocks you see in Minecraft are not voxels. The blocks are stored as voxels, but are rendered in the game as polygons. That's why they can be textured. It's impossible to have unlimited resources, yes, but that's not the point. The idea is that whatever new hardware comes out, and no matter how powerful it is, there is still a way to add more and more detail. About your third point, they're both possible.
Puhh! I always wondered when this tech was going to happen; you really only need to display enough info to fill the screen pixels, which is barely 2 million pixels at 1600x1200. Glad this is coming to fruition; this will advance all sorts of graphics areas into the next gen!
@robocup30 i think he was trying to say that this technology runs on lesser systems than would be necessary to achieve an equal visual result with polygons.
It is. It's called frustum culling. The difference with this is that they take each pixel into consideration: since you are only able to see one colour in that pixel, it searches a 3D model of effectively infinite resolution for one colour to represent it at that time. That's the main difference.
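The "one point per pixel" idea can be sketched as a toy renderer. The projection here is a fake orthographic one purely for illustration, and nothing below is Euclideon's actual algorithm, just the principle:

```python
# Toy version of "one point per pixel": for each screen pixel, keep only
# the nearest point that lands on it and discard everything else.
def render(points, width, height):
    best = {}  # (px, py) -> (depth, colour)
    for x, y, z, colour in points:
        px, py = int(x), int(y)  # fake orthographic projection to a pixel
        if 0 <= px < width and 0 <= py < height:
            # depth test: a nearer point replaces a farther one on the pixel
            if (px, py) not in best or z < best[(px, py)][0]:
                best[(px, py)] = (z, colour)
    return best

scene = [
    (2.4, 1.1, 5.0, "red"),   # behind...
    (2.6, 1.3, 2.0, "blue"),  # ...a nearer point on the same pixel (2, 1)
    (0.0, 0.0, 9.0, "green"),
]
frame = render(scene, width=4, height=4)
```

However many points land on a pixel, only one survives, so the output cost scales with screen resolution rather than scene complexity.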
@TheCubasy That number can be reduced by just referencing point cloud points from a template. Case in point: when they had those big pyramids of beasts that were around a billion points of data, they were probably referencing one source model and repeating it over and over. Sure, it's going to take a good deal of memory to store all those point cloud data points, but with reused and repeated graphics like, say, grains of sand, it can be compressed to be a lot smaller than you'd think.
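The instancing arithmetic behind that comment is easy to make concrete. The model size and instance count below are made-up illustration numbers, not figures from the demo:

```python
# Instancing sketch: one shared "template" point set plus many cheap
# placements, instead of duplicating the points for every copy.
template = [(x * 0.1, 0.0, 0.0) for x in range(1000)]       # 1000-point model
offsets = [(0.0, 0.0, float(i) * 2.0) for i in range(500)]  # 500 instances

stored = len(template) + len(offsets)    # what actually sits in memory
expanded = len(template) * len(offsets)  # what naive copies would cost
```

1,500 stored items stand in for half a million rendered points, which is how a scene can show "a billion points" without storing anywhere near that many.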
To those arguing "it ain't voxels, it's point clouds": sure, but that doesn't change the fact that Notch's calculations on memory requirements are accurate. He is of course assuming that they aren't using sparse octrees to store their data, which they likely are. This drastically reduces the memory footprint, since the only points or voxels (depending on the engine) being stored are the ones on the surface. This means that you can't cut a tree down and find ringed wood on the inside.
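A minimal sparse-octree sketch shows why the footprint shrinks: subdivide only where points exist, so empty space costs nothing. This is an illustration of the general technique, not Euclideon's actual data format:

```python
# Build an octree that only creates child nodes where points actually are.
def build(points, origin, size, depth):
    if not points:
        return None  # empty region: store nothing at all
    if depth == 0:
        return {"leaf": True, "count": len(points)}
    half = size / 2
    children = []
    for i in range(8):  # the eight octants of this cube
        ox = origin[0] + (half if i & 1 else 0)
        oy = origin[1] + (half if i & 2 else 0)
        oz = origin[2] + (half if i & 4 else 0)
        sub = [p for p in points
               if ox <= p[0] < ox + half and oy <= p[1] < oy + half
               and oz <= p[2] < oz + half]
        children.append(build(sub, (ox, oy, oz), half, depth - 1))
    return {"leaf": False, "children": children}

def count_nodes(node):
    if node is None:
        return 0
    if node["leaf"]:
        return 1
    return 1 + sum(count_nodes(c) for c in node["children"])

# One surface point in an 8-unit cube, 3 levels deep: a 4-node chain,
# versus 585 nodes for a fully dense tree of the same depth.
tree = build([(0.5, 0.5, 0.5)], (0, 0, 0), 8.0, 3)
dense = sum(8 ** d for d in range(4))
```

Storage tracks the occupied surface, not the volume of the world, which is exactly the trade-off the comment describes (and why the tree has no ringed wood inside).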
@M3G4G0TH Tesselation basically gives software the power to signal to the 3D silicon to make more geometry without the software having to be specific about the points. It basically transfers more work from the CPU to dedicated silicon.
Your "unlimited detail" requires unlimited memory and processing power. If you had unlimited memory and processing power, you might as well get unlimited detail with polygons.
@friendofyou and the novel thing is actually how the point cloud data is being searched for points to show. Point cloud data thus far has been very computing intensive.
Does it detect bounds like polygons do? Can you have animated props and a controllable character with that? You can never change the technology used in games unless you make a demo game with your own thing.
This is really cool stuff. It must be said that the "colour race" is not actually over, if only because we still get just 256 levels of any one colour from darkest to lightest in a 24-bit system. Thus we still quite often see that terrible banding effect on any monochromatic area of an image. Anyway, awesome video. I look forward to the days of pure software rendering. The video card race could have been avoided entirely if all that research had been focused on designing faster CPUs, memory, etc.
Jeez, mate, when I looked at your profile I didn't expect to see: Country: New Zealand. You just made me feel a little ashamed. A good presentation, especially in a public forum, gets information across clearly and concisely. Technical terms for you to stroke your ego with are usually kept for journals and other documentation.
@HARDCOREnl1337 The claim is that the system essentially creates a viewpoint say 1280x720 where it only needs to render 1280x720 pixels. The engine is essentially a search engine for voxel data. It finds the relevant voxels and shows only those visible. How effective this works I have no idea but that is the idea.
@TheCubasy Then polygon games are also impossible. You don't store the coordinates of each pixel or each atom in this case. You store information about an area and you use an algorithm to extrapolate how this area is constituted. That's the point of 3D in real time, you process, you don't store static values.
I don't think it would be. Like I said, the point-cloud is really just a mesh by another name. The difference is that each point is logically connected to many other points; the edges and faces aren't set in stone, and vertexes can be culled at the program's whim, causing the remainder to connect differently. However, there will always be some association between the points (this would be necessary for the binary search anyway) and therefore always a (morphable) mesh. But materials work too.
The tiny little thing they forget to mention - hard drive usage. You could stretch out your RAM with very clever data structures, but this sort of scene can take up 5 or more gigabytes of HDD space.
But I kind of agree with crudebuster that the tone is distracting (referring to being denied) when the focus should be kept on what's being accomplished.
@HotnessTim Actually, it didn't require a supercomputer to render this scene. It's also "ugly" only because of the art involved. You can scan real objects in, for instance, and it's photorealistic. UD is basically as described, so it requires only the simple technology of today (it's said to be running on a laptop). I've seen a lot of point cloud graphics, and honestly it looks the equivalent of CGI in films. Also, I read an interview about the storage space, and it's said to not be much different.
Seeing as this came out 3 years ago, I guess it's a lot harder to implement than they originally thought. The proof is in the pudding. Let us taste it, Quipster99!
@Pandilex What do you mean they didn't look anything like unlimited detail? I don't think you quite understand what's going on in those clips. They were rendering a ridiculous number of ridiculously detailed models in real time. That was the 'show me' part. This is exactly why he went to lengths to explain it plainly.
@xilefian Yes, this does seem to be the biggest problem. I suppose this is where hardware would come into play. First, I don't think these points use textures. I'd imagine you'd create a bitmap texture, and their sdk would convert each pixel into a color value for each voxel/point. There wouldn't actually be any texture files, each color definition would be stored for each point. Thing you gotta take into account is polygons have had so much time and money invested in them, this hasn't.
@Quipster99 Interesting idea, but have you looked at DX11 tessellation? It defeats your best argument, as it allows super smooth geometry AND requires peanuts for RAM. Also, it can be done using the existing geometry model. How do you feel your tech stands up to that?
This allows for fast rendering of static point cloud data, but there will be a lot of issues with animation. Calculating a dynamic state for hundreds of thousands of animated points is where the power of GPUs will come in. And not to mention these scenes don't have dynamic scene lighting or proper shadows.
I dunno if you'll actually be able to answer these, but I've got some questions. If the computer only loads what you see (assuming this is used in a game), what happens with things that happen off-screen that are relevant to the game? When you don't see something, does it cease to exist until you see it again? Like, if you blow up a wall and you turn around to run away and then it explodes, will pieces of the wall fly away from the wall into your view? Will the explosion not happen until you look?
@MrWolfengard It searches through the database of the entire scene to fetch the number of points required to fill the pixels on your screen. You would need a lot of RAM to be able to store the entire scene, and a decent processor, but you wouldn't need a powerful graphics card. Overall, this would bring the cost of a gaming computer down by about 20%.
@uniraptor It doesn't need to access and display all of the points at once. You are correct there. But, it does need to have access to many, if not all of the points at any given time. He said something about searching through all of the points to see which ones they would need to put on the pixels. So he would probably need rapid access to that database to execute this search 60 times a second. A hard drive would not be able to do this, so RAM is the next logical step, no?
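The per-frame budget this comment worries about is easy to put a number on: one search result per pixel, every frame. The resolution and refresh rate below are just the common figures from this era of the thread, not a stated spec:

```python
# One point lookup per pixel, per frame, at 720p / 60 fps
width, height, fps = 1280, 720, 60
pixels_per_frame = width * height          # lookups needed for one frame
lookups_per_second = pixels_per_frame * fps
```

That's roughly 55 million point searches per second, which is why the dataset has to live somewhere with fast random access (RAM) rather than on a spinning hard drive.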
"Unlimited" is of course not a 100% accurate description, but hey, this is awesome! Very cleverly thought out! However, will you guys be releasing an engine anytime soon? Your site doesn't give much particular info on that... Anyway, good project, well done. A great example of out-of-the-box thinking!
Voxels are better than point-cloud data, not worse, as arranging them is more efficient, since the main difference is that you arrange them in a fixed grid. Sure it means that you have limited resolutions, but I've recently found a way around this. It also lets you do realistic physics efficiently. If you tried things like creating destructible environments with point-cloud, it would be impractical. I'm working on a voxel-based system that may end up being better than this.
@steamisM50 They haven't stated that it's not possible to implement this on a GPU. It's safe to assume they chose the CPU for their initial implementation because of the ease of development. With the way the GPU and CPU are going, they'll eventually end up merging anyway. CPU development is moving towards becoming much more massively parallel, and GPU development is moving towards becoming more general-purpose.
@Quipster99 I don't see how this would be incompatible with perfectly conventional animation, actually. Assuming that all of the points in the mesh (which is what this 'point fog' actually is) are oriented around armatures and what have you, it should be possible to make these things move. It's probably just not a supported feature yet.
This makes total sense to me, seeing as, within 2 to 6 years from now (maybe even less), mass-market computers with 16 cores will be available. Assuming the cost goes down over 10 years, we will be playing games like this 8 to 12 years from now. Hurray for the slow death of polygons; it's getting pixel cancer and will die in a few years. Pay your respects now, folks...
@shultays Agreed. The argument that polygons look blurry up close is stupid. Unless you have an infinite number of points, at a certain distance you'd be able to physically see through the UD object.
I like that Nvidia and ATI/AMD don't like each other, or rather competing with each other, because that creates innovation and pushes technology forward faster :)
@TheCubasy Very well. Suppose they get the real-time rendering working as claimed. I want to know how object collisions will work. If the rendering doesn't kill today's common processors, as I have inferred from their videos, particle physics will unless they have some fancy way to make acceptable, non-elastic collisions.
Very interesting way of creating computer graphics, but TBH, I don't think it will be used very much. Just like you said how colour started at 2 colours and ended at 32-bit colour because our eyes can't see much difference past that point, I think the same applies here. The polygon count will continue to rise steadily, and eventually computers will be able to run enough polygons at once that our eyes won't be able to see the difference between this and polygons.
@gabrielex Modelling wouldn't change. Obviously people wouldn't make every individual point, it just means they don't have to worry about polygon count. The lighting however is a much more complicated problem. I can see lightmaps working fine with this system, but any kind of dynamic lighting is going to be extremely difficult to achieve.
he's not saying that they can't go further, but it's like the difference of something running at 200 fps vs something running at 10000 fps. There's a difference, you just won't be able to notice it.
I think I got what you were trying to convey here, and from your video it obviously works wonders; if I tried to load up some of those scenes just to view in something like UDK with polygons, the developer would shit itself. IDK if you came up with this yourself or you had help, but it looks amazing. I wish you the best, man, and hopefully I can have a development kit to play around with soon =D
This has certain applications, mostly for terrain, but it's hard to animate, hard to texture, even harder to light, hard to do hit detection, and hard to do physics-related stuff with.
@Oatinator Yeah, but you wouldn't have to render the polygons, just use them as collision meshes. True, it would be a lot more work, but it would be worth it...
well i think that's the point of this demonstration. it looks like they have apparently come up with some design that allows the computer to load small chunks of the model data into ram, pick out info for 1 pixel, and toss the rest very quickly. yeah, it still sounds like that wouldn't be possible with commercial hardware, but maybe that's exactly what they have accomplished. maybe they have some of their own hardware.
Still nothing
no? This turned out to be legit. It didn't end up being used for video games, but the tech is very real.
@@vegasvanga5442 Nanite in unreal 5
Even if this doesn't work, even if it runs into huge hurdles related to lighting (for example), it is still great to see someone sit down and codify/imagine a new way of solving the 3d graphics problem. Lots of nerds have said, "I only have X pixels... why do you have to work so hard to show my X pixels?" These nerds asked the question and set about solving the problem. Well done.
It's cool going back to old videos making outrageous claims after a decade, almost like a case study to better notice unsubstantiated hype in the present. Smooth uninterested talk explaining how everyone else is wrong and this one simple idea will blow everything else out of the water.
yeah, listening today, I was surprised how obviously hucksterish he came off at times with that 'hard sell' talk. All the artifice of being on a time limit etc... Granted I'm 14 years older now, but back when I was genuinely excited and believed this would be a revolution in graphics.
@@schumachersbatman5094yup hahah
I don't know, i have a feeling they might've been onto something
@schumachersbatman5094 have you heard of nanite in unreal 5??
"There was always a problem when you use point cloud data (we say that because voxels have such a bad name, poor little things): if you get close to an object, the points either separate or they turn into 2D rectangles to stay joined together. We have had great success this week with a new system that combines the points based on the points around them and smooths them off, thus keeping a nice, real-looking image rather than blocky pixels."
Is Bruce Dell still a CEO? I heard rumors that he is homeless, living in limited detail.
@GrandSirThebus
This is called "culling", and it is already in use with polygons. Essentially, anything that is not visible to the camera is not drawn. A tree behind a wall will not be rendered in the engine unless the player can see over, or through, the wall. The computer checks to see if a specific polygon falls within the viewing range, and then decides whether or not to draw it. It's the same principle applied to a point cloud in this case.
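The view-range part of that check (frustum culling, as opposed to the occlusion culling that hides the tree behind the wall) can be sketched in a few lines of Python. This is a flattened 2D version with only a horizontal field of view; real engines test bounding volumes against six frustum planes in 3D, and all names here are mine:

```python
import math

def in_view(camera_pos, camera_dir_deg, fov_deg, point):
    """True if `point` falls inside the camera's horizontal field of view."""
    dx = point[0] - camera_pos[0]
    dy = point[1] - camera_pos[1]
    angle_to_point = math.degrees(math.atan2(dy, dx))
    # Signed angular difference to the camera direction, wrapped into [-180, 180)
    delta = (angle_to_point - camera_dir_deg + 180) % 360 - 180
    return abs(delta) <= fov_deg / 2

# Camera at the origin looking along +x with a 90-degree field of view:
print(in_view((0, 0), 0, 90, (10, 2)))    # nearly straight ahead -> drawn
print(in_view((0, 0), 0, 90, (-10, 0)))   # directly behind the camera -> culled
```

Anything failing this test never reaches the renderer at all, which is the whole point: work is proportional to what the camera can see, not to what exists in the scene.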
I love everything about this video. It just seems like everything about it was a chore. This dude sounds so disinterested like this was a project his boss gave him to do and he had been putting it off for three months.
What impresses me most is that Bruce has game fans but, on paper, zero interest in gameplay, and he has never made anything playable. John Gatt recently had an interview about the 2015 games: "Bruce Dell: Yes, the first is an adventure game with a sword, solid-scan forests, and a lot of alien-type monsters. The second is a cute, clay-scanned adventure where you ride giraffes. Can't say more than that I'm afraid."
I think it was a chore, because as much as I admire Bruce Dell, he has little patience for trolls and naysayers. I get the feeling that Dell is tired of explaining his company's technology to people who don't understand it, yet claim it is a fraud.
Dell often comes across as elitist and superior in his vocal intonations, but I believe it comes from his irritation having to explain what Unlimited Detail/Solid Scan is on numerous occasions. I get the feeling that Dell is also fed up with people not giving his company credit for ending the geometry race. He's been told, indirectly through the media, that his technology is either outdated or impossible. Notch of Minecraft says it's outdated and John Carmack of id Software says it's impossible on current hardware, which is technically true.
An open-world game like Fallout or The Elder Scrolls would need terabytes of drive space to be stored as 3D points. Or a game company could have a single copy of the game on a cloud server, and gamers would stream the game to their computers or consoles, but you'd have to have a fast internet connection to get a frame rate comparable to 1080p or 4K at 60 fps. Euclideon's Geoverse software automatically scales to your internet speed in terms of geometric detail, but the model, even if it's low-res, always loads instantly. They've abolished loading, but gamers want hi-res, photorealistic graphics on every frame, every moment that they're playing. Since I won't have a petabyte hard drive for some time, nor a petabyte optical disc drive for several years (though they are coming), I also don't want my open-world game to play at low-res then gradually build up to 1080p or 4K as the data streams from Bethesda's servers.
To my understanding Euclideon is working on collision detection and improving the animation system, which started as skeletal. Euclideon, unless they've hired more people, is a team of nine. That's it, nine passionate people working to revolutionize the gaming industry. If they were 100 people the gaming industry would have already converted to 3D point cloud games, and graphics cards would optimize voxels instead of polygons. Dell says in another video that their technology would pair well with Atomontage and foresees a future for the two IPs.
Lastly, I am most excited when I hear Dell say in another video that a leader in the games industry said something to him like, "we had to build that tree four times!" With Euclideon's technology, you build your tree one time and the software scales it automatically, with no model swapping. Games will be made in months rather than years, since artists will build their objects ONE TIME. Looking forward to the Elder Scrolls VII in about ten years using Euclideon's tech.
Tom Hedlund At the same time, Holoverse looks outdated, there's no interest from the industry/media, nobody wants to work at the small company, and Euclideon has a bad reputation. Congratulations!
Congratulations on what? I don't understand. It reads like sarcasm, so I assume it is, but I fail to see your point in relation to mine. Euclideon is nine people in Brisbane, Australia. It's going to take them years to accomplish what Crytek can do in months. Bruce Dell says that they are programmers, not artists, but knows that artists will use Unlimited Detail/Solid Scan to create amazing environments in a fraction of the time. Holoverse does look outdated, Dell even says that it looks like early World of Warcraft, but again, they are not artists, they're programmers. Everyone is so busy performing fellatio on John Carmack that they forget to give Bruce Dell and Euclideon some affection. Holoverse is trillions of points flying by with no model swapping. Does it look like a cartoon rather than Call of Duty Modern Warfare? Yes, it does. But in the early 2020s Euclideon will have advanced beyond skeletal animation and will have proper collision detection. We'll need petabyte hard drives or optical readers or fast internet connections, but it's all on the horizon. Hey, gaming industry, give John Carmack's penis a rest.
Tom Hedlund You can write a book in text, but you still don't understand why the world doesn't care about Euclideon and carries on as if they didn't exist. No computer conferences or game conferences through all these years have said that Euclideon is doing something important or successful. Why does a bragging company hide in Australia and not hire people when they got millions in grants? So for Euclideon to be successful we just need better hardware and bigger drives tomorrow; yeah, they are so revolutionary.
@TheCubasy I think the idea is that storage space is rapidly becoming so cheap and easy that storage space is not so much of an issue. As explained in the video, the engine uses a search algorithm to find and display only one point for each pixel on your screen. My understanding is that the search algorithm can sift through a near infinite amount of data quickly. It is not unlimited from the technical definition, but in practicality, if we are no longer limited by polygon count, it is.
This is pathetic. First off, your "unlimited detail" demonstrations have some of the worst frame rates I've ever seen. Second, the main reason why games still use polygons is because they are very efficient, and by that I mean that they take up less hard drive space. If you were to have "billions" of points per level, 10 levels, and several real-time cutscenes, you would be losing maybe more than a hundred gigabytes of data. I'm pretty sure most people don't have unlimited-space hard drives. Aside from that, voxel technology has existed for quite some time now, so don't say it's new. If you were to have a game level with even a million voxels, it would take up too much RAM for most people. It would have to be loaded directly from the hard drive, which would make loading times unbearable. In all these videos there are so many instanced models it's just sad. You obviously are missing a core function to actually rotate the model. I'm no expert, but I'm pretty sure armatures don't work on voxels, so you'd have to use polys for that anyway. Good luck making f**king grass animated without using polygons. I could really only see this working for static terrain in the distance, so LOD still looks nice but doesn't need to be high-res. One thing to know about voxels is that they look really bad up close unless there are a lot of them at a high resolution. Last, stop trying to show people how s**tty polys look when you're showing games from a long time ago. How about you get Halo 5, Infinite Warfare, any good modern game, and try counting all those polygons. Bet you can't, if your counting skills are as good as your persuasion skills. I know this comment will be deleted, but I just hope that the people who deleted it learned something from it. Don't act like nobody knows what you're doing; you just want someone to buy up your company because you'd all get rich.
That's probably why you haven't released your game engine yet: because you're afraid of all your believers knowing the truth.
I guess you underestimated technology advances
There was this Sega Saturn game called "Amok", a 3rd-person action game where you piloted a mech tank through voxel-based terrain. Given the limited technology at the time, the game looked pretty weak. However, "Amok" showed a middle ground between voxels/points and polygons: voxels/points are extremely good at vast, detailed, often stationary objects like landscapes and buildings, and also at calculating volumes for physical effects like destruction, while polygons were used for character animation.
this was rendered with Hilary Clinton's email server...
Damn took 3 years to reach me, but good joke.
comment still good
I really hope this becomes popular. You won't need to buy the most advanced graphics cards ever created, and a new one every third year or so.
Good luck, guys!
To give you an example:
Searching for every prime number 8 digits long and containing at least one 7 might be faster if you have a list of all prime numbers (up to 8 digits at least) but the applicability is limited by the size of that list.
While calculating the numbers in question directly might take more processing power, keep in mind that the algorithm to calculate primes is incredibly small AND that the original list used in the 'search' method had to be calculated too.
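That search-versus-calculate trade-off can be made concrete with a toy Python comparison, scaled down to 4-digit primes so it runs instantly (the function names are mine, purely illustrative):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: precompute the full 'list' of primes up to n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def is_prime(n):
    """Tiny direct test: trial division, no precomputed data needed."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# 'Search' method: filter a big precomputed list (fast, but memory-bound).
from_list = [p for p in primes_up_to(9999) if p >= 1000 and '7' in str(p)]

# 'Calculate' method: test each candidate directly (more CPU, no list).
direct = [n for n in range(1000, 10000) if '7' in str(n) and is_prime(n)]

assert from_list == direct
```

Both routes give identical answers; the difference is purely where the cost lands, which is the same trade the rendering approach makes between stored point data and on-the-fly computation.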
@TheCubasy From what I understand it doesn't use voxels; it renders the points you can see directly into pixels on your screen.
@altgeeky1 - your second point is already widely implemented; it's called culling. And the thing he describes in the video with different models depending on how close you are to them is called "level of detail". Both these things have been around for a good long while.
@TheCubasy "The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen; it doesn't touch any unneeded points..."
It searches for the points to display, not pictures to display. Perhaps I am misunderstanding you, but it seems like you think this graphics engine uses a lot of still pictures. I just read the description and it uses point data, not pictures. You are correct about massive space though.
I thought the video was going to end without him explaining how it works. This is a very interesting idea. It seems with the right tweaking, it might actually be faster to draw a frame using this method than normal model drawing and clipping.
@Eladril Not exactly. Currently graphics cards upgrade to improve the number of polygons they can render, but they also upgrade for a number of other reasons such as how much memory they have on them, and how quickly they can handle shaders, which would still be very useful with this new processing method. They also would have to upgrade for higher resolutions, since the new method's speed is based on how many pixels need to be displayed.... so that would still keep going up...
i just randomly remembered this. whatever happened to it? is this project dead?
@Jasper2428 -
the effects you describe can be achieved with deferred rendering, displaced polygons going into z-buffers.
These things are convergent. The benefit of a polygon is that you can animate it. Animation is the best way to add life to an interactive scene. Animated characters have all sorts of deformation going on.
Ideal hardware would be able to handle a range of techniques. Voxels are the be-all and end-all of LOD, I know that. But tessellate a dynamic surface and you have micropolygons.
The geometry race he's referring to is one of hardware's ability to render geometry in real time. If this technology actually bears fruit, that will no longer be an issue, because the ability to render only the points that need to be displayed will make polygons obsolete.
Of course you're right that the level of detail is still going to be variable, but it'll be based on the time and effort the artists decide to commit to it, rather than the limitations of hardware.
This makes sense and seems perfectly possible with today's computers' large RAM, but animating these points would be a pipe dream for all but the most ingenious teams of programmers.
@Quipster99 This looks too good to be true. How big would an average game be in terms of gigabytes (bits? I never know which)? If almost every prop is made in ZBrush, at around a million polys each, I'm sure the files for each model would be absolutely huge compared to the average model of today. Wouldn't you end up with game installation folders taking up entire hard drives?
@999newaccount it is 100% possible for something like this to be displayed, but it would take up lots of space on your hard drive. it would also need a lot of processing power for your computer to search through the "document" of points and their colors, because it would need to calculate what the scene looks like according to camera angles, light sources, shading, reflections, etc.
Wow! What a breakthrough. One of those "why didn't I think of that?" moments. Hopefully the next xbox or PlayStation will be able to use this.
The narrator is a very good speaker, too.
still waitin bro
This is... genius. If my understanding is sound, this technique essentially produces real-time, updated 2D images on your screen that appear as though you are looking into a 3D simulation, rather than actually producing 3D models.
One of the advantages of polygons is that they are relatively easy to animate. I've never worked with big point-based objects, but I imagine animating a big 'unlimited' point-cloud object would be significantly harder, and would require a lot more power.
Glad to see another image-order rendering technique in addition to ray-tracing. Hope your algorithm is nice and parallelizable, so that it will work on existing GPGPU/SPU hardware. A lot easier to introduce a new technology when it works with the existing infrastructure.
The one thing all of these have in common is that the amount of different objects is very little. Unlimited detail eats up memory, so if you can reference the same 3 objects it works fine. If you want to make a complex world, you're in trouble
He speaks in a British accent. Doesn't matter if he's speaking gibberish, he sounds brilliant.
I like the concept of using a search algorithm, and I make no claims to my own mathematical abilities.
However, this video relates few benefits over the systems that work with current technologies. I've seen gorgeous ray-tracing demos that gave me more of an idea as to how that process relates to lighting and whatnot.
This demo seems to show how to display detail that things like tessellation and normal/bump maps are currently in charge of, yet the focus is on those "polygon fatcats."
I agree with setting up a hybrid system of polys and point clouds. point clouds for terrain/structures and polys for models/animation or something.
That sounds like the best bet to get the big guys listening.
@TheCubasy
In the end you still only have the resolution of your screen. They optimised for that so much that you only need a small number of atoms per pixel. So you never have to work out an average colour/material per section.
I bet their fancy search algorithm "MASS CONNECTED PROCESSING" is just some kind of octree data structure
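If that guess is right, the core structure might look something like this bare-bones sparse octree in Python. This is purely illustrative (dict-based, my own naming, nothing from Euclideon), but it shows the two properties that matter: lookups descend a fixed number of levels, and empty space costs no memory because unoccupied children are never allocated:

```python
class SparseOctree:
    """Cube of side `size` (a power of two); only occupied children are stored."""

    def __init__(self, size):
        self.size = size
        self.root = {}

    def _descend(self, x, y, z, create):
        node, size = self.root, self.size
        while size > 1:
            half = size // 2
            # One of 8 children, chosen by which half the point falls in per axis.
            idx = ((x >= half) << 2) | ((y >= half) << 1) | (z >= half)
            if x >= half: x -= half
            if y >= half: y -= half
            if z >= half: z -= half
            if idx not in node:
                if not create:
                    return None        # empty region: subtree was never built
                node[idx] = {}
            node = node[idx]
            size = half
        return node

    def insert(self, x, y, z, color):
        self._descend(x, y, z, create=True)['color'] = color

    def lookup(self, x, y, z):
        leaf = self._descend(x, y, z, create=False)
        return leaf.get('color') if leaf else None

tree = SparseOctree(256)
tree.insert(10, 20, 30, 'red')
print(tree.lookup(10, 20, 30))
```

Each lookup descends log2(size) levels (8 steps for a 256-cube), which is why a search-based renderer can pick one point out of billions quickly: the cost depends on tree depth, not on the total point count.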
@Prometheus722
There is a backdrop skybox, so only the base uses point cloud data. Motion blur and similar are render-to-texture effects, taking the image on screen and flattening it to 2D pixels for manipulation. This may be the only time polygons come into use, even for particles perhaps.
After that, you can make a poly model of 1M polys and simply convert it. Even easier is to scan natural objects for photorealism. Meaning, not much more effort is needed aside from "filtering" to point cloud.
It's been a long time since I paid any attention to developments in physics engines, but the last time I checked, most physics engines used crude shapes ('hitboxes' and so forth) to define collision boundaries. That wouldn't change if the model's graphics were rendered differently, since the graphics and physics are separate.
As for lighting, the textures and light-maps shouldn't be altered by this process either, only the model mesh.
@NinjaSeg Actually, this is not the same as voxel technology. Voxels are volume elements, represented in three-dimensional space, which is why, in their most basic form, they are usually represented as blocks.
A question: wouldn't all the vertex information be incredibly heavy? I'm not talking about the rendering, but sheer memory. I'd imagine a simple world in this would be very heavy, or very complex, since the vertices could be generated, but each kind of object and surface needs a separate algorithm for calculating vertex positions in order to simulate the surface's unlimited detail. It seems like a LOT of work to make a world run properly without taking astronomical amounts of memory.
This video just blew my mind. I hope to see more of this in the future.
Well, here we are. It's okay, I was equally enthralled by it back then too. Now just look at it.. Hasn't aged that well, lol
As a Texture Artist, I think this is awesome. The one thing that raises my eyebrow, however, is the choice of the sunset color scheme in the demo when comparing UD to poly-based games. It's not a knock against you, but honestly speaking, my eyes are telling me "boy this looks cartoony just like WoW..." I'd love to hear your thoughts.
Animating these is the rub... once you decide to animate every leaf with a little wind, you have to recreate a significant portion, if not all, of the "point cloud data" so that it can be searched with their algorithm. They have proven that finding and rendering from an unchanging data set can be made fast... but who wants to play games in an unmoving, lifeless world? Unlimited Detail, I want to see a high-res video with as much animation as Crysis before I will believe in the "unlimited" you claim.
Very impressive! Kind of reminds me of the Infinity universe project. I know your project is still in an early phase, but could you answer some questions:
Are animations possible? (Perhaps procedural, like for example in Maya's fluid FX?)
Is there something equivalent to shaders (for reflections, lighting, etc.)?
Can it be mixed with traditional rendering for skeletal animation etc.? Anyway, good luck with your project! Hopefully you find some artists to help you out with your tech demo ;-)
Actually, Dell seems pretty confident. I've had some correspondence with him, and it does sound as though it is going well.
@msqrt I saw this demoed in its early stages at my University. Went to the same university as the guy who developed the initial concept. Very few people really get it, and as much as I understand I never fully grasped it. The initial development concept was to allow for high quality games on mobile systems. They had it running on a Nokia N-Gage and it was amazing. As the technology develops further you can bet this will be the future of graphics.
question - how efficiently does your system handle animation / movement of geometry?
@Saob1337 Voxels are 3D pixels, and the blocks you see in Minecraft are not voxels. The blocks are stored as voxels, but are rendered in the game as polygons. That's why they can be textured. It's impossible to have unlimited resources, yes, but that's not the point. The idea is that whatever new hardware comes out, no matter how powerful it is, there is still a way to add more and more detail. About your third point: they're both possible.
Phew! I always wondered when this tech was going to happen; you really only need to display enough info to fill the screen's pixels, which is barely 2 million pixels at 1600x1200. Glad this is coming to fruition; this will advance all sorts of graphics areas into the next gen!
congratulations, you discovered ray tracing and instancing! hot new technology.
@robocup30 i think he was trying to say that this technology runs on lesser systems than would be necessary to achieve an equal visual result with polygons.
it is. it's called frustum culling.
The difference with this is that they take each pixel into consideration. Since you are only able to see one color in that pixel, it searches a 3D model of effectively infinite resolution for one color to represent it at that time. That's the main difference.
@TheCubasy That number can be reduced by just referencing point cloud points from a template. case in point, when they had these big pyramids of beasts that were around a billion points of data, they were probably referencing from one source model and repeating it over and over. Sure, it's going to take a good deal of memory storing all those point cloud data points, but with reused and repeated graphics like, say, grains of sand, it can be compressed to be a lot smaller than you'd think.
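The compression argument above is easy to put in back-of-the-envelope numbers. Every byte count here is my own assumption, chosen only to show the shape of the saving when repeated objects share one template plus a small per-instance transform:

```python
# Naive storage: every repeated object keeps its own full copy of the points.
# Instanced storage: one shared template plus a small transform per instance.
points_per_model = 1_000_000
bytes_per_point = 15            # assumed: 3 x 4-byte floats + 3 colour bytes
instances = 1_000               # e.g. the repeated creatures in the pyramid demo
bytes_per_transform = 64        # assumed: one 4x4 matrix of 4-byte floats

naive_bytes = instances * points_per_model * bytes_per_point
instanced_bytes = (points_per_model * bytes_per_point
                   + instances * bytes_per_transform)

print(f"naive:     {naive_bytes / 1e9:.1f} GB")
print(f"instanced: {instanced_bytes / 1e6:.1f} MB")
```

Under these assumptions, a thousand copies shrink from 15 GB to about 15 MB, a roughly thousandfold saving, which is exactly why the demo scenes lean so heavily on repeated models.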
To those arguing "it ain't voxels, it's point clouds": sure, but that doesn't change the fact that Notch's calculations on memory requirements are accurate. He is of course assuming that they aren't using sparse octrees for storing their data, which they likely are. This drastically reduces the memory footprint, since the only points or voxels (depending on the engine) being stored are the ones on the surface. This means that you can't cut a tree down and find ringed wood on the inside.
@M3G4G0TH Tesselation basically gives software the power to signal to the 3D silicon to make more geometry without the software having to be specific about the points. It basically transfers more work from the CPU to dedicated silicon.
Wow. Just wow. Your grasp of sarcasm is unmatched.
Your "unlimited detail" requires unlimited memory and processing power.
If you had unlimited memory and processing power, you might as well get unlimited detail with polygons.
So in a sentence: you're swapping triangle polygons for circular point cloud data.
@friendofyou And the novel thing is actually how the point cloud data is searched for points to show. Point cloud data has thus far been very computationally intensive.
Does it detect bounds like polygons do? Can you have animated props and a controllable character with that? You can never change the technology used in games unless you make a demo game with your own thing.
this is really cool stuff.
It must be said that the "colour race" is not actually over, if only because we still only get 256 grades of any one color from darkest to lightest in a 24-bit system. Thus we still quite often see that terrible banding effect in any monochromatic area of an image.
Anyway, awesome video. I look forward to the days of pure software rendering. The video card race could have been avoided entirely if all that research had been focused on designing faster CPUs/memory/etc.
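The banding complaint above is easy to reproduce numerically: a smooth gradient wider than 256 pixels cannot change on every pixel in an 8-bit channel, so grey values repeat in flat bands. A quick sketch, nothing engine-specific:

```python
width = 1024
# An ideal smooth ramp from black to white, quantised to an 8-bit channel:
ramp = [round(i / (width - 1) * 255) for i in range(width)]

distinct_levels = len(set(ramp))           # only 256 representable greys
pixels_per_band = width / distinct_levels  # each flat band is ~4 pixels wide
print(distinct_levels, pixels_per_band)
```

At 1024 pixels wide, each of the 256 grey levels has to cover about 4 adjacent pixels, and those flat runs are the visible "steps" in a monochromatic area.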
@Zwank36 yeah that "animation" seems to involve moving solid pieces of the creature around... without any rotation.
Jeez mate, when I looked at your profile I didn't expect to see: Country: New Zealand. You just made me feel a little ashamed. A good presentation, especially in a public forum, gets information across clearly and concisely. Technical terms for you to stroke your ego with are usually kept for journals and other documentation.
@HARDCOREnl1337 The claim is that the system essentially creates a viewpoint, say 1280x720, where it only needs to render 1280x720 pixels. The engine is essentially a search engine for voxel data: it finds the relevant voxels and shows only those visible. How effectively this works I have no idea, but that is the idea.
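A toy 2D analogue of that one-sample-per-pixel idea in Python (my own construction, not Euclideon's algorithm): the "screen" is eight columns, and for each column only the nearest point that projects into it survives:

```python
import math

# Hypothetical scene: (position, colour) pairs in 2D world space.
scene = [((3.0, -1.0), 'red'), ((5.0, 0.5), 'blue'), ((9.0, -1.2), 'green')]
SCREEN_WIDTH = 8
FOV = math.radians(60)

def render(camera_x=0.0, camera_y=0.0):
    """Fill each screen column with the colour of the nearest visible point."""
    framebuffer = [None] * SCREEN_WIDTH
    depth = [math.inf] * SCREEN_WIDTH
    for (px, py), colour in scene:
        angle = math.atan2(py - camera_y, px - camera_x)
        if abs(angle) > FOV / 2:
            continue                      # outside the view: never touched
        column = int((angle / FOV + 0.5) * (SCREEN_WIDTH - 1))
        dist = math.hypot(px - camera_x, py - camera_y)
        if dist < depth[column]:          # keep only the closest point per pixel
            depth[column], framebuffer[column] = dist, colour
    return framebuffer

print(render())
```

This sketch loops over every point for clarity; the claimed engine inverts that and searches a spatial data structure per pixel instead, so cost scales with screen resolution rather than scene size.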
@TheCubasy
Then polygon games are also impossible.
You don't store the coordinates of each pixel, or each atom in this case. You store information about an area and you use an algorithm to extrapolate how that area is constituted.
That's the point of 3D in real time, you process, you don't store static values.
Sounds good! I hope this doesn't end up like voxel graphics, i.e. not being used or developed. Well, time will show.
I don't think it would be. Like I said, the point-cloud is really just a mesh by another name. The difference is that each point is logically connected to many other points; the edges and faces aren't set in stone, and vertexes can be culled at the program's whim, causing the remainder to connect differently. However, there will always be some association between the points (this would be necessary for the binary search anyway) and therefore always a (morphable) mesh.
But materials work too.
The tiny little thing they forgot to mention: hard drive usage. You could stretch out your RAM with very clever data structures, but this sort of scene can take up 5 or more gigabytes of HDD space.
@TheCubasy
+ You can't do "unlimited", because that is not a number. There is always bigger or more detailed stuff.
Wow, I learned a lot from you about computer graphics, and the color/geometry race. Thank youuu!
This WILL be where video games go someday! I love the skeptics' argument that if it hasn't been done before it can't be done! Never give up, Bruce!
Seen all the videos on this, but it's still mindblowing. How powerful a computer do you need for this? Only a processor and a shitload of memory?
But I kind of agree with crudebuster that the tone is distracting (referring to being denied), where the focus should be kept just on what's being accomplished.
This is cool, but how accurate can the lighting be, and will more accurate lighting run at similar speeds to traditional polygonal games?
@HotnessTim
Actually, it didn't require a supercomputer to render this scene. It's also "ugly" only because of the art involved. You can scan real objects in, for instance, and it's photorealistic.
UD is basically as described, so it requires only the simple technology of today (it's said to be running on a laptop). I've seen a lot on point cloud graphics, and honestly the results look equivalent to CGI in films.
Also, I read an interview about the storage space, and it's said not to be much different.
Seeing as this came out 3 years ago, I guess it's a lot harder to implement than they originally thought. The proof is in the pudding. Let us taste it, Quipster99!
@Pandilex
What do you mean they didn't look anything like unlimited detail? I don't think you quite understand what's going on in those clips. They were rendering a ridiculous number of ridiculously detailed models in real time. That was the 'show me' part. This is exactly why he went to such lengths to explain it plainly.
@xilefian
Yes, this does seem to be the biggest problem. I suppose this is where hardware would come into play.
First, I don't think these points use textures. I'd imagine you'd create a bitmap texture, and their SDK would convert each pixel into a color value for each voxel/point. There wouldn't actually be any texture files; each color definition would be stored per point.
The thing you've got to take into account is that polygons have had so much time and money invested in them; this hasn't.
@Quipster99 Interesting idea, but have you looked at DX11 tessellation? It defeats your best argument, as it allows super smooth geometry AND requires peanuts for RAM. Also, it can be done using the existing geometry model. How do you feel your tech stands up to that?
This allows for fast rendering of static point cloud data. But there will be a lot of issues with animation. Calculating a dynamic state of hundreds of thousands of animated points is where the power of GPUs will come in. And not to mention these scenes don't have dynamic scene lighting or proper shadows.
I dunno if you'll actually be able to answer these, but I've got some questions.
If the computer only loads what you see (assuming this is used in a game), what happens with things that happen off-screen that are relevant to the game? When you don't see something, does it cease to exist until you see it again? Like, if you blow up a wall and you turn around to run away and then it explodes, will pieces of the wall fly away from the wall into your view? Will the explosion not happen until you look?
@MrWolfengard
It searches through the database of the entire scene to fetch the number of points required to fill the pixels on your screen. You would need a lot of RAM to be able to store the entire scene, and a decent processor, but you wouldn't need a powerful graphics card. Overall, this would bring the cost of a gaming computer down by about 20%.
Okay, when you gave me the dumbed-down explanation I literally went "woah". This has enormous potential.
@uniraptor
It doesn't need to access and display all of the points at once. You are correct there. But, it does need to have access to many, if not all of the points at any given time. He said something about searching through all of the points to see which ones they would need to put on the pixels. So he would probably need rapid access to that database to execute this search 60 times a second. A hard drive would not be able to do this, so RAM is the next logical step, no?
Unlimited is of course not a 100% accurate description, but hey, this is awesome! Very cleverly thought out! However, will you guys be releasing an engine anytime soon? Your site doesn't give much particular info on that....
Anyway, good project, well done. A great example of out-of-the-box thinking!
Voxels are better than point cloud data, not worse, as arranging them is more efficient, since the main difference is that you arrange them in a fixed grid. Sure, it means that you have limited resolution, but I've recently found a way around this. It also lets you do realistic physics efficiently. If you tried things like creating destructible environments with point clouds, it would be impractical.
I'm working on a voxel-based system that may end up being better than this.
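The efficiency the commenter is pointing at comes from the fixed grid: a voxel's coordinates map straight to an array slot, so lookup is O(1), while an unsorted point cloud has to be searched. A minimal sketch, with made-up grid dimensions:

```python
# Why a fixed voxel grid is cheap to index: coordinates flatten directly
# into an array offset, so there is no searching at all.
W = H = D = 8  # 8x8x8 example grid

voxels = [0] * (W * H * D)  # 0 = empty, nonzero = material id

def index(x, y, z):
    # flatten 3D grid coordinates into a single array offset
    return x + y * W + z * W * H

voxels[index(3, 5, 2)] = 7  # place a voxel

print(voxels[index(3, 5, 2)])  # -> 7, direct O(1) lookup
```

The trade-off the commenter mentions is baked into this layout: resolution is fixed by the grid spacing, and memory grows with the volume rather than with the amount of actual surface detail.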
@steamisM50 They haven't stated it's not possible to implement this on a GPU.
It's safe to assume they chose the CPU for their initial implementation because of the ease of development.
With the way GPUs and CPUs are going, they'll eventually end up merging anyway.
CPU development is moving towards becoming much more massively parallel and GPU development is moving towards becoming more general purpose.
@Quipster99
I don't see how this would be incompatible with perfectly conventional animation, actually. Assuming that all of the points in the mesh (which is what this 'point fog' actually is) are oriented around armatures and what have you, it should be possible to make these things move. It's probably just not a supported feature yet.
This makes total sense to me, seeing as mass-market computers with 16 cores will be available within 2 to 6 years from now (maybe even less). Assuming the cost goes down over 10 years, we will be playing games like this 8 to 12 years from now. Hurray for the slow death of polygons; it's getting pixel cancer and will die in a few years.
Pay your respects now folks...
@shultays Agreed. The argument that polygons look blurry up close is stupid. Unless you have an infinite number of points, at a certain distance you'd be able to physically see through the UG object.
I like that Nvidia and ATI/AMD don't like each other, or rather are competing with each other, because that creates innovation and pushes technology forward faster :)
@TheCubasy Very well. Suppose they get the real-time rendering working as claimed. I want to know how object collisions will work. If the rendering doesn't kill today's common processors, as I have inferred from their videos, particle physics will unless they have some fancy way to make acceptable, non-elastic collisions.
This sounds very interesting. Can't wait to see how this will work out.
Wow, if that really works as well as they say it does, then graphics are about to get amazing! Great idea!
Very interesting way of creating computer graphics, but TBH, I don't think it will be used very much. Just like you said how color started at 2 colors and ended at 32-bit color because our eyes can't see much difference from that point on, I think the same applies here. The polygon count will continue to rise steadily, and eventually computers will be able to run enough polygons at once that our eyes won't be able to see the difference between this and polygons.
@gabrielex Modelling wouldn't change. Obviously people wouldn't make every individual point, it just means they don't have to worry about polygon count. The lighting however is a much more complicated problem. I can see lightmaps working fine with this system, but any kind of dynamic lighting is going to be extremely difficult to achieve.
he's not saying that they can't go further, but it's like the difference of something running at 200 fps vs something running at 10000 fps. There's a difference, you just won't be able to notice it.
I think I got what you were trying to convey here, and from your video it obviously works wonders; if I tried to load up some of those scenes just to view in something like UDK with polygons, the developer would shit itself. IDK if you came up with this yourself or you had help, but it looks amazing. I wish you the best, man, and hopefully I can have a development kit to play around with soon =D
@hex37 if that's the case, it would still take tons of VRAM and processing power in order to render these objects composed of trillions of voxels.
This has certain applications, mostly for terrain, but it's hard to animate, hard to texture, even harder to light, hard to do hit detection with, and hard to do physics-related stuff with.
@Oatinator Yeah, but you wouldn't have to render the polygons, just use them as collision meshes. True, it would be a lot more work, but it would be worth it...
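The "collision meshes you never render" idea above is standard practice today: keep a crude invisible proxy shape alongside the detailed model and run collision tests against the proxy only. A minimal sketch with a hypothetical axis-aligned box standing in for the collision mesh:

```python
# Hypothetical collision proxy: the detailed point-cloud wall is never
# tested directly; a simple, never-rendered box approximates it.
def point_in_box(p, box_min, box_max):
    # True if point p lies inside the axis-aligned box [box_min, box_max]
    return all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))

# invisible proxy for a wall: 4 units wide, 3 tall, 0.5 thick
wall_min, wall_max = (0.0, 0.0, 0.0), (4.0, 3.0, 0.5)

print(point_in_box((2.0, 1.0, 0.2), wall_min, wall_max))  # -> True: hit
print(point_in_box((2.0, 1.0, 2.0), wall_min, wall_max))  # -> False: miss
```

In practice the proxies are low-poly meshes or convex hulls rather than boxes, but the principle is the same: the physics shape and the rendered shape are separate data.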
Well, I think that's the point of this demonstration. It looks like they have apparently come up with some design that allows the computer to load small chunks of the model data into RAM, pick out info for 1 pixel, and toss the rest very quickly. Yeah, it still sounds like that wouldn't be possible with commercial hardware, but maybe that's exactly what they have accomplished. Maybe they have some of their own hardware.