No they did not, they created snapshots of different point clouds. Basically they did animation the same way it is done with sprites, by loading in a different point model. This may work with one model, but the performance overhead grows very fast because it is a very inefficient way to animate 3D models. Basically they cheated animation in order to try and rescue their claims. The same goes for their church demo. There are open source point cloud engines that do a way better job than their engine.
vuurmark Can you point me to a voxel renderer/point cloud engine that is able to load data even nearly as fast as UD...? Btw you should take a look at atomontage engine which also does real time skeletal animation with voxels.
"When it's ready".... like Duke Nukem Forever? Announced in 97 and released in 2011, only to be hated and called the worst game of 2011. Don't be another duke nukem forever.
So... how exactly do you store all the point-cloud data required for the thousands of objects found in games without taking up a lot of memory and avoiding repeating the graphics as the demo does?
Still focused on the gaming industry Euclideon is still developing exciting new technologies for the gaming industry. Despite its interest in the spatial industry, Euclideon has not forgotten its roots in gaming technologies, and is continuing to develop exciting and revolutionary technologies to be used in next-gen games. Watch this space for future announcements. Source: www.euclideon.com/company/history/
Nvidia is the reason they haven't released anything. They claimed they can run great graphics without even using the graphics card, and that's a big threat to Nvidia. Nvidia is known for using their power to destroy threats, usually from small companies.
The online point cloud examples that were on their site (without much detail or area) said in the info that they took several gigabytes of storage. They still don't have alpha, specularity, reflection, etc. on each atom, which all modern engines have and which would add a lot more. That explains why it's in Holoverse in Australia and not on computers.
Uncompressed point clouds like this can take even hundreds of terabytes. The scene looks blocky and probably has a shitload of repeating elements, so it's probably a few gigabytes after compression.
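An order-of-magnitude check of the "hundreds of terabytes" figure above. The per-point layout (xyz as float32 plus three colour bytes) and the point count are assumptions made purely for illustration; nothing about Euclideon's real format is known.

```python
# Order-of-magnitude storage estimate for a raw, uncompressed point cloud.
# The per-point layout and point count below are illustrative assumptions.
BYTES_PER_POINT = 3 * 4 + 3        # 15 bytes: three float32 coords + RGB
POINTS = 10 ** 13                  # ten trillion points for a dense large scene

raw_terabytes = POINTS * BYTES_PER_POINT / 1024 ** 4  # roughly 136 TB raw
```

At ten trillion points, raw storage already lands in the hundred-terabyte range, which is consistent with the comment's claim before any compression or instancing.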
i don't think they've given any downloadable game engine related stuff almost a decade later. they have great 3D scan tours and software to make them, but not usable for games.
Why aren't there any moving objects in any of the 'unlimited detail' videos? Could it be that all the 'highly detailed' areas they show us are just pre-loaded from the HDD, and when it's rendered no new calculations are needed since nothing moves in it, so it's just a gallery? This same thing has been done before.
They have 5-20% compression for static coloured atoms. A mapping company called Aerometrex stated a few months back (since removed) that they converted a 36MB point cloud to something like 9MB, which backs this up a bit. They said it was a 'pretty good' point cloud renderer. So storing normals/reflectance/material pointers etc. would increase file sizes by quite a bit, perhaps?
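A quick check of the conversion figure quoted above, taking the 36MB and 9MB numbers at face value: the result is 25% of the original size, which sits just outside the quoted 5-20% band, presumably why it only "backs this up a bit".

```python
# Compression ratio implied by the quoted 36MB -> 9MB point cloud conversion.
original_mb, compressed_mb = 36, 9
ratio = compressed_mb / original_mb   # 0.25: compressed file is 25% of original
```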
I can't wait for this. I am wondering: would things be destructible in it? And how would Nvidia and ATI react to this? Would the graphics card need to be as demanding... and what would happen about that?
In other words - nobody who knows what they are doing will use a production codec (H.264, AVCPRO HD, MPEG-4, etc) for use within a compositing/vfx suite or for re-encoding purposes if the footage is to be further edited.
Agreed. I understand that their "search algorithm" can search a database of points for screen pixel representation. The database is a known variable, i.e. a collection of points with fixed coordinates. As soon as you introduce unknown variables such as the interaction of light, physics, movement and other dynamic properties, the search algorithm must take new information into account every frame. Since we don't have that much info yet, these fundamental aspects remain speculation.
Alright, so if there are several atoms per pixel on screen, how does it know which atom to choose for the pixel? Wouldn't this cause a lot of aliasing and graphical artifacts?
I'm curious how they will implement lighting and material properties (water, clouds, etc.) and how it will be handled by the power reserve coming from the video card.
This is my opinion of what's going on here: from what I can see in the video, there IS a kind of LOD happening in that the atoms enlarge within the models as the camera pulls farther away. So if an object is up close, thousands of atoms-per square inch are rendered. When the object gets further away, the atoms scale up in size so that only a handful of them per inch are rendered. (Atoms-behind-other-atoms aren't rendered.)
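The distance-scaled atom density described in that comment can be sketched with a toy falloff model. The formula, the `atoms_per_inch` name and the numbers are all invented purely to illustrate the behaviour, not taken from Euclideon.

```python
# Toy distance-based LOD: atom density (atoms per inch of surface) falls off
# with camera distance, so fewer, larger atoms are drawn for far objects and
# the on-screen atom count stays roughly constant.

def atoms_per_inch(base_density, distance, falloff=1.0):
    # Hypothetical falloff model; clamps at one atom per inch minimum.
    return max(1.0, base_density / (1.0 + falloff * distance))

up_close = atoms_per_inch(1000.0, 0.0)    # full density up close
far_away = atoms_per_inch(1000.0, 99.0)   # only a handful per inch remain
```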
and 2,073,600 pixels is far less than, say, 1,000,000,000,000 pixels from all the polygons (and I'm pretty sure games these days use more than that), so if your PC runs, say, Battlefield 3 and Crysis, you'll have no problem running this engine. That's not to say there are hardly any atoms in the island demo they showed; it's just that there are only going to be as many atoms showing on the screen as your resolution has pixels. Please correct me if I'm wrong. Hope this helps :)
It is one thing to store the information of a bunch of static coordinates somewhere and locate them for pixel representation. It's a different thing altogether for those same points, or whatever they want to call them, to actually change places, interact with photons/light rays/electromagnetic waves, interact with other points and so on. That is why we will probably not see anything more than this: the complexity increases by many orders of magnitude.
That is truly spectacular. Well done euclideon. Even if it was slow on the real time demo the fact that it was rendered live and didn't just fry the laptop was good enough for me. I look forward to this future. Congrats.
I mean, I think the unlimited detail refers to the environment which is made out of that cloud point data or however they're storing it. So without taking into account the memory, the environment could potentially have as much detail as is wanted. Then the rendering only using what it needs.
I was wondering if anyone can answer my question... What if I'm planning to build a high-end gaming PC by buying the parts separately, which will cost me about $1500 (without CrossFire or SLI)? Will I have problems running an unlimited polygon texture technology? Will I have to wait for even better and more powerful graphics cards? Sorry if this is a stupid question... Thanks in advance for taking your time to reply.
It's currently all running in software. What that means is that the graphics card is just sitting at idle and the processor is doing all the work. Once it's more complete and running in hardware, the CPU usage will be a lot lower and the GPU will be used much more.
I would like to say something that may... just may help this company. Now, I've got to say I'm 13 and I have good skill in graphics. But is there a way you can actually use the shaders in the polygons, and also use the polygons and change their shape smaller than an atom? If so, that may actually help. I'm just giving a suggestion, but if it works I want some credit. So try that.
It depends on how much data they store for each voxel, unfortunately they only seem to be storing colour at the moment. If any single area has a block that's, say, 1024*1024*1024 of empty space, then that will be represented by 1 bit being zero and the end of the traversal down the tree. It can't really be explained on a UA-cam comment section. Anyway they may be using something entirely different. Did you look at the Atomontage stuff yet?
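One way to picture the "empty space costs almost nothing" point above is a sparse octree node where an absent child marks an empty octant, so traversal stops immediately there. This `Node` layout is a hypothetical sketch, not Euclideon's actual format.

```python
# Minimal sparse-octree sketch: an absent child (None) marks an empty octant,
# so an empty region costs one reference whether it spans 2^3 or 1024^3 units.

class Node:
    def __init__(self, colour=None):
        self.children = [None] * 8  # None = empty octant; traversal ends here
        self.colour = colour        # leaf payload: colour only, as the comment says

root = Node()
root.children[3] = Node(colour=(200, 150, 90))  # one occupied octant of eight

occupied = sum(child is not None for child in root.children)
```

Seven of the eight octants here cost nothing beyond the `None` slot, which is the whole memory argument for sparse voxel structures.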
It's not a scam. They just launched their first software package called Geoverse. It's been confirmed to display laser-scanned point cloud files up to 140 Terabytes in size. The massive file only needs to be on one computer on a network, and all other computers on the network can access it using the Geoverse software installed on a laptop. That kind of technology seems impressive, and actually bodes well for the future of online gaming.
So I hope they do have it working smoothly. People aren't going to walk around an in-game plant for several minutes waiting for its procedural polygons to turn up.
I for one, can't wait for this to happen. I hope this hits the market within the next 2-4 years so we can really see it take effect. This is the beginning of a marriage between art and the industry.
That shaders work is already shown in this video: you have the water reflection and the simple shadows. The Fox Engine looks that good in current videos because the makers actually cared about colour behaviour, and so forth. They looked at every texture and asset to make it behave realistically in colour under any given lighting situation, and practically built the lighting system around that. Something amazing no other studio has done with that accuracy so far.
Why is it so hard to believe the progress they have made? Has anyone ever seen Microsoft's SeaDragon project (infinite zoom)? And Microsoft Photosynth? Euclideon looks like they combined both technologies. They don't have to show you anything, because the online videos aren't for you; the videos are more like a brochure for investors, focused on solutions. What about the company Magic Leap? They provided even less informative videos about their AR tech, yet everyone rallies around how potentially groundbreaking their tech is. I'm excited for what kind of future Euclideon will make.
+Jacob Santos You either have a vested interest or use the same Reality Distortion Filter that Bruce does. The product looks so much worse than what Artificial Studios was producing in 2001. Ridiculous.
+Jacob Santos What progress? Where is the progress? I see no progress. You know how you get investors? Show that your shit actually works, they have been circle-jerking this thing for a decade now. Each video they release claims they will have something to show within the next months, yet they go quiet and do not do anything. Understand, they have missed every deadline they have set, and not even fulfilled what they promised after their deadlines. If this works, great, wow, good for all of them and all of us. Also those Microsoft projects turned out to be complete flops, SeaDragon had very poor quality, and I guess Photosynth did work, but that is only used for the modeling industry and better third party software does exist for this.
No, it's not tracing any "rays"; it's just tracing single points in 3D space with respect to the points of origin, which are calculated with respect to the camera position and angle. As for the shadows and stuff, they are changing the colors (gamma) of the atoms to produce a shadow effect, in combination with some twisted algorithm.
It does look like a voxel renderer (point cloud... Sounds really familiar from a while back, will need to look it up). I'd be interested to look at their memory compaction stuff, maybe SSDs for swap could be the deciding factor here
Oh I agree, we will have to wait and see. The lighting in this is fixed, or 'baked on'. I want to see several dynamic lights moving through that scene; otherwise we'll be sticking to polygons.
You don't. They moved the problem from poly count limits to a unique-instances limit (and probably very hardware-intensive or prerendered animation and physics).
Well, the older stuff looks like Meagher's work because of the errors down the sides of the screen in some videos. This is a 'feature' of his technique. He patented a way of drawing voxel objects 25 years ago. In the end, we'll never really know how Euclideon works, but drawing the objects directly in screen space rather than ray-tracing to them seems very efficient to me. There are other videos that use his technique on UA-cam if you search "Point Cloud Renderer".
this makes perfect sense. you only need enough memory to fit all the atoms in one map, but only X*Y atoms are visible at ANY time, where X and Y are the resolution. For a resolution of 1920*1080 you will ALWAYS have 2073600 atoms visible, one for each pixel on screen, no matter how high the atom resolution of objects or the draw distance.
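The arithmetic in the comment above, spelled out. It assumes, as the comment does, that the renderer really resolves exactly one atom per screen pixel; whether the engine achieves that is exactly what is in dispute in this thread.

```python
# If exactly one atom is resolved per screen pixel, the per-frame atom count
# is bounded by resolution alone, regardless of scene detail or draw distance.
width, height = 1920, 1080
visible_atoms = width * height        # 2,073,600 at 1080p, always
```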
If I understood it right, the key point is not the graphics, animation, voxels and whatnot people are discussing. It's just the fact that they only use the stuff that can be displayed by the monitor. And this is basically how human eyesight works: if you have a complex object that is very far away, you probably just need to display 1 pixel, but in today's games the object is still computed with all its details even though you can't see all of it. This is pretty stunning; you guys get all my kudos.
The skeptic in me deeply, deeply worries about collision detection and things like this, but the optimist in me thinks of how awesome the destructible environments would be. I want this to succeed!
also it seems like he talks about a process similar to tessellation in the video. which has already seen light whether or not his company was involved.
Why are the comments disabled on most of the other euclidion videos?
Of course they are; most companies do that.
+Michael Sykes Because it's a scam.
+Michael Sykes Jesus christ do some research, they've been at it for years.
Eminor It takes time to program.
And If you still think it's a scam, you can try it yourself if you have a TB drive ready
+Kim André Moen Bjørkede Thanks for telling a developer how things are developed, I really needed your opinion. I'm growing less patient every day of people like you. Read something sometime, do some research, study, get informed, use that fucking brain of yours for fuck sake. You are a fucking human being, you have the best computer ever made built into and onto you, USE IT or you're no different from a fucking animal. jesus fuck sorry for being rude but I'm fucking fed up of you fucking ignorants poisoning the world. You are ruining society with your demented non-reasoning, the world is fucked up because you can't make fucking 2+2, ENOUGH! WAKE THE FUCK UP! You are destroying the planet!
I am very concerned that Euclideon has disabled all comments on 95% of their videos...
maybe because it's just a bunch of haters who eagerly speculate instead of giving only a minimum amount of shit and patiently wait for euclideon's next video as their technology slowly goes forward...
the problem with gamers is they care waaaay too much about something that is not to be cared of yet.
"
(gamer sees video)
wow dude !! this is pure awesomeness !!!!!!!!!11two
(sadly understands that it will not be available next month as he first thought.)
bullshit !! this is just a fucking pile of crap !!!!!!!!!!!!!!!!!!!!!!
"
after a while you are just fed up with those spamming your mailbox so you disable comments.
zecle thank you, I talked to Bruce and this is the exact reason why. He said that some of his staff were becoming depressed, and so he disabled the comments.
so, where is it 9yrs later?
So, what about this "new technology" in 2k19?
It is still active; several locations have opened and funding is great. They are still going forward with this, and in Australia there are now hologram rooms that you can pay to use.
The thing is that I don’t even think it was really new technology when it came out, it was just supposedly a really good implementation of it. They went on to use their technology to make software for professional city planners and people like that, where they can load in an extremely detailed 3D scan of a city down to the inch.
The problem is that it was ahead of its time in terms of games. The “art” assets (atom cloud models) would have been gigantic, and 10 years ago 150GB games were not the norm. On top of that, they would have had to convince game studios to use their brand new engine, which would require training in all areas. On top of both of those (and related to the second point), as far as I’m aware shaders wouldn’t work in an engine built this way, and a very, very large portion of graphical “settings” in a video game are simply rendered by shaders (lighting effects, anti-aliasing, shadows, etc)
Forget PS4 and XBOX ONE, This'll run on my dreamcast!
you could theoretically get the 300k pixels/frame needed every frame if you take advantage of the "beefy" 200mhz processor (even if by current standards, it is more like vegan appetizer) which can deliver about 300Mips and if you can find every pixel with about 20 instructions/px you could still have some processing power left for the rest of the game.
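The budget arithmetic in the comment above can be sanity-checked directly. The 300 MIPS, 20-instructions-per-pixel and 300k-pixels-per-frame figures are the commenter's assumptions, not measurements.

```python
# Pixel budget for a hypothetical 200MHz software renderer, using the
# figures from the comment above (assumptions, not benchmarks).
MIPS = 300_000_000           # ~300 million instructions per second
INSTR_PER_PIXEL = 20         # assumed cost to resolve one pixel's point
PIXELS_PER_FRAME = 300_000   # target pixel count per frame

instructions_per_frame = INSTR_PER_PIXEL * PIXELS_PER_FRAME   # 6,000,000
max_fps = MIPS / instructions_per_frame                       # 50 frames/sec
cpu_left_at_30fps = 1 - (instructions_per_frame * 30) / MIPS  # 0.4, i.e. 40%
```

At the claimed numbers the pixel search alone caps out at 50 fps, leaving about 40% of the CPU free when running at 30 fps, which matches the "some processing power left for the rest of the game" remark.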
So it's been a year and... 11 or so days. When's this live demo?
Vid is 4 years old, still none of this shit in a game since.
I'm calling bullshit.
*****
They have a recent video out, of course comments are disabled...always a sign of bullshit.
***** They are releasing 2 games this year
Peťo D They didn't say this year, I don't think.
Llurendt
they did! visit the site
Peťo D They have a geospatial homepage with news from 2013, sounds fresh
A decade later and nothing he has done with his Unlimited Detail Hoax has contradicted Notch's point in any way. His pivot away from video games to leeching government grants and making static display pieces is very telling. The Unreal 5 demo on the other hand does everything this snake oil salesman failed to achieve and shows that polygons are still the future.
The most grating thing in all of this is the way the slimy salesman carries himself. His initial response to Notch was to ignore the substance of his arguments, play the victim, and attack him with empty sophistry like a politician, labeling his critique "mindcraft". Even in this interview he denies making something comparable to the examples Notch listed, but fails to elaborate or demonstrate why besides being on a more granular scale.
I remember when seeing this douche 10 years ago that he was full of it. Agree with everything you said. Unreal 5 is insane.
Also worth mentioning: he seems every bit as slimy today as he did back then. That the only folks he was able to convince to give him the time of day were politicians is also worth noting. Many politicians are tech-illiterate. It's a massive problem.
I expected another video in August 2012, for another 1 year report... Haven't found one yet :(
I really do want to believe but it is 2016 now and still no more to show?
+tobo26 Same!
They have a web demo now, go check it out :)
Hiro Flux Still has a very long way to go I think.
tobo26 i'm not so sure, i think you'll see this implemented in games and what not sooner than you think
We have an update that we will post soon, but if you google Holoverse you will see it's out and working, just not in mainstream games.
OMG this is real!!! Haha, he had some humour about it, saying that it was connected to 6,000 computers xD.
can i download this real time demo anywhere?
not a thing to show yet
+Eminor amen brother !
Glad this stupidity and awfulness doesn't just offend me, I want to roundhouse kick this guy in the throat.
No point talking shit, do we have dates to view finished produce demo ?
they have a web demo now- go have a look for yourself :)
Yep we do, I did it last Friday :) But it's not a demo, it's a full working game in a holodeck. Check out Holoverse.
This is STILL disgusting... Every year they put a video out, comments locked, and show off all of their broken logic to a public who can't say for sure that it's a hoax because they don't understand the technology used... But to those of us who use current methods of 3D modelling and animation, it's got a dozen obvious things that make it impossible. I'm upset, not at the "company" for making this hoax, but at the public for believing it.
Every individual point would need an xyz coordinate, a value for color, and the capability to change all of those in real time in accordance with the surroundings. This means tens of billions of points, each with their own 5+ variables attached... No matter how optimized they make that, it still wouldn't even catch up with what we have in current gen already.
Not to mention how they claim big companies are already using this tech, yet give no names. Also, the most recent videos show graphics that aren't even on par with current gen and are obviously pre-rendered, and they try to say they're indistinguishable from real-world video... Such a nuisance for real inventors and graphical engineers when fakers like this show up. It's a real shame...
+EclipsedArchon what public?
+EclipsedArchon
thank you for fighting against the dumbing down of humanity.
words of wisdom.
Well no, instancing can be used. For example, one tree can be put in a group; the group can have just its own xyz coordinate, rotation and scale (even transform values over time, allowing for animation). This will not use that much data. A lot of the stuff in that demo is duplicated models.
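A minimal sketch of the instancing idea that comment describes: many placements share one point cloud, and only the per-instance transform is duplicated. The `Instance` class and the tree data here are invented for illustration, not anything from Euclideon.

```python
# Instancing sketch: a thousand placed trees share one point list; only the
# per-instance transform (position, scale) is stored per tree.

tree_points = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, 2.0, 0.0)]  # shared data

class Instance:
    def __init__(self, points, position, scale=1.0):
        self.points = points        # a reference to shared data, not a copy
        self.position = position
        self.scale = scale

    def world_points(self):
        # Apply this instance's transform to the shared model on demand.
        px, py, pz = self.position
        return [(px + x * self.scale, py + y * self.scale, pz + z * self.scale)
                for x, y, z in self.points]

forest = [Instance(tree_points, (i * 10.0, 0.0, 0.0)) for i in range(1000)]
```

A thousand trees here cost a thousand small transforms plus one shared point list, which is exactly the memory argument being made.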
okhstorm You don't understand the root problem. You can't draw that much geometry on screen at once, regardless of whether or not it's repeat geometry...
+EclipsedArchon i disagree. the whole point of what the inventor of this tech is telling people is that this stuff isn't all drawn in memory at once, a search algorithm is used which queries this: "what should I display per pixel?" to answer that, the scene is divided up by currently 1024 by 768, so that's 786,432 searches per frame, and the answer for each pixel can go like this: pixel 1 has group 1 under it (using lowest z depth value for priority), what is in group 1? point cloud data 1. does any part of point cloud data 1 lie under pixel 1 (within certain tolerance)? yes=gather rgb value and send to gpu. no= skip group and look at next thing. next thing = group 2, this contains point cloud data 2, etc. see now?
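A deliberately naive sketch of the per-pixel search loop described above. The real algorithm is undisclosed; the flat group/point layout and the linear scan here are invented stand-ins for whatever hierarchical structure Euclideon actually uses.

```python
# Naive "one search per pixel": walk the groups front to back and take the
# first point that projects onto the queried pixel. A real engine would use
# a hierarchical structure, not this linear scan.

def query_pixel(px, py, groups, tolerance=0.5):
    # groups: list of (depth, points) pairs; points are (x, y, z, rgb)
    # tuples already projected into screen space.
    for _, points in sorted(groups, key=lambda g: g[0]):  # nearest group first
        for x, y, _, rgb in points:
            if abs(x - px) <= tolerance and abs(y - py) <= tolerance:
                return rgb      # first hit wins; this colour goes on screen
    return None                 # nothing under this pixel: background

def render(width, height, groups):
    # One independent search per screen pixel, as the comment describes.
    return [[query_pixel(x, y, groups) for x in range(width)]
            for y in range(height)]
```

At 1024x768 this is the 786,432 searches per frame the comment mentions, one `query_pixel` call each.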
I think this still has a future.
I have never understood people who say such mean things about someone's invention because it is not progressing as fast as YOU want it to, or does not live up to YOUR expectations.... First off, it is NOT about you, it is about the inventor and their project... They understand the need for secrecy. In our world today, with the overly high amount of theft and the continual lack of morals, they are keeping it under wraps until all of the bugs are worked out and they have it exactly how they want it. If they were to release their tech to company Z (for example) to make a demo, what would stop company Z from taking the tech, tweaking it a little & releasing it as their own? For those of you actually paying attention, who know anything about writing code and creating software: these things take time, & they only have 9 employees... so this is NOT going to happen overnight. They are not going to give up their lives and sit in front of a computer screen endlessly just to finish it and make someone else happy. They will give it the time & energy it needs to do what they want it to do. Remember, they are creating NEW TECH, & there is NO blueprint or guide. They are creating it fresh, from the ground up. If this upsets you, then get off your butt and create something similar quicker.... If you could....
As for those of you complaining about Euclideon's recent videos: you are seeing a NEW program still in alpha testing that is NOT 100% finished. Complaining about "not being impressed" is like going up to Henry Ford in 1908 and saying you're "not impressed" with the Model T because it cannot go 0-60 in 4 secs... Let's be honest for a minute (and very few of you will be): NONE of you are experts on this subject, and NONE of you have ANY right to judge their product and their decisions. Just because you spent $400 on a PS4 or Xbox One DOES NOT make you a videogame expert; it makes you a user, a player, a customer, nothing more... Now put that in your pipe and smoke it...
Before comparing it with the greatest inventions of all time, let's do a recap
Euclideon 2011: "We found a way to give computer graphics UNLIMITED power" "Yes we can do animation"
2015: Static point cloud content, no growth, silence
People: Not being impressed
Simple, isn't it?
hombacom they did a demo of animation in a video interview
aeldred28 Wikipedia 2015: "software company best known for an unreleased middleware 3D graphics engine" Yawn..
+hombacom i don't understand your comment to me, did you reply to the wrong person? what?
+aeldred28 i just said they did a demo, what does wikipedia have to do with what i said?
Imagine kerbin fully detailed. I want a KSP mod
pop1040 The Unity engine sucks ass, no x64 support on Unity 5! WTF! KSP will never have good graphics on Unity.
Now tell me guys, it's almost 2013 now, right? Where's the new video?
Why did you guys disable comments on all of your other videos?.. Hiding something?
1. Let these guys work in peace. Give them what ever resource they need.
2. Let these guys work in peace. Give them what ever resource they need.
3. Let these guys work in peace. Give them what ever resource they need.
@trin__3967 They fucked up by gatekeeping their tech and not releasing anything to the public.
Hey it will soon be another year, does that mean we will get another update ??? =)
And then you ask them to make it interactive
And they did.
No they did not; they created snapshots of different point clouds. Basically they did animation the same way it is done with sprites, by loading in a different point model for each frame. This may work with one model, but the performance overhead grows very fast, because it is a very inefficient way to animate 3D models. Basically they cheated animation in order to try and rescue their claims. The same goes for their church demo. There are open source point cloud engines that do a far better job than their engine.
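The flipbook-style "animation" this commenter describes could be sketched like so. Everything here is illustrative (class name, filenames, frame rate are all assumptions), but it captures why the memory overhead grows so fast:

```python
# Sprite-style point cloud animation: instead of deforming one model,
# a different pre-baked point cloud snapshot is loaded for each frame,
# exactly like flipping through 2D sprite frames.

class SnapshotAnimation:
    def __init__(self, snapshots):
        # one COMPLETE point cloud per animation frame
        self.snapshots = snapshots

    def frame(self, t, fps=24):
        # pick the snapshot for time t, looping the cycle
        index = int(t * fps) % len(self.snapshots)
        return self.snapshots[index]

# memory cost grows linearly with frame count: a 100 MB model with a
# 60-frame walk cycle needs ~6 GB of snapshots, which is the overhead
# the commenter is objecting to.
walk_cycle = SnapshotAnimation([f"walk_{i:03d}.bin" for i in range(60)])
```

Skeletal animation, by contrast, stores one model plus a small set of bone transforms per frame, which is why it scales so much better.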
vuurmark Can you point me to a voxel renderer/point cloud engine that is able to load data even nearly as fast as UD...? Btw you should take a look at the Atomontage engine, which also does real-time skeletal animation with voxels.
"When it's ready".... like Duke Nukem Forever? Announced in 97 and released in 2011, only to be hated and called the worst game of 2011. Don't be another duke nukem forever.
lucky for us it only took 4 years to come out.. now its time to see where it ends up.
So... how exactly do you store all the point-cloud data required for the thousands of objects found in games without taking up a lot of memory and avoiding repeating the graphics as the demo does?
Unlimited bullshit 😂🤣😂
Fake - the scene lighting is dead - it's precalculated. Investors have been duped.
Still focused on the gaming industry
Euclideon is still developing exciting new technologies for the gaming industry.
Despite its interest in the spatial industry, Euclideon has not forgotten its roots in gaming technologies, and is continuing to develop exciting and revolutionary technologies to be used in next-gen games. Watch this space for future announcements.
Source: www.euclideon.com/company/history/
Nvidia is the reason they haven't released anything. They claimed they can run great graphics without even using the graphics card, and that's a big threat to Nvidia. Nvidia is known for using their power to destroy threats, usually from small companies.
+Divad Mlap They haven't updated their website in over a year. I think this is a sham to get free money from investors.
+Divad Mlap
fucking idiot
+Anakin Ryan they are lying
How do you texture these models? Do you have your own model editor using the same search algorithm?
They'll hopefully show it on the next report. We'll just have to wait and see.
Any idea about the size of the game?
No clue on this. Video is many years old now. Some followups have been done.
The online point cloud examples that were on their site (without much detail or area) had info stating they took several gigabytes of storage. They still don't have alpha, specularity, reflection, etc. on each atom, which all modern engines have and which would add a lot more. Which explains why it's in Holoverse in Australia and not on computers.
Uncompressed point clouds like this can take even hundreds of terabytes. The scene looks blocky and has probably a shitload of repeating elements, so it's probably few gigabytes after compression.
well i'm convinced. great work guys, i hope you release a demo soon :)
i don't think they've given any downloadable game engine related stuff almost a decade later. they have great 3D scan tours and software to make them, but not usable for games.
it's been at least 2 years since Euclideon last update. Where's the update that they'd promised :\
They show animations in an early stage AND he also said the games will not be memory consuming. Please watch the video again.
They were saying that they need one polygon for one pixel, but what about anti-aliasing?
36:00 exactly, it will never work fast unless you use a low-polygon collision map
Why aren't there any moving objects in any of the 'unlimited detail' videos? Could it be that all the 'highly detailed' areas that they show to us are just pre-loaded from the HDD and when it's rendered no new calculations are needed since nothing moves in it, so it's just a gallery. This same thing has been done before.
Where download demo 22:40 ?
No clue. This video is 6 years old.
And nothing has changed
www.hardocp.com/article/2016/09/12/bruce_dell_euclideon_holoverse_interview
Probably companies are against this technology, every year they need to sell video cards!
Why only one Holoverse place in the world when Unlimited Detail can run on all laptops in the world
It's no more private! You can go and watch it on their official channel.
They have 5-20% compression for static coloured atoms. A mapping company called Areometrex stated a few months back (since removed) that they converted a 36MB point cloud to something like 9MB, which backs this up a bit. They said it was a 'pretty good' point cloud renderer. So storing normals/reflectance/material pointers etc. would increase file sizes by quite a bit, perhaps?
i can not wait for this, I am wondering if things would be destructible in it? and how would nvidia and ati react to this? would the graphics card need to be as demanding..and what would happen about that ?
Look on the euclideon website: why would the software support files up to 140TB unless the files used by this engine were unusually large?
In other words - nobody who knows what they are doing will use a production codec (H.264, AVCPRO HD, MPEG-4, etc) for use within a compositing/vfx suite or for re-encoding purposes if the footage is to be further edited.
Agreed. I understand that their "search algorithm" can search a database of points for screen pixel representation. The database is a known variable, i.e. a collection of points with fixed coordinates. As soon as you introduce unknown variables such as the interaction of light, physics, movement and other dynamic properties, the search algorithm must take new information into account every frame. Since we don't have that much info yet, these fundamental aspects remain speculation.
Alright, so if there are several atoms per pixel on screen, how does it know which atom to choose for the pixel? Wouldn't this cause a lot of aliasing and graphical artifacts?
I hope we get some sort of a demo real soon!
I'm curious how they will implement lighting and material properties (water, clouds etc.) and how it will be handled by the power reserve of the video card
This is my opinion of what's going on here: from what I can see in the video, there IS a kind of LOD happening in that the atoms enlarge within the models as the camera pulls farther away. So if an object is up close, thousands of atoms-per square inch are rendered. When the object gets further away, the atoms scale up in size so that only a handful of them per inch are rendered. (Atoms-behind-other-atoms aren't rendered.)
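The distance-based LOD this commenter is describing, where atoms enlarge as the camera pulls back so on-screen point density stays roughly constant, can be sketched as picking a tree depth whose node size projects to about one pixel. All the numbers and names below are illustrative assumptions, not Euclideon's actual scheme:

```python
# Sketch of distance-based LOD for a point/voxel octree: the farther
# the camera, the larger (coarser) the atoms rendered, so roughly one
# atom covers one pixel regardless of distance.

import math

def lod_level(distance, atom_size=0.001, pixel_angle=0.001, max_depth=20):
    """Pick the octree depth whose node size projects to about one pixel.

    distance    -- camera-to-object distance (world units)
    atom_size   -- size of the finest atom (world units), an assumption
    pixel_angle -- angular size of one pixel (radians), an assumption
    """
    # world-space size that one pixel covers at this distance,
    # never finer than the finest stored atom
    target_size = max(distance * pixel_angle, atom_size)
    # each level up the tree doubles node size, so the number of
    # levels to skip grows logarithmically with distance
    levels_up = max(0, math.ceil(math.log2(target_size / atom_size)))
    return max(0, max_depth - levels_up)
```

Atoms behind other atoms are then simply never fetched, which matches the "(Atoms-behind-other-atoms aren't rendered.)" observation above.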
Did you watch the video? They show animation.
and 2,073,600 pixels is far less than, say, 1,000,000,000,000 pixels from all the polygons (and I'm pretty sure games these days use more than that), so if your PC runs say Battlefield 3 and Crysis, you'll have no problem running this engine. That's not to say there are hardly any atoms in the island demo they showed; it's just that only as many atoms will show on screen as your resolution allows. Please correct me if I'm wrong. Hope this helps :)
they actually have come back fairly recently... they posted a new video a couple months ago on their channel euclideonofficial
does this get rid of antialiasing problems altogether?
uuh when is it gonna be released
It is one thing storing the information of a bunch of static coordinates somewhere, and locating them for pixel representation. It's a different thing altogether to use those same points or whatever they want to call them to actually change places, interact with photons/lightrays/electromagnetic waves, interact with other points and so on.
That is why we will probably not see anything more than this, because the complexity increased by many orders of magnitude.
It is 2013, when are you going to reappear?
There's no rendering involved. Have you actually been listening to the technical details in this vid? It actually makes sense if you listen.
That is truly spectacular. Well done euclideon.
Even if it was slow on the real time demo the fact that it was rendered live and didn't just fry the laptop was good enough for me. I look forward to this future. Congrats.
Couldn't they do animations in this incredible engine? Still waiting
I mean, I think the unlimited detail refers to the environment which is made out of that cloud point data or however they're storing it. So without taking into account the memory, the environment could potentially have as much detail as is wanted. Then the rendering only using what it needs.
27:10 "LOD" isn't short for "Level of distance" but for "Level of detail"
I was wondering if anyone can answer my question... if I'm planning to build a high-end gaming PC by buying the parts separately, which will cost me about $1500 (without CrossFire or SLI), will I have problems running an unlimited polygon texture technology? Will I have to wait for even better and more powerful graphics cards? Sorry if this is a stupid question.. thanks in advance for taking the time to reply.
i can tell you why it is impossible, but there is a length restriction on comments and it's a long explanation
Can someone help me figure out how these guys are doing with this technology?
because they don't want to release any news.. how many times does that have to be said before you get it?
And what about physics and visual effects?
"our memory compaction is going remarkably well"
"I'm not going to talk about memory compaction"
it's currently all running in software. What that means is that the graphics card is just sitting at idle and the processor is doing all the work. Once it's more complete and running in hardware, the CPU usage will be a lot lower and the GPU will be used much more.
They say they will be back next year.
Why do I have a sudden feeling we'll be seeing them on Kickstarter soon...
Might want to turn on vertical sync.
I would like to say something that may... just may help this company. Now I've got to say I'm 13 and I have good skill in graphics. But is there a way you can actually use the shaders in the polygons and also change their shape smaller than an atom? If so, that may actually help. I'm just giving a suggestion, but if it works I want some credit. So try that.
They have lighting in this demo (shadows and such), and they are apparently working on animation atm, so we will wait and see.
why is bruce worried if this guy crashes the camera into the elephant?, is it because of some bug or glitch that happens?
It depends on how much data they store for each voxel, unfortunately they only seem to be storing colour at the moment. If any single area has a block that's, say, 1024*1024*1024 of empty space, then that will be represented by 1 bit being zero and the end of the traversal down the tree. It can't really be explained on a UA-cam comment section. Anyway they may be using something entirely different. Did you look at the Atomontage stuff yet?
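The empty-space trick in this comment, where an arbitrarily large empty region collapses to a single node that ends the traversal, is the classic sparse-octree idea. Here is a generic sketch (structure and names are my own illustration, not Euclideon's or Atomontage's format):

```python
# Minimal sparse-octree lookup: a completely empty region, however
# large, is a single EMPTY marker that terminates traversal at once,
# so empty space costs almost no memory and no search time.

class OctreeNode:
    def __init__(self, color=None, children=None):
        self.color = color        # set for a solid leaf voxel
        self.children = children  # list of 8 children, or None for a leaf

EMPTY = None  # one "zero bit" standing in for any amount of empty space

def lookup(node, x, y, z, size):
    """Descend to the voxel at (x, y, z) inside a cube of side `size`."""
    if node is EMPTY:
        return None               # empty region: stop right here
    if node.children is None:
        return node.color         # solid leaf
    half = size // 2
    # pick the octant: one bit each for x, y, z halves
    index = (x >= half) | ((y >= half) << 1) | ((z >= half) << 2)
    return lookup(node.children[index],
                  x % half if half else 0,
                  y % half if half else 0,
                  z % half if half else 0, half)
```

So a 1024x1024x1024 block of air is one EMPTY child at the right level instead of a billion stored voxels, which is exactly the comment's point.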
so what's the meaning of polygon in game rendering???
It's not a scam. They just launched their first software package called Geoverse. It's been confirmed to display laser-scanned point cloud files up to 140 Terabytes in size. The massive file only needs to be on one computer on a network, and all other computers on the network can access it using the Geoverse software installed on a laptop. That kind of technology seems impressive, and actually bodes well for the future of online gaming.
So I hope they do have it working smoothly. People aren't going to walk around an in-game plant for several minutes waiting for its procedural polygons to turn up.
I for one, can't wait for this to happen. I hope this hits the market within the next 2-4 years so we can really see it take effect. This is the beginning of a marriage between art and the industry.
Why don't you control the computer animation? It's just a pre-rendered movie
That shaders work is already shown in this video. You have the water reflection and the simple shadows.
The Fox Engine looks that good in current videos because the makers actually cared about colour behaviour and so forth. They made every texture and asset behave realistically in colour under any given lighting situation, and practically built the lighting system around that. Something amazing no other studio has done with that accuracy so far.
but why didn't they take out the GPU, just to demonstrate they don't use it at all?
Why is it so hard to believe the progress they have made? Has anyone ever seen Microsoft's SeaDragon project (infinite zoom)? And Microsoft Photosynth? Euclideon looks like it combined both technologies. They don't have to show you anything, because the online videos aren't for you; the videos are more like a brochure for investors, focused on solutions. What about the company Magic Leap? They provided even less informative videos about their AR tech, yet everyone rallies around how potentially groundbreaking their tech is. I'm excited for what kind of future Euclideon will make
+Jacob Santos You either have a vested interest or use the same Reality Distortion Filter that Bruce does. The product looks so much worse than what Artificial Studios was producing in 2001. Ridiculous.
+Jacob Santos What progress? Where is the progress? I see no progress. You know how you get investors? Show that your shit actually works, they have been circle-jerking this thing for a decade now. Each video they release claims they will have something to show within the next months, yet they go quiet and do not do anything. Understand, they have missed every deadline they have set, and not even fulfilled what they promised after their deadlines. If this works, great, wow, good for all of them and all of us.
Also those Microsoft projects turned out to be complete flops, SeaDragon had very poor quality, and I guess Photosynth did work, but that is only used for the modeling industry and better third party software does exist for this.
no, it's not tracing any "rays"; it's just tracing single points in 3D space with respect to the points of origin, which are calculated with respect to the camera position and angle
as for the shadows and stuff, they are changing the colors (gamma) of the atoms to produce a shadow effect, in combination with some twisted algorithm
It does look like a voxel renderer (point cloud... Sounds really familiar from a while back, will need to look it up). I'd be interested to look at their memory compaction stuff, maybe SSDs for swap could be the deciding factor here
3 years later, there is still no animation ?
+zaphod2 4 years later, there is still no animation ?
But isn't processing power doubled every 18 months or so?
Oh I agree, we will have to wait and see. This lighting in this is fixed or 'baked on' lighting. I want to see several dynamic lights moving though that scene, otherwise we'll be sticking to polygons.
so it converts vertices into atoms? Also, does anyone have an idea how this would affect light mapping and physics processing?
Most awesome concept ever. still.
+Rich3yy (〉'Richard'-) until you try the demo like I did and it feels like you were lied to, its not really unlimited detail....
+MGF100 Still good concept. Also it was just a shitty demo.
true
Rendering what pixels show ? can you please explain to me sir?
you don't; they moved the problem from poly count limits to a unique-instances limit (and probably very hardware-intensive or pre-rendered animation and physics)
Well, the older stuff looks like Meagher's work because of the errors down the sides of the screen in some videos. This is a 'feature' of his technique.
He patented a way of drawing voxel objects 25 years ago. In the end, we'll never really know how Euclideon works, but drawing the objects directly in screen space rather than ray-tracing to them seems very efficient to me. There are other videos that use his technique on UA-cam if you search "Point Cloud Renderer".
this makes perfect sense. you only need enough memory to fit all the atoms in one map, but only X*Y atoms are visible at ANY time, where X and Y are the resolution. For a resolution of 1920*1080 you will ALWAYS have 2073600 atoms visible, one for each pixel on screen, no matter how high the atom resolution of objects or the draw distance.
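The arithmetic in the comment above is easy to check: the per-frame atom budget is bounded by the screen, not the scene. A one-line sketch of that claim:

```python
# The commenter's point in one function: with one atom resolved per
# pixel, resolution alone fixes the upper limit of atoms fetched per
# frame, no matter how many atoms the scene contains on disk.

def visible_atoms(width, height):
    """Upper bound on atoms resolved per frame: one per pixel."""
    return width * height

# 1080p caps out at about 2.07 million atom lookups per frame,
# and the demo's 1024x768 at 786,432.
```

Note this bounds the number of *results*, not the cost of each search; each pixel's lookup still has to traverse whatever data structure holds the scene.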
If I understood it right, the key point is not the graphics, animation, voxels and whatnot people are discussing.
Its just the fact that they only use the stuff that can be displayed by the monitor.
And this is basically how human eyesight works.
If you have a complex object that is very far away, you probably just need to display 1 pixel, but in today's games the object is still computed with all its details even though you can't see all of it.
This is pretty stunning you guys get all my kudos
For the first time in history, Australia is actually doing something GOOD for gaming.
The skeptic in me deeply, deeply worries about collision detection and things like this, but the optimist in me thinks of how awesome the destructible environments would be. I want this to succeed!
also, it seems like he talks about a process similar to tessellation in the video, which has already seen the light of day whether or not his company was involved.
the moment where the interviewer is using the controller is very awkward, bruce dell sounds incredibly nervous.
@ZeRo2545 only if you buy for more than 1080p; it's not normal that you need another GPU for a new game just to play it at 1080p maxed
I'm curious what sort of memory would be needed if every object was unique... i.e. no repetition of objects, as in real life, to an extent.