Euclideon CEO Bruce Robert Dell was kind enough to take a few minutes to answer some questions posed by our community, so read on for the full interview:
AusGamers: How did you overcome the data storage and computational problems involved in dealing with trillions of points in space? It seems like tracking that much data would be very memory intensive, to say the least!
Euclideon: If we were making our world out of little tiny atoms and had to store x, y, z, colour etc. for each atom, then yes, it would certainly use up a lot of memory. But instead we've found another way of doing it. I could say we're using less memory than what the current polygon system uses, but if I did that I think I'd exceed my quota of unbelievable claims for the day. So we'll leave that for future demonstrations.
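[Editor's note: Euclideon has not disclosed its data structure, but one common way point-cloud renderers avoid storing explicit x, y, z per point is a sparse voxel octree, where a point's position is implied by its path down the tree. The sketch below is purely illustrative of that general idea, not of Euclideon's actual format; every name and number in it is an assumption.]

```python
# Sparse voxel octree sketch: positions are implicit in the tree path,
# so nodes store only child slots, and colour at the leaves.
# Illustrative only -- not Euclideon's actual (undisclosed) format.

class OctreeNode:
    __slots__ = ("children", "colour")
    def __init__(self):
        self.children = [None] * 8  # one slot per octant
        self.colour = None          # set on leaf nodes only

def insert(root, x, y, z, colour, depth=10):
    """Insert a point in the unit cube [0,1)^3; 'depth' levels give
    2**depth voxels per axis. Note no x/y/z is ever stored."""
    node = root
    for _ in range(depth):
        # Pick the octant by halving the cube on each axis.
        octant = int(x >= 0.5) | (int(y >= 0.5) << 1) | (int(z >= 0.5) << 2)
        # Re-centre the coordinates inside the chosen octant.
        x, y, z = x * 2 % 1.0, y * 2 % 1.0, z * 2 % 1.0
        if node.children[octant] is None:
            node.children[octant] = OctreeNode()
        node = node.children[octant]
    node.colour = colour

def lookup(root, x, y, z, depth=10):
    """Walk the same path back down; empty space costs no storage."""
    node = root
    for _ in range(depth):
        octant = int(x >= 0.5) | (int(y >= 0.5) << 1) | (int(z >= 0.5) << 2)
        x, y, z = x * 2 % 1.0, y * 2 % 1.0, z * 2 % 1.0
        node = node.children[octant]
        if node is None:
            return None
    return node.colour

root = OctreeNode()
insert(root, 0.3, 0.7, 0.1, (200, 150, 90))
```

Empty regions of the cube cost nothing, and identical subtrees can in principle be shared between nodes, which is one plausible reason a scene of "trillions of points" need not mean trillions of stored coordinates.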
AusGamers: The sample video seemed to show a lot of replicated objects and terrain features. How well is the technology able to scale to varied terrain and many different objects?
Euclideon: Several weeks ago, we decided that we needed a demo. Our aim was to show the technology, not necessarily beautiful graphics, and I think we succeeded in that task. It's not a limitation of the technology; it simply came down to not having enough time to make more objects. We only have one artist, and the poor guy has been slaving away to the point that even Cinderella would have pity on him, so please don't accuse him of too much laziness. As said before, we're a technology company, not a games company; that is all the art that could be included in the demo in such a short amount of time.
AusGamers: Does rendering in terms of "atoms" instead of polygons make it easier to implement destructible environments of the type polygon mesh engines typically have difficulty dealing with?
Euclideon: Regarding destructible environments, we haven't even touched that yet, but in theory there is no reason why we can't destroy your environments down to the single-atom level. So rather than a game where a table has been made to smash in a certain way, we're hoping that we will be able to make the table so you can cut a piece off it and carve it into a little wooden hamster if it pleased you to do so. We're hoping for total destructibility in our environments. So those of you who used to be mean kids who knocked down the sand castles of others will have a place to vent :)
AusGamers: So far it sounds like all the grunt work in your system will be done by the CPU. Is there any possibility for a user's GPU to be used in your engine, or can the engine intrinsically not be hardware accelerated like that?
Euclideon: At the moment we’re running everything very well in software alone, however, we're a greedy bunch and seeing as more power is available in the GPU why not use it? I’m sure in time we will make more use of that.
AusGamers: Several of the things you covered, like the realistic-looking trees, ground and rocks, can be achieved with tessellation under DirectX 11. How is your approach better?
Euclideon: Well, I'd like to proceed compassionately here. Tessellation is nice, I like tessellation. It was a proposed solution to the problems with low polygon counts, and it was designed by some clever people who tackled the problems of the present polygon system in a very good way. But no, I don't think that tessellated height bumps are better than real geometry; if you put the tessellation picture next to Unlimited Detail there is a pretty big difference. [See picture below]
Also, an increase in height doesn't make blades of grass. Even if we came out 4 years from now and tessellation was actually used in games, I still think infinite converted polygons would win over bumpy pictures.
AusGamers: As with your previous announcement, there's a fair bit of healthy skepticism. What are the chances that you might release a small-scope downloadable, playable technology demo that people can actually experiment with to get an idea of the technology?
Euclideon: I think the demo video we released is a little bit like the second movie in a trilogy; it's like The Empire Strikes Back. It is our intention that we will disappear again, work very hard and then come back.
At our third appearance we hope to release our real time downloadable demos for our supporters.
AusGamers: id Software's John Carmack, a renowned name in graphics programming, has made a cautiously optimistic comment that your technology sounds feasible in a couple of years, but probably not on current hardware. Does that match your timelines?
Euclideon: Firstly, I'd like to say that I greatly respect John Carmack for his enormous contribution to the 3D industry.
In light of the fact that we haven’t released real time demos, his statement is… sensible, sane, reasonable, but incorrect.
We look forward to sharing our discoveries with the 3D industry when complete.
AusGamers: Carmack also mentioned there's not enough information to know if you're doing ray tracing or "splatting". Any comment there?
Euclideon: We are not ray tracing; there are no rays. We are not splatting; there are no splats.
We'd like to thank Bruce for taking the time out to respond so comprehensively to our questions and we look forward to seeing what else they've got in store for us further down the track!
Posted 10:35pm 03/8/11
And a few other typos. Sorry to be a grammar nazi. I just assume this will get shown around the net a bit.
Posted 10:47pm 03/8/11
Have they specified how (or if) they are doing lighting at all? Seeing as light ain't no atom.
Posted 11:48pm 03/8/11
Fairly sure they want to patent+license the approach (or keep it closed and sell as a commercial API) rather than give out clues for competitors to produce an identical or similar implementation before they've finalised their own implementation.
Actually this quote from their About page confirms this:
last edited by parabol at 23:48:02 03/Aug/11
Posted 11:57pm 03/8/11
I suspect they're doing it this way more for the marketing benefits, which seems to be working!
Posted 12:23am 04/8/11
I thought that at first - but if a company with big bucks and massive resources (such as Intel or NVIDIA), especially one which already has extensive experience with 3D rendering (Intel: tick, NVIDIA: tick), were to decide to push a competing technology or implementation then I'm sure they can do it much faster than [from what I gather] the handful of developers that are working on Unlimited Detail.
Alternatively, perhaps they haven't got a comfortable implementation nailed down yet and don't quite know what the final algorithms will be like when they resolve all of their issues. So far it looks pretty glitchy, the lighting is immature, and from their responses they haven't thought hard about how objects interact. If you're not comfortable with your implementation, then you might not be comfortable talking about the details :)
I'm quite interested in this, but will wait and see if there's more to it than just talk and some videos. But yeah, the more they talk about it without giving info, the more it seems like an advertising thing, given it might all be fluff.
last edited by parabol at 00:23:14 04/Aug/11
Posted 02:15am 04/8/11
Also this line:
Makes him sound arrogant and ignorant.
Posted 08:04am 04/8/11
Oh f*** that's got to be one huge teaser right at the bottom! LOL
Even Carmack thought it would be one of these two methods? But it's neither? ZOMG
Posted 08:31am 04/8/11
Surely one of the first things an artist would do to add diversity to a demo scene would be to scale and rotate the assets they have to spice things up. That they don't appear to have done this makes it seem like there's some kind of memory limitation in their implementation.
It's all speculation of course. But what else can you really do with the limited amount of info provided?
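[Editor's note: to make the speculation above concrete, in a conventional instanced renderer varying an asset is nearly free, because each copy is just one 4x4 transform matrix that encodes scale, rotation and position. The back-of-the-envelope figures below are hypothetical, chosen only to show the scale of the difference, and are not taken from Euclideon's demo.]

```python
# Back-of-the-envelope: per-instance transforms vs duplicating geometry.
# All figures are illustrative assumptions, not Euclideon's numbers.

BYTES_PER_POINT = 16          # x, y, z, colour packed naively
POINTS_PER_TREE = 1_000_000   # hypothetical scanned tree asset
INSTANCES = 10_000            # copies of the tree in the scene

# Instancing: one copy of the geometry plus a 4x4 float transform
# (16 floats * 4 bytes = 64 bytes) per instance.
instanced = POINTS_PER_TREE * BYTES_PER_POINT + INSTANCES * 64

# Duplication: every instance stores its own transformed points.
duplicated = INSTANCES * POINTS_PER_TREE * BYTES_PER_POINT

print(instanced // 2**20, "MiB with instancing vs",
      duplicated // 2**30, "GiB duplicated")
```

Since the transform cost is trivial, a renderer that could apply arbitrary per-instance rotation and scale would have no reason not to; the demo's identically oriented copies are what makes people suspect the data structure only supports cheap translation.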
Posted 11:01am 04/8/11
Or maybe it was the way that all the objects in the demo were static with no collision detection, and looked kind of like the effect you get when you stand between two mirrors. I guess time will tell. I'd like to see a great leap in 3D performance, but without a great leap in the computational power of CPUs and GPUs I can't fathom how that would be possible. I can't think of any technology that has ever been invented that provided a 100,000-fold improvement on its direct predecessor; sounds a bit like Doctor Who's TARDIS to me, lol.
Posted 02:09pm 04/8/11
While in its most basic form it's just applying a height or displacement map to some vertices to make them "bumpy", one of the biggest aspects of it in DX11 is that it can dynamically create new vertices and new geometry on the fly. You can have a model which is extremely low poly, like a couple of hundred triangles, and make it look super, super detailed by generating all that extra detail on the fly with tessellation.
In the Nvidia demo I posted in the other thread, with the city, the guy was saying it's generating something like 8GB of vertex/index data per second to create the scene. You just can't have a scene with that level of complexity unless it's generated on the card, because (apart from the data storage requirements) the PCI Express bus isn't fast enough to handle that much throughput. As the tessellation units on video cards get better, you'll be able to generate all that extra detail and all that extra geometry without a significant performance hit, and it'll become as routine as pixel shaders and vertex shaders are today. DX11 tessellation adds its own new pipeline stages for this (the hull and domain shaders), building on the geometry shader that was introduced in DX10.
Was trying to find some screenshots from the Nvidia Endless City demo that show it with tessellation on and off, cos it gives a really impressive example. With tessellation off, the statues on the buildings are not much more than crudely shaped blocks made out of maybe 30 triangles; with tessellation on, they're intricately sculpted statues. Pretty cool stuff, and much more than just a "bumpy picture".
http://www.nvidia.com/object/tessellation.html is not a bad read either
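[Editor's note: the on-the-fly geometry amplification described above can be sketched on the CPU: start with a coarse triangle, recursively split it into smaller triangles, and displace each vertex by a height function standing in for a displacement map. This is only an illustrative analogue; real DX11 tessellation does the equivalent work on the GPU in the hull and domain shader stages, and the `height` function here is an invented stand-in.]

```python
# CPU sketch of tessellation with displacement: one coarse triangle is
# amplified into many, with each vertex pushed by a height function.
import math

def midpoint(a, b):
    return tuple((p + q) / 2 for p, q in zip(a, b))

def height(x, y):
    # Stand-in for sampling a displacement map texture.
    return 0.05 * math.sin(8 * x) * math.cos(8 * y)

def displace(v):
    x, y, _ = v
    return (x, y, height(x, y))

def tessellate(tri, levels):
    """Split one triangle into 4 per level (4**levels total),
    displacing every vertex of the final triangles."""
    if levels == 0:
        return [tuple(displace(v) for v in tri)]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for sub in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(tessellate(sub, levels - 1))
    return out

coarse = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
tris = tessellate(coarse, 4)   # 1 coarse triangle -> 256 detailed ones
```

The point of doing this on the card is exactly the commenter's: only the coarse triangle and the displacement map cross the bus, while the amplified geometry is generated and consumed on the GPU.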
Posted 07:22pm 04/8/11
Why surface now with a half-baked video and excuses?
"we ran outta time to make it better...right after we did this video lighting looked way more awesome...our artist is more overworked than ya mum on a Saturday night"
It just smells like a big investor pitch for a cash injection and if the demo is anything to go by it looks like polygons are here to stay for a while yet.
Posted 07:08pm 04/8/11
I'm waiting to see a blue police box flying over the city and a charmingly eccentric guy running about in a suit and Chuck Taylors.
Posted 03:05pm 05/12/11
The Euclideon guys have recently given a presentation to SSSI (the Surveying and Spatial Sciences Institute) about using their software for ALS (Airborne Laser Scanning) data: millions of points. I've only seen the 'this is how cool our stuff works' video that they gave out, but it's looking pretty cool.
Higher-ups here are trying to organise access for them to all the ALS data for LCC so we can arrange a live demo. Would be really cool if I could see it up and running on the screen. Currently it sounds like they need to build the tools to make it useful in a geospatial sense (measurement, grade etc.), but it's an awesome application for real-world data coming off the back of something that was designed for gamers.
It sounds like they have been given some cash from the Australian government to make the transition from a games focus to a useful industry tool. I'm hoping that when/if we do get them some of our ALS data they will let us have a little bit of a live play in their program.
Posted 02:57pm 05/12/11
I think...
Posted 03:04pm 05/12/11
LCC = Logan City Council.