Euclideon, an Australian company based in Brisbane, has made the remarkable claim that they've developed a new graphics rendering technology for video games that is "100,000 times better" than existing systems.
After their first announcement around a year ago, they dropped off the radar, and have recently resurfaced with a new statement as part of a video showcasing their new technology, built with nearly $2 million of assistance from the Australian Government.
While it is a bit light on the technical details, it is intriguing:
There is a better way to do computer graphics, which is used in medicine and the sciences. The better way to do computer graphics is to make everything out of tiny little atoms instead of flat panels. The problem is that this particular system uses up a lot of processing power. The more objects you have on the screen, the slower your computer will run. Having four or five detailed objects will run just fine, but you certainly can't do a level of a game.
We got a lot of attention because we made the claim that we could run unlimited "little 3D atoms" in real time.
The video below showcases a test island they've created, apparently comprising over 21 trillion polygons and running at 20 FPS.
They're apparently a few months away from having an SDK available for game developers to start tinkering with the new technology, so we'll have to wait a bit and see. In the meantime, as AusGamers is also predominantly Brisbane-based, we've contacted them to see if we can arrange a chat to discuss more about their new technology.
We're firm believers in the mantra of Carl Sagan - "extraordinary claims require extraordinary evidence" - so we look forward to seeing more from Euclideon about their new technology soon!
Well, they've addressed the first issue I had with their claims last time: the existing established polygon pipeline isn't going to be abandoned anytime soon, so they'd need to be able to convert to their format, which it looks like they've done.
The other issue is lighting, which they say they've addressed... by adding ambient occlusion. But I still only see one light source and shadow. There's also no difference in surface types: no specularity, translucency, etc.
That is stunning
It has possibilities beyond gaming.
But would it put an end to continual graphics card upgrades?
Would all games effectively have unlimited resolution?
There are a lot of technical barriers I can see. I am massively out of the loop on graphics hardware, but 21 trillion polygons is a lot, a lot, of data. Some game-dev type might know the answer to this - are all polygons "triangles"? If so, then we're talking ~60+ trillion points to track. I don't know how this stuff is stored in data structures, but that seems like a pretty phenomenal amount of data to store on disk, let alone load into memory - so that must be a significant part of the technology?
Mentioned previously, another limitation may be animation - either combining animations, or mesh-morphing animations. Rotating individual elements may be the only effect possible, which is super lame.
If meshes can be combined with static scenarios made of this stuff then it would be good. But that may not be as easy as one might hope, from what I understand of each method (they'd essentially be operating in two separate universe types, hard to bridge between the two for shadows/occlusion/hit-tests/etc I imagine).
Perhaps they use some sort of template system? i.e. a master point that contains the x, y, z together with further information containing the framework for the placement of many other points? A hologram of sorts?
21 trillion points in space = x, y, z coordinates = generally 32-bit floating point = 12 bytes per point = 234,693 GB of data uncompressed?
Er........
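The arithmetic above holds up; a quick sanity check (assuming, as the post does, three 32-bit floats per point and the demo's quoted 21 trillion figure):

```python
# Back-of-envelope storage for 21 trillion raw points, positions only.
points = 21 * 10**12
bytes_per_point = 3 * 4                     # x, y, z as 32-bit floats
total_bytes = points * bytes_per_point      # 252 TB raw
print(f"{total_bytes / 1024**3:,.0f} GiB")  # ~234,693 GiB
```

And that's before any per-point colour or normal data, which would multiply it further.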
If their demos are real then they must have some method of handling that volume of data in real time.
I'd imagine that it would be ideal for procedural generation (pick a key at design time for each object) to get sub-detail (though I think DirectX 11 tessellation may be aiming at the same thing).
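The "pick a key at design time" idea is just seeded pseudo-randomness: store one small seed per object and regenerate the fine detail on demand instead of storing it. A toy sketch (the function name and the shape of the "detail" are made up for illustration):

```python
import random

def sub_detail(seed, n=4):
    """Deterministically regenerate n surface-detail offsets from a stored seed."""
    rng = random.Random(seed)
    return [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
            for _ in range(n)]

# The same seed always reproduces the same detail, so only the seed is stored.
assert sub_detail(42) == sub_detail(42)
assert sub_detail(42) != sub_detail(43)
```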
Haha, just saw this as top news post on Hacker News Mobile :D
Yeah it sounds like an awful lot of bulls*** but I sure hope it doesn't turn out to be! The renders in the actual movie are very nice though, that's for certain. Agree with Nerf, dynamic meshes are one problem and also don't forget physics. Most game engines employ physics at a large time integration step and rely on collision detection based on polygons (e.g., tria and ray or tria edge and tria edge, etc).
In the Hacker News Comments the very first one points out as well that it's useless until there is a video of a blade of grass moving and a shadow map or dynamic lighting effect being demonstrated.
Q: How did you overcome the data storage and computational problems involved in dealing with trillions of points in space?
Q: The sample video seemed to show a lot of replicated objects and terrain features. How well is the technology able to scale to varied terrain and many different objects?
Q: Does rendering in terms of "atoms" instead of polygons make it easier to implement destructible type environments that polygon mesh engines typically have difficulty dealing with?
Q: I noticed in the video he says that it is running in software, does this mean all the work is done by the CPU, this engine cannot be hardware accelerated (similar to voxels)? Does that mean anyone who wants to run this is going to need an insanely powerful CPU and their video cards would sit there essentially idle?
Q: A lot of what he covered, with the realistic looking trees, and ground, and rocks, can be achieved with tessellation under DirectX 11, how is this better?
Q: Is this going to dovetail in with existing technology at all, or is it entirely its own thing? I mean, can pixel or vertex shaders be used on these polygons that aren't polygons?
Q: I noticed in the video he says that it is running in software, does this mean all the work is done by the CPU, this engine cannot be hardware accelerated (similar to voxels)? Does that mean anyone who wants to run this is going to need an insanely powerful CPU and their video cards would sit there essentially idle?
This was answered in past vids but worth asking again for more details. They said it uses a search algorithm to decide what pixels need to be shown and when, so the number of pixels on screen is all it needs to display at any one time. They likened it to the way Google can find ass loads of results in milliseconds. Watch from about 6:25.
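For what it's worth, that "search" description matches how a sparse voxel octree lookup works (Notch makes the same guess further down the thread): position is encoded in the path through the tree, so finding what occupies a point costs O(depth), not O(atom count). A minimal sketch, certainly not their actual code:

```python
def query(node, x, y, z, size):
    """Look up the voxel at integer (x, y, z) in a cube of side `size`
    (a power of two no smaller than the tree depth allows). `node` is
    None (empty), a leaf value, or a dict of up to 8 children keyed
    0-7 by octant."""
    while isinstance(node, dict):
        size //= 2
        # Octant bits: bit 0 = x in upper half, bit 1 = y, bit 2 = z.
        octant = (x >= size) | ((y >= size) << 1) | ((z >= size) << 2)
        x, y, z = x % size, y % size, z % size
        node = node.get(octant)
    return node  # leaf colour, or None if nothing is there

# A 4x4x4 space with a single "rock" voxel at (3, 0, 0):
tree = {1: {1: "rock"}}
print(query(tree, 3, 0, 0, 4))  # rock
print(query(tree, 0, 0, 0, 4))  # None
```

Empty regions cost nothing to store, which is presumably part of how the data problem gets tamed.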
Q: Collision calculations to the trillions? Is your approach even possible with animations and collisions?
I see this pop up every few months and just shake my head. It's not feasible to have dynamic scenes (ie, animation) and dynamic lighting at a reasonable framerate and in realtime and within memory constraints on current hardware, and won't be for years to come. It seriously is just spreading silly hype and is rather disappointing to see this as news - I thought people here knew better.
In no way, shape or form should this be newsworthy content. Effectively, this s*** is like fox news for dumb americans - you just suck it up. OMG UNLIMITED POWER!!!11oneonehyperbole
if you read my post, perhaps you might notice the somewhat skeptical attitude I presented. Something I did specifically because I knew when this gets picked up on other sources it is going to be the circle-jerk you describe.
Further, we're chasing it up with them to get more information to find out exactly how real it is, something I also mentioned in the post. So while you can hate on us for reporting it at all, I like to think that we've done it in a way that is not only responsible and skeptical but one that encourages readers to know that we're not just blindly posting press releases, but following things up to verify them (you know, sort of like journalists should do).
Q: A lot of what he covered, with the realistic looking trees, and ground, and rocks, can be achieved with tessellation under DirectX 11, how is this better?
Tessellation is still limited to the detail of the normal map, and in the end, it's still just a polygon mesh. It's not particularly cheap, either. I have a reasonably high-end card, but enabling tessellation delivers a fairly large drop in framerate.
I'm willing to give them benefit of the doubt that what they're showing is exactly what they claim it to be, but I have other concerns. Storing that amount of data is going to be f*****g enormous. Hopefully they have some clever compression algorithms or something to help with that, otherwise storing the locations of trillions of points in space is going to be insane.
The other is animation. I would think that calculating new positions for millions of points on every frame is going to be pretty draining on the CPU.
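As a rough sense of scale (the point count and frame rate here are my assumptions, not Euclideon's numbers): a rigid 3x3 rotation is about 15 floating-point ops per point, so even one animated million-point character is around a GFLOP per second before you touch deformation or skinning:

```python
# Per-frame cost of rigid-transforming an animated point cloud.
points = 1_000_000       # one modestly detailed animated character (assumed)
fps = 60
flops_per_point = 15     # 3x3 rotation: 9 multiplies + 6 adds
total = points * fps * flops_per_point
print(f"{total / 1e9:.1f} GFLOP/s")  # 0.9
```

Trivial for a GPU, but they say this runs in software on the CPU, and that's for a single character.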
Tessellation is still limited to the detail of the normal map, and in the end, it's still just a polygon mesh. It's not particularly cheap, either. I have a reasonably high-end card, but enabling tessellation delivers a fairly large drop in framerate.
I've seen examples of trees, walls, ground, etc. that look just as good as that demo in the OP, using tessellation. And it's not just used for displacement mapping/normal mapping/bump mapping type effects: it can be used to smoothly and dynamically change the LOD of a model over distance (eliminating the pop-in/pop-out effect a lot of LOD currently has), it can be used to make a game dynamically scalable to work on a wide range of hardware without artists having to create each level of LOD by hand, and it can be used as a smoothing technique to smooth out curved surfaces and the like. Personally I think it's the most exciting thing to happen to graphics tech since shaders first hit the scene.
And it's only a new technology; cards have only really started coming out in the last 6-12 months that focus on it. My old Radeon 5900, for example, would take a pretty big hit with tessellation on, something like a 40-50% hit, but the new card I got only takes a 15-20% hit with tessellation on. As newer and better cards are released, with better/more tessellation units, performance will become less and less of an issue, until it's at the same level pixel and vertex shaders are today.
With the technology (DX11 and tessellation, that is) still relatively young, stuff already looking as good as it does, and performance starting to catch up, it seems like a much better path to go down than something which isn't even hardware accelerated and can't even match the capabilities of what the previous generation of DirectX cards could do.
They would have to deliver something absolutely amazing, something that isn't just a tech demo that focuses on one thing (OMG look how many polys we have, its UNLIMITED!!!!) to change the direction that real-time computer graphics are going in.
A ray of light, tracing one out from every screen pixel (from the camera) until it "hits" something (an object, like an enemy eyeball or something), so you know what to draw there.
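That per-pixel search is ray casting. Over a voxel grid, the naive version just marches along the ray until it lands in an occupied cell; a deliberately crude sketch (real renderers use DDA grid traversal or octree descent instead of fixed steps):

```python
def cast_ray(grid, origin, direction, max_dist=100.0, step=0.1):
    """March along the ray in small steps; return the first occupied voxel."""
    x, y, z = origin
    dx, dy, dz = direction
    t = 0.0
    while t < max_dist:
        voxel = (int(x), int(y), int(z))
        if voxel in grid:
            return voxel              # something to draw at this pixel
        x, y, z = x + dx * step, y + dy * step, z + dz * step
        t += step
    return None                       # ray escaped the scene: draw sky

grid = {(5, 0, 0): "rock"}
print(cast_ray(grid, (0.5, 0.5, 0.5), (1, 0, 0)))  # (5, 0, 0)
```

Do that once per screen pixel and the cost scales with resolution rather than with how many atoms exist, which is presumably what the "unlimited" claim leans on.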
Yeah, the "We aren't artists" thing seemed kind of retarded, if you're doing an engine demonstration and the main thing you're trying to demonstrate is how pretty everything looks, wouldn't you at least hire an artist to make it look its best?
I'm gonna bang the tessellation drum some more, cos I found some cool videos on youtube when I went looking, that I hadn't seen before.
Nvidia demo of a procedurally generated, heavily tessellated endless city (OMG its infinite! Infinite polygons!)
This one is pretty cool, shows a more dynamic use of tessellation on game characters. If you get bored, skip forward to 3:20 to see how they do really cool things with the internals of the character, but the whole thing is a pretty cool video.
And a video demonstration of tessellation wouldn't be complete without the Unigine Heaven demo. Keep in mind this now almost two years old, and it still (in my opinion) looks absolutely beautiful.
I read on another forum that this technology would make it hugely challenging to produce life-like animation.
For what reason, I don't know. But I'm keen to hear more info from this company.
Reading the slashdot comments, it's all coming back to me now. I spent a few weeks last year trying to write a voxel renderer in JavaScript, until I eventually abandoned the idea. I think that I got onto that approach after seeing the previous "unlimited detail" vid in fact.
Atomontage is one which I think I linked here or on Facebook back then, and even in their old vids they appear to be ahead of where these guys are now (that being said, creating buzz is an important part of the process as well, I guess ^_^).
I don't think that Notch has necessarily proven to be an amazing coder himself anyway; more so just the lucky creator of a fad which caught on. Not saying that he's wrong, just... it seems big of him to now speak as an authority on everything computer game programming related, seeing as how unimpressive his own work appears to be in that area (just the graphics work, not the rest of it, and yes, I do appreciate style).
I presume that AG is still going ahead with some form of interview regardless, so it should at least make for interesting news if they do/don't see this stuff in action (and ask up on some of this stuff).
What's to say you can't use polygon characters on voxel terrain anyway?
You probably could, but you'd be rendering them in software mode, cos voxels don't work with hardware acceleration. So I don't imagine you'd get very good performance.
I imagine that draw order for each pixel would be a b****. :/
That being said, I recall now that that was Carmack's intended approach for id Tech 5 or 6, and it is possibly even what was being done in Rage (if I wasn't so immensely uninterested in id's shooter games, I might know more).
While I understand that they could display all the 'atoms' in view efficiently, I haven't seen much mentioned anywhere about the 'atoms' behind the scenes. Physics events would be interacting with an awful lot of these 'atoms'?
I didn't see Notch's comments as 'douche baggery' - I thought all his points were pretty valid. He probably just needed to drop the name-calling (i.e., "snake oil salesman") though to be a bit more credible.
“No! No, this isn’t a hoax,” Bruce Dell laughs, in response to our first, obvious question. “If this was a hoax then we’ve convinced the Australian government it was a hoax.
“We have a government grant – so no, it is not a hoax! We have real time demonstrations.”
Surely it's not that hard to get the Australian government to throw money at you these days?
Don't you just need photocopied proof of residence in a marginal federal Labor seat?
Yeah, and that other developer that had similar concerns is a d***??
Door, it might be worth re-reading; Notch has a valid point. Sure, it is something that will be overcome with investment, but the procedures just aren't in place yet (thus making animations harder to do on this system than they are now).
All Notch is saying is that it is far easier to code animations currently (and Notch knows all about coding things that are hard... by avoiding it).
When the tools make interfacing with the "atoms" to construct animations easy, then he might change his tune, but it is mind-boggling how much processing would be involved in animation in this format.
My response was regarding his 'scam' and 'snake oil salesman' comments. Notch has valid points, but that does not override the manner in which they were projected (douchebaggedly).
I'm lazy, and you didn't link Notch's remarks; they are only mentioned by the writer.
Not sure what the context of the original remarks was (because writers don't ever edit things to beat up stories...).
Without viewing this, I still don't think the douchebag tag is earned
(if his comments are likening Euclideon to a snake oil salesman, then it is kinda true (at this point in time)).
Euclideon has made promises, and has shown all the good things its oil can do, but has not yet delivered a product that lives up to its claims.
“But Notch, it’s NOT a scam!”
I’ve been getting a bunch of feedback that my last blog post is wrong for various reasons, and I’d just like to say that I would absolutely LOVE to be proven wrong. Being wrong is awesome, that’s how you learn.
If you want to read my reasoning behind various assumptions, click “read more”.
Why I assume it’s voxels and not point clouds:
* Voxels store only the information about each point, and their positions are implicit in the location of where the voxel is stored. Point cloud data stores both the information about each point and the position of each point.
* They mention “64 atoms per cubic millimeter”, which is 4*4*4 points per mm^3. While it’s possible they only refer to the sampling frequency for turning polygonal structures into point data, the numbers are just too round for me to ignore as a programmer.
* All repeated structures in the world are all facing the same direction. To me, that means they aren’t able to easily rotate them arbitrarily.
About the size calculation:
* I was trying to show that there was no way there was that much UNIQUE data in the world, and that everything had to be made up of repeated chunks.
* One byte per voxel is way lower than the raw data you’d need. In reality, you’d probably want to track at least 24 bits of color and eight bits of normal vector data per voxel. That’s four times as much data. It’s quite possible you’d want to track even more data.
* If the data compresses down to 1%, it would still be 1,700 three-terabyte hard drives of data at one byte of raw data per voxel.
Animated voxels:
* Holy crap, people sent me videos of this actually being done!
* I was wrong! :D
* http://www.youtube.com/watch?v=tkn6ubbp1SE
* (But please note that just that single animated character runs at 36 fps)
Why it’s a scam:
* They pretend like they’re doing something new and unique, but in reality a lot of people are researching this. There are a lot of known draw-backs to doing this.
* They refuse to address the known flaws. They don’t show non-repeated architecture, they don’t show animation, they don’t show rotated geometry, and they don’t show dynamic lighting.
* They invent new terminology and use superlatives and plenty of unverifiable claims.
* They say it’s a “search algorithm”. That’s just semantics to confuse the issue. Sparse voxel octrees is a search algorithm to do very fast ray casting in a voxel space.
* They seem to be doing some very impressive voxel rendering stuff, which could absolutely be used to make very interesting games, but it’s not as great as they claim it is. The only reason I can see for them misrepresenting it this bad is that I assume they’re looking for funding and/or to get bought up.
If these guys were being honest with the drawbacks and weaknesses of their system, I’d be their biggest fan. As it is now, it’s almost like they’re trying NOT to be trustworthy.
All this said, voxels are amazing. So is raytracing and raycasting. As computers get more powerful, and storage gets faster and cheaper, we will see amazing things happen.
And a final word to the engineers who worked on this: Great job, I am impressed! But please tell your marketing department to stop lying. ;)
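For what it's worth, Notch's "1,700 three-terabyte hard drives" figure reproduces almost exactly if you assume the island is 1 km x 1 km sampled 8 m deep at the quoted 64 atoms per cubic millimetre; those dimensions are my guess to make the numbers land, since his post doesn't restate them:

```python
mm_per_km = 1_000_000
volume_mm3 = mm_per_km**2 * 8_000   # assumed 1 km x 1 km x 8 m, in cubic mm
raw_bytes = volume_mm3 * 64         # 64 atoms per mm^3, 1 byte per atom
compressed = raw_bytes * 0.01       # his generous 1% compression estimate
print(round(compressed / 3e12))     # ~1707 three-terabyte drives
```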
Posted 10:42am 02/8/11
Probably going to need a machine sent from the future to run a game using this!
nerf was 3 hours ahead of you, trog
Very interesting concept. Would be great to see the end of circles that aren't really circles!
Going as far as the individual dirt pebbles is pretty ridiculous.
Good stuff so far tho
Everything there is sitting completely still, which means they haven't gotten animation working yet either.
The other thing is that there simply isn't much variety shown there; sure, there is heaps of s***, but it's all just instances of a few items.
Also, "Unlimited" - You keep using that word. I do not think it means what you think it means.
Q: Do they think that they'll be able to merge the scenes with polygon objects? (if it'd help overcome those limitations)
Stole my question, but I'll add another.
Q: How will they handle transparency?
(wtf how did I not know he used Twitter)
There's this, though one person said that "the problem is that it's pre-computed" - though I can't tell if it morphs the mesh.
Hell no, this stuff is hard, I've even dabbled in it (polygons and voxels) and still don't know wtf is going on. :P
You know how they make Pixar animated movies, right? How it can take full render farms months to render an entire 2-hour movie?
Well, until it only takes 2 hours to render a 2-hour movie, this crap isn't happening.
They stated in one of their vids that this is not like ray tracing and that the issue with ray tracing is that it's very slow.
http://games.slashdot.org/story/11/08/02/0443250/Making-Graphics-In-Games-100000-Times-Better
Who the heck is "trawg" though - any relation to our "trog"?
One and the same, AFAIK. I guess it's his alternate nick for when 'trog' is already taken.
Given that I see an Xbox logo on a satchel bag in the foreground, I'd say you're about 10 years too early.
Nerf, did you forget your sarcasm hat today?
I think he meant 2003, from Reddit.
I know that they say that they aren't "artists", but everything seems just... meh.
P.S. When are these forums updating to something made this decade? :O
You probably could, but you'd be rendering them in software mode, cos voxels don't work with hardware acceleration. So I don't imagine you'd get very good performance.
Posted 04:01pm 03/8/11
That being said, I recall now that that was Carmack's intended approach for ID 5 or 6, and is possibly even what was being done in rage (if I wasn't so immensely uninterested in ID's shooter games, I might know more).
Posted 05:03pm 03/8/11
Posted 05:53pm 03/8/11
I didn't see Notch's comments as 'douche baggery' - I thought all his points were pretty valid. He probably just needed to drop the name-calling (i.e., "snake oil salesman") though to be a bit more credible.
Posted 09:19pm 03/8/11
Posted 09:24pm 03/8/11
Posted 09:40pm 03/8/11
Surely it's not that hard to get the Australian government to throw money at you these days?
Don't you just need photocopied proof of residence in a marginal federal Labor seat?
Posted 09:43pm 03/8/11
yeah, and that other developer that had similar concerns is a d***??
door, it might be worth re-reading; Notch has a valid point. Sure, it is something that will be overcome with investment, but the procedures just aren't in place yet (thus making animations harder to do on this system than they are now).
All Notch is saying is that it is far easier to code animations currently (and Notch knows all about coding things that are hard... by avoiding it).
When the tools make interfacing with the "atoms" to construct animations easy, then he might change his tune, but it is mind-boggling how much processing would be involved in animation in this format.
Posted 10:06pm 03/8/11
I'm lazy, and you didn't link Notch's remarks; they are only mentioned by the writer.
Not sure what the context of the original remarks was (because writers don't ever edit things and beat up stories).
Without viewing them, I still don't think the douchebag tag is earned.
(If his comments are likening Euclideon to a snake oil salesman, then it is kinda true (at this point in time).)
Euclideon has made promises and shown all the good things its oil can do, but has not yet delivered a product that lives up to its claims.
Posted 07:17am 04/8/11
""“But Notch, it’s NOT a scam!”
I’ve been getting a bunch of feedback that my last blog post is wrong for various reasons, and I’d just like to say that I would absolutely LOVE to be proven wrong. Being wrong is awesome, that’s how you learn.
If you want to read my reasoning behind various assumptions, click “read more”.
Why I assume it’s voxels and not point clouds:
* Voxels store only the information about each point, and their positions are implicit in the location of where the voxel is stored. Point cloud data stores both the information about each point and the position of each point.
* They mention “64 atoms per cubic millimeter”, which is 4*4*4 points per mm^3. While it’s possible they only refer to the sampling frequency for turning polygonal structures into point data, the numbers are just too round for me to ignore as a programmer.
* All repeated structures in the world are all facing the same direction. To me, that means they aren’t able to easily rotate them arbitrarily.
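Notch's first bullet above — implicit versus explicit positions — can be illustrated with a toy sketch (my own illustration, not Euclideon's actual format): a dense voxel grid gets each point's position for free from its array index, while a point cloud must spend bytes on every coordinate.

```python
import numpy as np

# Voxel grid: positions are implicit in the array index, so we only
# store the per-cell data (here, one colour byte per cell).
side = 16
voxel_grid = np.zeros((side, side, side), dtype=np.uint8)
voxel_grid[3, 7, 2] = 200  # colour of the cell at (3, 7, 2)

# Point cloud: every point carries its own coordinates plus its data,
# so each entry costs 3 floats (12 bytes) before you even store colour.
point_cloud = [((3.0, 7.0, 2.0), 200)]

# For a dense volume the voxel grid wins: 1 byte/cell vs 13+ bytes/point.
print(voxel_grid.nbytes)  # 16*16*16 = 4096 bytes
```

The trade-off reverses for sparse data, which is why real systems use hierarchical structures like octrees rather than flat grids.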
About the size calculation:
* I was trying to show that there was no way there was that much UNIQUE data in the world, and that everything had to be made up of repeated chunks.
* One byte per voxel is way lower than the raw data you’d need. In reality, you’d probably want to track at least 24 bits of color and eight bits of normal vector data per voxel. That’s four times as much data. It’s quite possible you’d want to track even more data.
* If the data compresses down to 1%, it would still be 1,700 three-terabyte hard drives of data at one byte of raw data per voxel.
Animated voxels:
* Holy crap, people sent me videos of this actually being done!
* I was wrong! :D
* http://www.youtube.com/watch?v=tkn6ubbp1SE
* (But please note that just that single animated character runs at 36 fps)
Why it’s a scam:
* They pretend like they’re doing something new and unique, but in reality a lot of people are researching this. There are a lot of known drawbacks to doing this.
* They refuse to address the known flaws. They don’t show non-repeated architecture, they don’t show animation, they don’t show rotated geometry, and they don’t show dynamic lighting.
* They invent new terminology and use superlatives and plenty of unverifiable claims.
* They say it’s a “search algorithm”. That’s just semantics to confuse the issue. Sparse voxel octrees is a search algorithm to do very fast ray casting in a voxel space.
* They seem to be doing some very impressive voxel rendering stuff, which could absolutely be used to make very interesting games, but it’s not as great as they claim it is. The only reason I can see for them misrepresenting it this bad is that I assume they’re looking for funding and/or to get bought up.
If these guys were being honest with the drawbacks and weaknesses of their system, I’d be their biggest fan. As it is now, it’s almost like they’re trying NOT to be trustworthy.
All this said, voxels are amazing. So is raytracing and raycasting. As computers get more powerful, and storage gets faster and cheaper, we will see amazing things happen.
And a final word to the engineers who worked on this: Great job, I am impressed! But please tell your marketing department to stop lying. ;)"
I think it is a sound argument.
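Notch's "search algorithm" point refers to sparse voxel octrees: a lookup descends the tree one octant at a time, skipping whole empty regions, which is what makes ray casting through huge voxel sets cheap. A minimal toy lookup (my own sketch under those assumptions, not Euclideon's code) looks roughly like this:

```python
# Toy sparse voxel octree: a node is either a leaf value (a voxel's
# colour) or a dict mapping octant index 0-7 to a child node. Missing
# octants are empty space, which is what makes the structure "sparse".
def lookup(node, x, y, z, size):
    """Return the voxel value at integer coords (x, y, z), or None if empty."""
    if not isinstance(node, dict):
        return node                     # leaf: a solid voxel's colour
    if size == 1:
        return None                     # malformed tree guard
    half = size // 2
    # Pick the child octant: one bit per axis, set when the coordinate
    # falls in the upper half of the current cube.
    octant = (x >= half) | ((y >= half) << 1) | ((z >= half) << 2)
    child = node.get(octant)
    if child is None:
        return None                     # empty octant: skip the whole region
    return lookup(child, x % half, y % half, z % half, half)

# An 8x8x8 volume containing a single red voxel at (5, 0, 0),
# reached via the octant path 1 -> 0 -> 1:
tree = {1: {0: {1: "red"}}}
```

For example, `lookup(tree, 5, 0, 0, 8)` returns `"red"`, while a query anywhere else returns `None` after at most three cheap comparisons — each level of depth doubles the resolution while empty space costs nothing to store or traverse.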