[Fiware-miwi] A Candidate AssetPipe: glTF aka. Collada JSON

Toni Alatalo toni at playsign.net
Mon Dec 2 11:09:49 CET 2013


On 28 Nov 2013, at 12:03, Kristian Sons <kristian.sons at dfki.de> wrote:
> As also mentioned in the paper, I see three issues:
> 1. The minimum number of requests is given by the number of meshes. For scenes with many meshes, this will be the bottleneck. Additionally, this minimum can only be reached if the attribute data is interleaved and not indexed. I don't know the status of current exporters, but I guess most exporters would only support interleaving for very specific signatures

I think this has been addressed: there is a single file for all meshes, in the case of our Oulu test scene: http://playsign.tklapp.com:8000/glTF-webgl-viewer/model/oulu/MastersceneBlendercompression.bin

So for the geometry it is 2 requests for the whole scene: the scene JSON, which includes the materials, and the geometry binary. Then there are additional fetches for the texture images, but that’s normal and fine AFAIK (assuming small individual textures are packed into larger texture atlas images anyway). Shader sources are also separate requests now, but typically there aren’t that many; if that becomes a problem I suppose they could be joined or embedded in the JSON.

Or do you mean that even when the data is in a single file there will be multiple requests for the different arrays in it, for vertices, normals etc.? I’ve been assuming that’s not the case, but I was not able to verify it with the Chrome debugger yet.
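
To illustrate what I’ve been assuming, here is a minimal sketch (TypeScript, with made-up bufferView offsets rather than the actual Oulu scene values): the whole .bin arrives in a single request and the per-mesh arrays are just local typed-array views into that one ArrayBuffer, so no additional requests are needed.

    // Minimal sketch: one XHR for the shared .bin, then per-mesh typed-array
    // views created locally from byteOffset/byteLength. The offsets below are
    // hypothetical, not taken from the Oulu scene JSON.
    interface BufferView { byteOffset: number; byteLength: number; }

    const bufferViews: { [name: string]: BufferView } = {
        "bufferView_positions": { byteOffset: 0,     byteLength: 36000 },
        "bufferView_indices":   { byteOffset: 36000, byteLength: 6000  }
    };

    const xhr = new XMLHttpRequest();
    xhr.open("GET", "model/oulu/MastersceneBlendercompression.bin");
    xhr.responseType = "arraybuffer";   // one binary request for all meshes
    xhr.onload = () => {
        const buffer = xhr.response as ArrayBuffer;
        // No further requests: each array is just a view into the same buffer.
        const positions = new Float32Array(buffer,
            bufferViews["bufferView_positions"].byteOffset,
            bufferViews["bufferView_positions"].byteLength / 4);
        const indices = new Uint16Array(buffer,
            bufferViews["bufferView_indices"].byteOffset,
            bufferViews["bufferView_indices"].byteLength / 2);
        console.log(positions.length, indices.length);
    };
    xhr.send();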

In the scene JSON (http://playsign.tklapp.com:8000/glTF-webgl-viewer/model/oulu/mastersceneBlender.json) the references to the mesh geometry arrays in that one big buffer look like this:
"PallasBlock-mesh": {
            "extensions": {
                "Open3DGC-compression": {
                    "compressedData": {
                        "bufferView": "bufferView_2669",
                        "byteOffset": 603141,
                        "count": 9417,
                        "floatAttributesIndexes": {
                            "attribute_861": 0
                        },
                        "indicesCount": 1053,
                        "mode": "binary",
                        "type": 5121,
                        "verticesCount": 1053
                    }
                }
            },
            "name": "PallasBlock",
…
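
For what it’s worth, here is a sketch of what I assume the loader does with such a compressedData entry: slice the shared buffer at the given offset and size and hand the blob to the Open3DGC decoder. The decodeOpen3DGC function and the exact offset arithmetic are my assumptions; the real decoder is the Open3DGC JS code used by the webgl loader.

    interface CompressedData {
        bufferView: string;
        byteOffset: number;     // offset, I assume relative to the bufferView
        count: number;          // compressed size in bytes (type 5121 = UNSIGNED_BYTE)
        indicesCount: number;
        verticesCount: number;
    }

    // Hypothetical decoder entry point standing in for the Open3DGC JS decoder.
    declare function decodeOpen3DGC(
        blob: Uint8Array, indicesCount: number, verticesCount: number
    ): { positions: Float32Array; indices: Uint16Array };

    function decodeMesh(sharedBuffer: ArrayBuffer, bufferViewOffset: number,
                        cd: CompressedData) {
        // bufferViewOffset is where the referenced bufferView starts in the .bin
        const blob = new Uint8Array(sharedBuffer, bufferViewOffset + cd.byteOffset, cd.count);
        return decodeOpen3DGC(blob, cd.indicesCount, cd.verticesCount);
    }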

As for exporters, the point is that the official COLLADA2glTF converter is used to create the files for transfer, so only one implementation of the optimal data organization is needed.

> 2. The registry for compressors is way better than a fixed set of compression methods, and Khronos has a good practice of extension registries. However, I think that 3D data is so inhomogeneous that even this is not flexible enough. Instead we propose a "code on demand" approach, which means that a fallback decoder implementation could be downloaded if the rendering system has no built-in (possibly HW-accelerated) decompressor for a data set. Neil liked this idea; I don't know whether glTF has moved in this direction

Ok. I didn’t see indications of this yet; Erno looked closer so he can perhaps fill in, but I think the whole compression plugin system was still to be revisited. Anyhow I’m really happy that the Open3DGC compression is already there, both in the converter for encoding and in the webgl loader for decoding, so we have something we could use already.

Perhaps your proposed code-on-demand is an improvement for a bit later? Actually the idea of having a registry / extensions does not seem that far from also being able to load the decoder on demand.
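
Just to sketch how I picture the two combining, assuming a hypothetical decoderUrl field next to the extension name and a global registration convention (neither is part of glTF as far as I know):

    // Sketch of code-on-demand on top of an extension registry: if there is no
    // built-in decoder for the named compression extension, fetch a JS fallback
    // at runtime. decoderUrl and the registration convention are assumptions.
    const registeredDecoders: { [extension: string]: Function } = {};

    function getDecoder(extension: string, decoderUrl: string): Promise<Function> {
        if (registeredDecoders[extension]) {
            return Promise.resolve(registeredDecoders[extension]);
        }
        return new Promise((resolve, reject) => {
            const script = document.createElement("script");
            script.src = decoderUrl;   // e.g. a fallback Open3DGC decoder in JS
            script.onload = () => resolve(registeredDecoders[extension]); // script registers itself
            script.onerror = reject;
            document.head.appendChild(script);
        });
    }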

> 3. Using the GL buffers as common ground for all mesh formats is a good idea. However, finding a suitable abstraction layer for materials and animations is way harder and not even solved in a high abstraction format such as COLLADA

My assumption here has been that COLLADA solves this to some extent and glTF carries over the same. The basics seem to work and arbitrary shaders are AFAIK supported, so again this looks like something we could use. Certainly what you say is true, but I think we can live with a limited solution first: for example, support for basic materials, custom shaders, and enough animation support for basic skeletal animations to work.
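
For the "arbitrary shaders" part, my working assumption is that on the client it boils down to something like this: the scene JSON points at vertex/fragment shader sources and parameter values, and the viewer compiles and links them as-is (a generic WebGL sketch, not the actual glTF webgl-viewer code):

    function compileProgram(gl: WebGLRenderingContext,
                            vsSource: string, fsSource: string): WebGLProgram {
        const compile = (type: number, src: string): WebGLShader => {
            const shader = gl.createShader(type)!;
            gl.shaderSource(shader, src);
            gl.compileShader(shader);
            if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
                throw new Error(gl.getShaderInfoLog(shader) || "shader compile failed");
            }
            return shader;
        };
        const program = gl.createProgram()!;
        gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
        gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
        gl.linkProgram(program);
        return program;
    }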

> BTW, it would probably be easy (and interesting) to implement a glTF plug-in for XML3D.

Agreed.

> Jan (in CC) is a PhD student in our group who started to design and implement a transmission format that addresses the requirements we collected in the position paper above. It is very generic and allows streaming structured binary data and decoding it in parallel. Currently it's not possible to read chunks of a request if ArrayBuffers are requested (this is only possible for text requests). We identified this issue in the W3C CG together with Fraunhofer IGD (and some audio guys who are interested in the same feature). If we had this, it would be perfect because everything could be in one request.

Ok, great. I still need to read that paper properly and will probably discuss the specifics you mention there with Erno.
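
To check that I understood the chunking limitation you mention, this is how I picture it (a rough TypeScript sketch, URL made up): with a text response the partial data can be consumed in onprogress, while with an ArrayBuffer response nothing is available until onload, so one big binary request cannot be decoded while it streams in.

    const xhr = new XMLHttpRequest();
    xhr.open("GET", "model/scene.bin");
    xhr.responseType = "text";   // "" / "text" allows reading partial data
    let consumed = 0;
    xhr.onprogress = () => {
        const chunk = xhr.responseText.substring(consumed);  // new data so far
        consumed = xhr.responseText.length;
        // ...decode chunk incrementally...
    };
    // With xhr.responseType = "arraybuffer", xhr.response stays null until the
    // request is DONE, so there is nothing to decode incrementally.
    xhr.send();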

Regarding the roadmap, I’ve been thinking we should have *some* recommended asset pipeline / scene & mesh format very soon (within weeks). Ideally, further down that road we’d then adopt the advances you have described.

glTF still seems like a valid candidate for the immediate / near term. I’m still a bit unsure, though, whether I understood your concerns correctly and how far glTF has addressed them. We also still need to investigate and test materials and animations (the git log of the three.js loader suggested it already got animation support this fall, but we don’t really know yet).

For the long run, have you considered it a suitable umbrella / framework where your advances could be integrated and standardized later on? Or do you consider it somehow flawed or impractical so that your solutions won’t fit in, and would you hence propose that we not adopt it for FI-WARE and/or realXtend?

I’ve been wondering whether it really means anything that something is proposed and driven by Khronos. It sounds great and looks convincing on the surface, but is the work still up to random volunteers who may or may not complete it on some uncertain schedule? They are behind the schedules planned in the spring and again in the later autumn SIGGRAPH talks. If the basics and the framework are good we can help there and take care of the required implementations ourselves, so that’s not really a problem for us; it’s just something I’ve been wondering about.

> In a next step, we will implement some encoders/decoders. We want to show that some of the decoding could be done in the vertex shader (e.g. decoding of quantized vertex attributes), while other parts could be decoded using Web Workers/Xflow and/or WebCL. Neil is also very open to good technical solutions.

Cool. Such parallel number crunching indeed does not seem like the best task for normal browser JS, though it always surprises me how quickly, for example, that Oulu scene shows up from the compressed Open3DGC data with the current solutions (same with the OpenCTM tests earlier); I suppose the JIT compilers do well there.
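
In case it helps the discussion, a small sketch of the Web Worker variant as I picture it, using transferables to avoid copies; decodeWorker.js is a hypothetical worker script wrapping e.g. the Open3DGC JS decoder.

    const worker = new Worker("decodeWorker.js");

    function decodeInWorker(compressed: ArrayBuffer):
            Promise<{ positions: Float32Array; indices: Uint16Array }> {
        return new Promise(resolve => {
            worker.onmessage = (e: MessageEvent) => resolve(e.data);
            // transfer ownership of the buffer instead of copying it
            worker.postMessage({ compressed }, [compressed]);
        });
    }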

> Best regards,
> Kristian

Thanks a lot for the info; it was great to hear that you have indeed been active in this area for a long time already,
~Toni

> -- 
> _______________________________________________________________________________
> 
> Kristian Sons
> Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI
> Agenten und Simulierte Realität
> Campus, Geb. D 3 2, Raum 0.77
> 66123 Saarbrücken, Germany
> 
> Phone: +49 681 85775-3833
> Phone: +49 681 302-3833
> Fax:   +49 681 85775–2235
> kristian.sons at dfki.de
> http://www.xml3d.org
> 
> Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
> Dr. Walter Olthoff
> 
> Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
> Amtsgericht Kaiserslautern, HRB 2313
> _______________________________________________________________________________
> 
