[Fiware-miwi] 3D UI: what should we use for rendering, and why?

Toni Alatalo toni at playsign.net
Sun Sep 29 09:22:25 CEST 2013


There is nothing up from the DFKI side for the 3D GE specs yet, so I don't know whether those texts will address this. However, I think it is time to raise this important question regarding the 3D UI Epic: what tech should we use for rendering, and why? If the GE spec texts answer this, great, as then we have both the question and the answer.

I know Philipp already once stated that he does not even want to discuss this, as XML3d was defined in the proposal. That is a misunderstanding, because XML3d means two things here. The proposal fixed (1) the spec: the XML schema and the way of DOM integration, chosen over e.g. X3D/X3DOM or TXML. But the proposals from all stages explicitly state that (2) the rendering technology is to be investigated -- that's why we had the discussions in Winterthur about, for example, whether it would work to use Three.js for rendering within the XML3d.js framework. This was also made explicit in the architecture diagram that we delivered in the negotiation phase in spring.

My main point here is: no matter what tech we decide to use, we must have a solid, explicit rationale for choosing it. When a realXtend user asks me why we chose that particular tech, I want to be able to explain it clearly and even point to a design rationale document with more details. It is too important a decision, in a very important project, to just overlook and go with whatever is at hand.

Regarding the goals of FI-WARE, in my understanding this is a business decision. That is: the purpose of FI-WARE is to boost Internet application business, and the evaluation criterion is the rate and success of adoption by developers. If someone knows a useful framework for evaluating business decisions, or has experience in doing this kind of analysis, please do tell. I'm only formally educated in tech (and humanities) and just a self-taught businessman (though for 19 years already :)

My simple view from a business perspective is this: when targeting business use, it helps to start the use as soon as possible. This way you get developers and even end users involved right at the beginning, get feedback on how things work for them, and can continue development in a more informed manner. A community is born and knowledge about your offering starts to spread. Ideally other platform developers join you in the effort, or start developing their own related products, so you start getting an economy with multiple parties, etc. (I've heard of the term time-to-market; it probably means something at least close to this.)

To end this post, where the aim is only to raise the question and not to have answers yet, a concrete example of the situation we have right now: the Oulu3DLive city model is being prepared for its first launch, with some simple applications that hopefully are already useful for the citizens. The Oulu3DInfra project on the University of Oulu side got a grant (800k€ for 2013-2014) for the modeling and city-specific applications. There we have decided, together with the Oulu3d Ltd. company which the city chose to operate the service and develop it into a business, to target already the first major public launch with the Web client, so that the masses are not required to install Tundra or anything else. The Infra project knows about FI-WARE and, I think rightfully, expects MIWI to deliver a functional Web client and the whole platform for the launch.

So where are we with the tech on the FI-WARE side? The first rendering test with the old optimized 9-block model has been made both with XML3d.js and with Three.JS (linked from the reqs/use cases doc). Both load and show the geometry OK. XML3d.js misses most of the textures; I think this is because it cannot use them in compressed form, so they don't fit in memory. The Three.JS version shows all the textures, using the desktop-GPU-friendly compressed data from the DDS files. The XML3d folks have already described how they plan to add support for compressed textures, and that it should be easy enough.
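For reference, this is roughly what the compressed-texture path looks like on the Three.JS side: a minimal sketch using the DDSLoader addon that ships with the three.js examples. The import path and exact API vary between three.js versions, and the texture path here is hypothetical, so treat it as illustrative rather than drop-in code.

```typescript
// Minimal sketch: loading a DXT-compressed DDS texture with three.js.
// Assumes the DDSLoader addon from the three.js examples; exact import
// path and signatures differ between versions.
import * as THREE from 'three';
import { DDSLoader } from 'three/examples/jsm/loaders/DDSLoader.js';

const scene = new THREE.Scene();

new DDSLoader().load(
  'textures/block_01.dds',                    // hypothetical path to one city-block texture
  (texture: THREE.CompressedTexture) => {
    // The DXT data is uploaded to the GPU as-is, so the browser never
    // needs to hold a full uncompressed copy of the image in memory.
    texture.minFilter = THREE.LinearFilter;   // safe default if the file has no mipmaps
    const material = new THREE.MeshBasicMaterial({ map: texture });
    const block = new THREE.Mesh(new THREE.PlaneGeometry(10, 10), material);
    scene.add(block);
  }
);
```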

So one way forward would be to work on improvements to XML3d.js: compressed texture support, shadows (discussed in Winterthur), and whatever else is missing.

But how would that make sense businesswise? With Three.JS we already have all of that working, and a successful test has been online for weeks now, so potential platform users (e.g. app developers at Oulu3d Ltd. and the university) have been able to verify that it works. Why would we go back, instead of continuing forward to meet new challenges? For example, the culling / resource management that the extended model (>30 heavily textured blocks) will require, and probably other things too. It would be nice to have a first solution for that scalability challenge in a few weeks already. It does not require novel research, I think, as dealing with complex scenes is well covered in the literature; a sketch of one common approach is below.
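To make that concrete, one standard textbook approach is per-object frustum culling (which three.js already does by default) combined with distance-based level of detail via THREE.LOD. The block meshes and switch distances below are placeholders, not anything from the actual Oulu model; this is just a sketch of the technique, not a proposed implementation.

```typescript
// Sketch: distance-based LOD for city blocks with THREE.LOD.
// Detailed and coarse meshes are assumed to be prepared elsewhere.
import * as THREE from 'three';

function makeCityBlockLOD(
  detailed: THREE.Mesh,   // full-resolution, fully textured block
  coarse: THREE.Mesh      // low-poly, small-texture stand-in
): THREE.LOD {
  const lod = new THREE.LOD();
  lod.addLevel(detailed, 0);     // used when the camera is close
  lod.addLevel(coarse, 300);     // switch to the cheap version beyond ~300 units (placeholder value)
  return lod;
}

// Usage: add one LOD object per block to the scene; the renderer picks
// the level each frame based on camera distance, and objects outside the
// view frustum are skipped entirely (mesh.frustumCulled is true by default).
```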

Please don’t get me wrong: I don’t have anything against XML3d.js. If there are good reasons to use it -- for example better performance, a good architecture that scales well to large, complex scenes, and a great API for app development -- that’s great! It then solves business problems for us and makes good sense. I just want to point out that we should know this explicitly, and I am willing to work on the analysis if we figure it’s useful (the reqs & use cases doc already gives some ground for this effort too). Ease of XFlow integration is an important point, in my understanding: for example, if an app developer uses XFlow to declare a generated GLSL shader, is it then easy to use that with whatever renderer, or much more straightforward with XML3d.js? A small comparison sketch of the Three.JS side of that question follows.
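For comparison only, this is what attaching a hand-written GLSL shader to a mesh looks like in three.js via ShaderMaterial. It says nothing about how an XFlow-generated shader would actually be routed into another renderer; that mapping is exactly the open question above. The uniform name and colour value are made up for illustration.

```typescript
// Sketch: a raw GLSL shader plugged into three.js with ShaderMaterial.
// position, modelViewMatrix and projectionMatrix are injected by three.js.
import * as THREE from 'three';

const tintMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tint: { value: new THREE.Color(0x88aaff) },  // hypothetical shader parameter
  },
  vertexShader: /* glsl */ `
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform vec3 tint;
    void main() {
      gl_FragColor = vec4(tint, 1.0);
    }
  `,
});

const cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), tintMaterial);
```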

~Toni

P.S. I’m not discussing the declarative authoring & the scene serialization formats etc. here since, as mentioned above, that was settled to begin with in favour of XML3d.



