Ok, so here is some info about the DOM manipulation performance test that we recently started working on. I totally agree with the idea of having small targeted groups for specific topics, such as this 3D web client architecture. However, as those don't exist yet, let's use this list for the first communications before the conf call that's being scheduled for this week.

This benchmark is really simple and by no means fancy at this point, but it already gives some information. 'Massive' in the email subject refers to the fact that it can do a massive amount of DOM manipulations, not that the test code would be massive at all :) The code and a little documentation are available at: https://github.com/playsign/wex-experiments/tree/master/domBenchmark

The rationale is that the current native realXtend implementation in Tundra SDK has a similar mechanism: the core scene holds all the data in entity attributes, and for example incoming network messages that move objects are handled so that the net part modifies that scene, which the renderer observes, updating the graphics engine scene accordingly. In the FI-WARE goals and plans, and in the declarative 3D web movement in general, there are good reasons for having all the data in the DOM -- for example being able to use the browser debugger to see and modify the values. We have the same in native Tundra SDK's scene & entity editor GUIs.

I drew a first sketch of an architecture diagram showing how this could work in the browser client -- in a minimally modified WebTundra first, where the JS array used for the scene data is simply switched to the DOM document instead: https://rawgithub.com/realXtend/doc/master/dom/rexdom.svg (in case someone wants to edit it, the source is at https://github.com/realXtend/doc/tree/master/dom - I used Inkscape, for which SVG is the native format). To examine the feasibility of this design I wanted to know how it performs. We don't have conclusive analysis nor any final verdict yet.
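To make the idea concrete, here is a minimal browser-side sketch of the DOM-backed design (the function name `observeScene` and the callback shape are made up for illustration -- this is not code from the benchmark repo): entity data lives in DOM attributes, network code writes to the DOM, and the renderer is notified via a MutationObserver.

```javascript
// Sketch: scene data lives in DOM attributes; the renderer observes
// changes via a MutationObserver. Names are illustrative only.
function observeScene(sceneEl, onAttributeChange) {
  const observer = new MutationObserver((mutations) => {
    for (const m of mutations) {
      if (m.type === 'attributes') {
        // e.g. re-read the moved entity's position, update the renderer
        onAttributeChange(m.target, m.attributeName);
      }
    }
  });
  // Watch attribute changes on every entity element under the scene.
  observer.observe(sceneEl, { attributes: true, subtree: true });
  return observer;
}

// Browser-only usage sketch: a network message moving an entity just
// writes an attribute; the renderer picks the change up via the observer.
if (typeof document !== 'undefined') {
  const scene = document.createElement('scene');
  observeScene(scene, (el, attr) => {
    console.log('entity changed:', attr, el.getAttribute(attr));
  });
  const entity = document.createElement('entity');
  scene.appendChild(entity);
  entity.setAttribute('position', '1 2 3'); // net code writes here
}
```

Note that MutationObserver delivers mutations asynchronously (as a microtask batch), which is one of the functional differences to the plain-JS callback variant mentioned below.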
We test with quite large numbers because quite a lot can be happening in the scenes -- at least tens or hundreds of constantly moving objects, etc. In this design, every change to every object position would go via the DOM. For contrast, we implemented a similar mechanism in pure JS: just an array with the data and a reference to the observing callback function. It is much simpler in functionality than what the DOM and Mutation Observers provide, so it can perhaps be considered an upper bound on how fast an optimized solution could be. Not a fair comparison, in other words.

Some early figures: creating DOM elements is quite fast; I just created 10k elements in 0.136 seconds. Manipulating is quite fast too; 10k attribute modifications took 84 ms for me just now. Erno did a little better benchmarking with 100 repeated runs, both for the DOM-using and pure-JS versions. His figures are on the benchmark project's website, but I'll paste them here too: "running the test for 10k elements 100 times in a row, the pure-js version takes 19 ms. The DOM version of same takes 4700 ms. So a 250x difference."

We are aware that this is a microbenchmark with all kinds of fallacies; it can be improved if real benchmarking becomes necessary. But now I think the first question is: what are the real application requirements? How many DOM manipulations do we really need? We talked a bit about that, and I think we can construct a few scenarios for it. Then we can calculate whether the performance is OK. I wrote that question into the stub overall doc that I plan to continue in https://github.com/realXtend/doc/tree/master/dom

This whole track was overridden a bit by the Campus Party preparations recently, but I think now is a good time to continue. I hope this clarified the situation; sorry for the confusion that the quick '300x diff' pre-info during the weekend caused.
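For reference, the pure-JS variant is essentially this kind of pattern -- a simplified sketch in the same spirit as the benchmark, not the actual code from the repo (`makeScene` and the timing loop are illustrative):

```javascript
// Simplified sketch of the pure-JS variant: scene data is a plain array
// and every write invokes the observing callback directly.
function makeScene(count, onChange) {
  const data = new Array(count).fill(0);
  return {
    set(i, value) {
      data[i] = value;
      onChange(i, value); // the "renderer" is notified synchronously
    },
    get(i) { return data[i]; },
  };
}

// Micro-benchmark in the same spirit: time 10k observed writes.
let updates = 0;
const scene = makeScene(10000, () => { updates++; });
const t0 = Date.now();
for (let i = 0; i < 10000; i++) scene.set(i, i * 2);
console.log(`10k writes, ${updates} observer calls, ${Date.now() - t0} ms`);
```

This does strictly less work than the DOM path (no attribute string conversion, no mutation records, no C++ boundary crossing), which is why we treat it as an upper bound rather than a fair head-to-head.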
And to make it clear: we haven't done this with a real 3D client or networking or anything like that, but decided to benchmark in isolation first. About the performance difference, we suspect it is due to the internal machinery that the browser executes for every DOM change, and possibly also something in the C++ <-> JS interfacing. I looked around a bit and found that to optimize large DOM manipulations, web folks batch e.g. element creations instead of doing them individually.

Do ask for any further info, and please bear with us and the early, somewhat cumbersome-to-use test case (we mostly just made it for ourselves, for now at least).

Cheers, ~Toni
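P.S. The batching trick mentioned above looks roughly like this -- a sketch using DocumentFragment, which is the standard way to batch insertions; the function name `createEntitiesBatched` is made up for illustration and is not from our benchmark code:

```javascript
// Sketch: build elements into a DocumentFragment so the live DOM tree
// is touched only once, instead of once per created element.
function createEntitiesBatched(doc, parent, count) {
  const frag = doc.createDocumentFragment();
  for (let i = 0; i < count; i++) {
    const el = doc.createElement('entity');
    el.setAttribute('id', String(i));
    frag.appendChild(el); // off-tree, so no live-DOM work yet
  }
  parent.appendChild(frag); // single insertion into the live tree
}

// Browser-only usage sketch:
if (typeof document !== 'undefined') {
  createEntitiesBatched(document, document.body, 10000);
}
```

Whether batching helps our per-frame attribute *modification* case as much as it helps bulk creation is exactly the kind of thing the requirement scenarios should tell us.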