[Fiware-miwi] massive DOM manipulation benchmark

"Lasse Öörni" lasse.oorni at ludocraft.com
Tue Sep 3 10:07:23 CEST 2013


> The first rough figures we had for these were approximately:
>
> 1. Creating a few thousand objects at start. In a networked setup this
> happens when a client enters a new scene - in that case it takes a while
> for the data to arrive; I'm not sure how long exactly, Lasse probably has
> some estimate and I think we can measure it too. When reading the scene
> from a local file, getting the data is very fast of course (excluding the
> loading of the assets, which can take time even locally - I mean just the
> elements/entities and the attribute data with references to assets).

A typical Tundra entity with Mesh, Name, Placeable and RigidBody
components takes about 256 bytes to transmit initially (a strangely round
number - and in fact I was surprised how large this amount is, so there is
certainly room for optimization). So if we assume a bandwidth of 100 KB/s,
which is on the low side, it would take 2.5 seconds to synchronize 1000
such entities.
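
As a back-of-the-envelope sketch in JS (the constants are just the rough
figures above, nothing measured from the actual protocol):

    // Rough estimate only: 256 bytes per entity, ~100 KB/s bandwidth.
    var BYTES_PER_ENTITY = 256;
    var BANDWIDTH_BYTES_PER_S = 100 * 1024;
    var entityCount = 1000;

    var seconds = (entityCount * BYTES_PER_ENTITY) / BANDWIDTH_BYTES_PER_S;
    console.log(seconds.toFixed(2) + " s to sync " + entityCount + " entities");
    // -> "2.50 s to sync 1000 entities"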

- Lasse




> 2. A couple of hundred constantly moving objects. In the current net sync
> we use max 20 Hz, and then the client scene code inter(- and extra)polates
> the movement at the max 60 fps used for drawing.
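
For illustration only, a minimal sketch of that kind of interpolation between
two network snapshots, evaluated at render time - the names and the data
shape are made up for the sketch, not taken from the actual client code:

    // Interpolate between the two latest ~20 Hz snapshots at the 60 fps render rate.
    function lerp(a, b, t) { return a + (b - a) * t; }

    function interpolatedX(prev, next, renderTime) {
      // prev/next: { time: seconds, x: number } received from the net sync
      var t = (renderTime - prev.time) / (next.time - prev.time);
      if (t > 1) { t = 1; } // past the latest snapshot one would extrapolate instead
      return lerp(prev.x, next.x, t);
    }
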
>
> A local script for e.g. a swarm can (ideally) do many more manipulations
> of the scene than we typically get over the net.
>
>> XML3D (which is the target) and some other SceneGraph and create a small
>> benchmark that does some synthetic test of the relevant operations. This
>> should give us a better idea of what the performance differences
>> between the two versions are for realistic workloads.
>
> Agreed.
>
> Thanks for the clarification and further info about how you've made it;
> it seems that that's the strategy for dealing with the data flows instead
> of going via the DOM document.
>
> ~Toni
>
>> I would assume that most of these operations might actually bypass the
>> generic DOM interfaces and go straight to XML3D in JS (e.g. via typed
>> arrays), which should not be much slower than directly using JS. All the
>> internal XML3D/Xflow operations like rendering, animation, and similar
>> operate in pure JS anyway (as we explained with the layered model in
>> Winterthus), so there should not be a big difference for those
>> operations either (other than differences in the rendering strategy
>> itself).
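
To illustrate the contrast, a rough sketch of updating positions through a
shared typed array versus formatting them into a DOM attribute string - the
array layout and function names are assumptions for this sketch, not the
actual xml3d.js interfaces:

    // Illustrative only: direct typed-array write vs. DOM attribute write.
    var entityCount = 1000;
    var positions = new Float32Array(3 * entityCount); // x, y, z per entity

    function updateViaTypedArray(i, x, y, z) {
      positions[3 * i]     = x;
      positions[3 * i + 1] = y;
      positions[3 * i + 2] = z;
    }

    function updateViaDomAttribute(element, x, y, z) {
      element.setAttribute("position", x + " " + y + " " + z);
    }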
>>
>> Since Sergiy and Torsten are implementing our sync server right now, we
>> should be able to recreate a very similar setup to basic realXtend on
>> our side and do some real live measurements there as well. Depending on
>> how fast this will work, we may be able to go straight to this version
>> for comparisons. Nothing is optimized much there, but it may be a good
>> reference point.
>>
>> Since our implementation is based on the ECA model of realXtend, it
>> might not be too hard to create a common network layer (we use a very
>> simple and inefficient protocol right now) and drive XML3D also from
>> realXtend. This would be a great step forward (plus a perfect way for
>> performance comparisons and tuning).
>>
>>
>> Best,
>>
>> 	Philipp
>>
>> On 02.09.2013 16:02, Toni Alatalo wrote:
>>> On Sep 2, 2013, at 4:37 PM, Kristian Sons <kristian.sons at dfki.de>
>>> wrote:
>>>> I saw that the JS modifications in the test were just calling a
>>>> function that sets the data of the object (something the JS engine
>>>> inlines quickly). To make it a little more comparable, I added a
>>>> callback that sums up the ids. The DOM mutation events are clustered
>>>> and scheduled in a separate phase:
>>>> 10k DOM modifications: ~40 ms + 6 ms in callback
>>>> 10k JS modifications: ~19 ms
>>>> Now we have a _2.5x_ difference!
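
A minimal sketch of that kind of setup - a MutationObserver delivering the
attribute mutations in batches to one callback that sums up the ids. This is
only to illustrate the idea; the element lookup and id handling are
assumptions, not Kristian's actual code:

    // The callback receives the mutation records in batches, in a separate
    // phase after the modifications themselves.
    var sceneRoot = document.getElementById("scene"); // assumed container element
    var idSum = 0;
    var observer = new MutationObserver(function (records) {
      for (var i = 0; i < records.length; i++) {
        idSum += Number(records[i].target.getAttribute("id"));
      }
    });
    observer.observe(sceneRoot, { attributes: true, subtree: true });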
>>>
>>> Yes, we also figured that the simplest, most minimal JS version there is
>>> the 'upper bound' and understood that what is going on with the DOM is
>>> much more complex.
>>>
>>> Please do feel free to fork/share if you want to show how you did the
>>> clustering & scheduling; we also assumed that real code needs that to do
>>> the DOM manipulations efficiently. Of course we can also study it
>>> further in the xml3d.js codebase, where I figure you have it in action.
>>>
>>> This first version of the benchmark was basically to see quickly how
>>> well the naive usage of the DOM performs. I thought that it's actually
>>> quite good.
>>>
>>>> The DOM is meant as an interface to the user. That's how we designed
>>>> XML3D. Medium-range modifications (a few hundred per frame) are all
>>>> okay. One must not forget that users will use jQuery etc., which can --
>>>> used the wrong way -- slow down the whole application (not only for 3D
>>>> content). For all other operations we have concepts like Xflow, where
>>>> the calculation is hidden behind the DOM interface. The rendering is
>>>> also hidden behind the DOM structure. We even have our own memory
>>>> management to get better JS performance.
>>>> Thus it really depends on the use case and the design of the DOM
>>>> interface. The proposed benchmark might be too simple and one should
>>>> be careful with statements.

>>>
>>> Yes, I think that is exactly the question for the overall design of the
>>> architecture: what the usage of the DOM should be like with respect to
>>> network sync, rendering, scripts, input handling etc., and further how
>>> much and what kind of DOM manipulation that results in and how it would
>>> best be handled.
>>>
>>> Thanks for the quick and well informed feedback!
>>>
>>> And certainly this naive test was not meant as an end-all, be-all
>>> benchmark for any final conclusions but, as I tried to emphasise, the
>>> total opposite: just a starting point and also a way for us to learn
>>> about the mutation observers (we'd never used them before).
>>>
>>>> Best regards,
>>>>  Kristian
>>>
>>> Cheers,
>>> ~Toni
>>>
>>>>> Ok, so here is the info about the DOM manipulation performance test
>>>>> that we recently started working on. I totally agree with the idea of
>>>>> having small targeted groups for specific topics, such as this 3d web
>>>>> client architecture. However, as those don't exist yet, let's use this
>>>>> list for now for the first communications before the conf call that's
>>>>> being scheduled for this week.
>>>>>
>>>>> This benchmark is really simple and by no means fancy at this point,
>>>>> but it already gives some information. 'Massive' in the email subject
>>>>> refers to the fact that it can do a massive amount of DOM
>>>>> manipulations, not that the test code itself would be massive at all :)
>>>>>
>>>>> The code and a bit of documentation are available at:
>>>>> https://github.com/playsign/wex-experiments/tree/master/domBenchmark
>>>>>
>>>>> The rationale is that in the current native realXtend implementation
>>>>> in Tundra SDK we have a similar mechanism: the core scene holds all
>>>>> the data in entity attributes, and for example incoming network
>>>>> messages that move objects are handled so that the net part modifies
>>>>> that scene; the renderer observes the scene and updates the graphics
>>>>> engine scene accordingly.
>>>>>
>>>>> In the FI-WARE goals and plans, and in the declarative 3d web
>>>>> movement in general, there are good reasons for having all the data
>>>>> in the DOM -- for example being able to use the browser debugger to
>>>>> see and modify the values. We have the same in native Tundra SDK's
>>>>> scene & entity editor GUIs.
>>>>>
>>>>> I drew a first sketch of an architecture diagram of how this could
>>>>> work in the browser client, first in a minimally modified WebTundra
>>>>> where the JS array used for the scene data is simply switched to the
>>>>> DOM document instead:
>>>>> https://rawgithub.com/realXtend/doc/master/dom/rexdom.svg
>>>>> (in case someone wants to edit it, the source is at
>>>>> https://github.com/realXtend/doc/tree/master/dom - I used Inkscape,
>>>>> where SVG is the native format).
>>>>>
>>>>> To examine the feasibility of this design I wanted to know how it
>>>>> performs. We don't have a conclusive analysis or any final verdict
>>>>> yet. We test with quite large numbers because we can have quite a lot
>>>>> happening in the scenes -- at least tens or hundreds of constantly
>>>>> moving objects etc. In this design every change to every object
>>>>> position would go via the DOM.
>>>>>
>>>>> To contrast, we have a similar mechanism implemented in pure JS -
>>>>> just an array with the data and a reference to the observing callback
>>>>> function. It is much simpler in functionality than what the DOM and
>>>>> Mutation Observers provide -- it can perhaps be considered an upper
>>>>> bound on how fast an optimized solution could work, so it is not a
>>>>> fair comparison.
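
Roughly, the pure-JS variant has this shape (a simplified sketch; the actual
test code is in the repository linked above):

    // A plain array plus a directly referenced observer callback,
    // with none of the DOM machinery in between.
    var data = new Array(10000);
    var observer = function (index, value) { /* e.g. sum up ids */ };

    function setValue(index, value) {
      data[index] = value;
      observer(index, value); // direct call
    }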
>>>>>
>>>>> Some early figures:
>>>>>
>>>>> Creating DOM elements is quite fast: I just created 10k elements in
>>>>> 0.136 seconds.
>>>>>
>>>>> Manipulating them is quite fast too: 10k attribute modifications took
>>>>> 84 ms for me just now.
>>>>>
>>>>> Erno did a bit more thorough benchmarking with 100 repeated runs,
>>>>> both for the DOM-using and pure-JS versions. His figures are on the
>>>>> benchmark project's website, but I paste them here too:
>>>>>
>>>>> "running the test for 10k elements 100 times in a row, the pure-js
>>>>> version takes 19 ms. The DOM version of same takes 4700 ms. So a 250x
>>>>> difference."
>>>>>
>>>>> We are aware that this is a microbenchmark with all kinds of
>>>>> fallacies; it can be improved if real benchmarking becomes necessary.
>>>>>
>>>>> But now I think the first question is: what are the real application
>>>>> requirements? How many DOM manipulations do we really need?
>>>>>
>>>>> We talked a bit about that and I think we can construct a few
>>>>> scenarios for it. Then we can calculate whether the performance is
>>>>> ok. I wrote that question into the stub overall doc that we plan to
>>>>> continue in https://github.com/realXtend/doc/tree/master/dom
>>>>>
>>>>> This whole track was sidetracked a bit by the Campus Party
>>>>> preparations recently, but I think now is a good time to continue.
>>>>>
>>>>> I hope this clarified the situation; I'm sorry for the confusion that
>>>>> the quick '300x diff' pre-info during the weekend caused. And to
>>>>> make it clear: we haven't done this with a real 3d client or
>>>>> networking or anything like that, but decided to benchmark first in
>>>>> isolation.
>>>>>
>>>>> About the performance difference, we suspect that it is due to the
>>>>> internal machinery that the browser executes for every DOM change,
>>>>> and possibly also something in the C++ <-> JS interfacing. I looked
>>>>> around a bit and found that to optimize large DOM manipulations, web
>>>>> folks batch e.g. element creations rather than doing them
>>>>> individually.
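
One common batching trick, for illustration: building the new elements into a
detached DocumentFragment and inserting them into the live document in one go.
The tag name and the container element are placeholders for this sketch:

    // Batch element creation: build into a DocumentFragment, attach once.
    var sceneRoot = document.getElementById("scene"); // placeholder container
    var fragment = document.createDocumentFragment();
    for (var i = 0; i < 10000; i++) {
      var el = document.createElement("group"); // placeholder tag name
      el.setAttribute("id", "e" + i);
      fragment.appendChild(el);
    }
    sceneRoot.appendChild(fragment); // single insertion into the live DOM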
>>>>>
>>>>> Do ask for any further info, and please bear with us and the early,
>>>>> somewhat cumbersome-to-use test case (we mostly just made it for
>>>>> ourselves, for now at least).
>>>>>
>>>>> Cheers,
>>>>> ~Toni
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> _______________________________________________________________________________
>>>>
>>>> Kristian Sons
>>>> Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI
>>>> Agenten und Simulierte Realität
>>>> Campus, Geb. D 3 2, Raum 0.77
>>>> 66123 Saarbrücken, Germany
>>>>
>>>> Phone: +49 681 85775-3833
>>>> Phone: +49 681 302-3833
>>>> Fax:   +49 681 85775–2235
>>>> kristian.sons at dfki.de
>>>> http://www.xml3d.org
>>>>
>>>> Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster
>>>> (Vorsitzender)
>>>> Dr. Walter Olthoff
>>>>
>>>> Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
>>>> Amtsgericht Kaiserslautern, HRB 2313
>>>> _______________________________________________________________________________
>>>
>>>
>>>
>>
>>
>> --
>>
>> -------------------------------------------------------------------------
>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
>> Trippstadter Strasse 122, D-67663 Kaiserslautern
>>
>> Geschäftsführung:
>>  Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>>  Dr. Walter Olthoff
>> Vorsitzender des Aufsichtsrats:
>>  Prof. Dr. h.c. Hans A. Aukes
>>
>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
>> USt-Id.Nr.: DE 148646973, Steuernummer:  19/673/0060/3
>> ---------------------------------------------------------------------------
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi
>


-- 
Lasse Öörni
Game Programmer
LudoCraft Ltd.




