On 27 Oct 2013, at 17:12, Philipp Slusallek <Philipp.Slusallek at dfki.de> wrote:

> I understand that the work you have done on the model scalability side will be ported to XML3D within WP13 and the current three.js version is just an intermediate step. Is this understanding correct? If so, I have no problem with that.

I'm sorry but I can't parse this. XML3D is an XML schema, right? How would you expect a scene manager to be ported to it? If I rephrase your statement as 'within WP13 a system will be created where both a) scalable rendering and b) declarative authoring with XML3D work', then the answer is yes. One way to reach that goal is to have XML3D work with three.js -- in that case the grid scene manager does not need porting, as it would already work.

> Maybe I missed this part, but maybe we need to communicate things like this a bit more. See my suggestion a few emails ago of setting up an Agile backlog where we record the tasks to be done and their schedule.

Agreed, and I think that's a good idea. We used agile practices in the earlier Tekes-funded reX project. One of the practices I've been considering for MIWI is sprint emails: short descriptions of the tasks a party is starting. I don't know how the different companies do their scheduling now -- back then the sprint master (first Jani Pirkola and then Antti Ilomäki) coordinated the whole project, all the tasks for all the companies were in one backlog, sprints were synchronised to the same schedule, and we made releases from the common codebase as part of that 3-week cycle. Now the GEs are more separate (not all part of one codebase), so I suppose it is fine for internal cycles to differ, but that wouldn't prevent such emails from being used; they'd just come at different intervals from the different parties. As an old example, here's Mikko Pallari's beginning-of-sprint email from 2010 ..
working on adding arbitrary EC storage support to Opensimulator in ModRex :) https://groups.google.com/forum/#!topic/realxtend-dev/7_SvYPzi3VI

For the backlog we just had a Google spreadsheet. In Playsign's game project we've now used the Pivotal Tracker web service (Admino used that at some point too; I don't know what they use nowadays).

> But also let me make sure there is no misunderstanding regarding "declarative" either: While realXtend may have a declarative approach, declarative in the Web context refers to HTML and its in-memory representation in the DOM.

Yes -- as described in the plans and in this attempt at a more detailed architecture diagram of the client internals, the plan to have the DOM integration has been firm all along: https://rawgithub.com/realXtend/doc/master/dom/rexdom-dom_as_ui.svg -- that's why we made the DOM tests with mutation observers etc. to begin with in July.

> Let me try to better explain my point about using the KIARA API: KIARA offers a nice (at least I would argue :-), and even declarative API. As we talked about using a common component model and a common translation layer between ECA and XML3D, it would make sense to hide the realXtend protocol adaptation behind this KIARA layer. This is one of the two versions of arriving at a common model by doing the unification on the client side with a common API and data model but having two different protocols behind it: one using your kNet approach for realXtend and one using KIARA all the way across the network for our server. This would make it so much easier to port from the direct driving of three.js to XML3D as discussed above.

OK, this seems helpful. Another angle to this is how I've been thinking about, and how we've been using, the entity system. realXtend applications work by using the entity system API to create entities and components, set their attributes, etc. That is abstracted away from networking -- the ECA model is protocol independent.
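A minimal sketch of what that protocol independence means in practice: application code only sets attributes on components, and a sync layer (kNet, KIARA, or anything else) merely subscribes to change events. All class, method and event names here are invented for illustration -- this is not the actual realXtend/WebTundra API.

```javascript
// Entity-Component-Attribute sketch: attribute writes emit change events;
// a networking layer can listen without app code ever seeing it.
class Component {
  constructor(entity, name) {
    this.entity = entity;
    this.name = name;
    this.attrs = {};
  }
  set(attr, value) {
    this.attrs[attr] = value;
    // App code never talks to the network directly; it only sets attributes.
    this.entity.scene.emit('attributeChanged', this.entity.id, this.name, attr, value);
  }
}

class Entity {
  constructor(scene, id) {
    this.scene = scene;
    this.id = id;
    this.components = {};
  }
  createComponent(name) {
    const c = new Component(this, name);
    this.components[name] = c;
    return c;
  }
}

class Scene {
  constructor() { this.entities = {}; this.listeners = []; this.nextId = 1; }
  createEntity() {
    const e = new Entity(this, this.nextId++);
    this.entities[e.id] = e;
    return e;
  }
  on(cb) { this.listeners.push(cb); }
  emit(...args) { this.listeners.forEach(cb => cb(...args)); }
}

// A hypothetical sync layer: it could serialize these changes over kNet or
// KIARA -- the application code below is identical either way.
const scene = new Scene();
const log = [];
scene.on((event, entityId, comp, attr, value) =>
  log.push(`${event}: entity ${entityId} ${comp}.${attr} = ${value}`));

const ent = scene.createEntity();
ent.createComponent('placeable').set('x', 1);
console.log(log[0]); // the change as seen by the (stand-in) sync layer
```

The point of the sketch is only the direction of the dependency: the sync layer depends on the entity system's change events, never the other way around.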
When you say myent.mycomponent.someattr = 1, the fact that kNet is then used in Tundra for the synchronisation is completely hidden; there isn't even any way for application code to touch it (from JavaScript -- in C++ you could probably hack something). Perhaps in this way KIARA and the ECA model are similar?

And about porting or interoperability between xml3d.js and three.js: again, the entity system is also renderer independent -- you just create an entity with e.g. Mesh and Placeable components, add it to your scene, and it shows up automatically. When authoring a scene and creating an application, even with native Tundra, Ogre is abstracted out in the same way as the networking is, at least for the basics.

In our (Playsign) use-case / test-driven dev style, we earlier created the multiplayer networked Pong game and later the Oulu city car driving demo with the idea that, as the platform gets there, we can gradually port those applications to an entity system which integrates visuals, networking and physics -- their codebases should become *much* simpler and cleaner, while the functionality should become better too. That is explained in the last paragraph of the intro to chapter "4. Use cases and tests" in the requirements doc draft: https://docs.google.com/document/d/1P03BgfEG1Ly2dI2Cs9ODVDmBhH4A438Ynlaxc4vXn1o/edit#heading=h.qflk54puc3rr

It is possible to look at the GRID.Manager made for the city model in the same way. It needs very little from the renderer: the ability to load and unload objects and assets. In Tundra that would work via the entity system. This is visible in the old texture unload -> memory freeing test that I made for Tundra some 2-3 years ago -- it only uses the material reference attribute of the Mesh component to load textures and then the Asset API to unload them. It does not access the Ogre API directly.
https://github.com/realXtend/naali/blob/tundra2/bin/scenes/TextureMemoryManagement/texmanager.js

So if we go via a similar entity (& asset) system in the web client, it becomes easy to port -- perhaps similar to what you say above. It also becomes possible to support multiple renderers if that's ever needed. To return to DOM integration: one way would be to use the DOM as the API, in which case the grid scene manager would add/remove/configure DOM nodes. But I understood earlier that the idea is to use the DOM as the UI only, not as the way for the rendering internals to work with themselves (for performance reasons).

> Essentially, in both cases the JS application gets delivered the components that are coming in over the wire. We just hide the kNet code behind the KIARA API. Since all the network code is already there and we can probably easily agree on a common format for the delivered components in JS (should be pretty obvious), this might not be a big effort anyway.
> What do you think? I think we are very close, if not already in the exact same thoughts.

I don't know KIARA well enough to understand this suggestion completely. If KIARA is similar to the ECA model and provides a nice way, for example, for the custom component type definitions we need, then it can suit us well. In any case I think we've all assumed that the kNet code in the web client will be hidden from apps the same way as it is in native Tundra. OTOH networking is not on our (Playsign) agenda anyway (we just added the WebRTC part to the Pong use case in August to quickly get a networked test too, as it seemed to be far away otherwise). But certainly how it works is an essential part of the client architecture and important for our businesses etc., so I'll try to understand more -- I'll talk with Erno and perhaps Lasse about that KIARA session they were in (I mentioned this earlier too but didn't get the chance yet). Thanks! And again, hopefully this clarified something..
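To illustrate the renderer-independence point above: a scene manager that only needs "load" and "unload" can work against a tiny interface, behind which either a three.js- or an xml3d.js-based backend sits. The interface and all names below are hypothetical sketches, not the actual GRID.Manager or texmanager.js code.

```javascript
// Renderer independence sketch: backends share a minimal load/unload
// interface, so the manager never touches three.js or xml3d.js directly.
// All names here are invented for illustration.
class ThreeBackend {
  constructor() { this.loaded = new Set(); }
  load(meshRef)   { this.loaded.add(meshRef); }    // would create a THREE.Mesh
  unload(meshRef) { this.loaded.delete(meshRef); } // would free GPU resources
}

class Xml3dBackend {
  constructor() { this.loaded = new Set(); }
  load(meshRef)   { this.loaded.add(meshRef); }    // would add a <mesh> element
  unload(meshRef) { this.loaded.delete(meshRef); } // would remove it
}

// The manager depends only on the interface, the same way the old Tundra
// texture test used the Mesh component and Asset API instead of Ogre.
class SceneManager {
  constructor(renderer) { this.renderer = renderer; }
  show(meshRef) { this.renderer.load(meshRef); }
  hide(meshRef) { this.renderer.unload(meshRef); }
}

const mgr = new SceneManager(new ThreeBackend()); // or new Xml3dBackend()
mgr.show('city_block_12.mesh');
mgr.hide('city_block_12.mesh');
```

Swapping the backend is a one-line change at construction time; the manager code is untouched, which is the porting property discussed above.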
> Philipp

~Toni

> Am 25.10.2013 05:02, schrieb Toni Alatalo:
>> On 24 Oct 2013, at 16:28, Philipp Slusallek <Philipp.Slusallek at dfki.de> wrote:
>>
>> Continuing the, so far apparently successful, technique of clarifying a single point at a time: a note about scene declarations and a description of the scalability work.
>>
>>> I am not too happy that we are investing the FI-WARE resources into circumventing the declarative layer completely.
>>
>> We are not doing that. realXtend has had a declarative layer for the past 4-5 years(*) and we totally depend on it -- that's not going away. The situation is totally the opposite: it is assumed to always be there. There's absolutely no work done anywhere to circumvent it somehow. [insert favourite 7th way of saying this].
>>
>> In my view the case with the current work on scene rendering scalability is this: we already have all the basics implemented and tested in some form -- realXtend web client implementations (e.g. 'WebTundra' in the form of Chiru-Webclient on GitHub, and other works) have complete entity systems integrated with networking and rendering. xml3d.js is the reference implementation for XML3D parsing, rendering etc. But one of the identified key parts missing was managing larger complex scenes. And that is a pretty hard requirement from the Intelligent City use case, which has been the candidate for the main integrated larger use case. IIRC scalability was also among the original requirements and proposals. Also, Kristian stated here that he finds it a good area to work on now, so the basic motivation for the work seemed clear.
>>
>> So we tackled this straight on by first testing the behaviour of loading & unloading scene parts and then proceeding to implement a simple but effective scene manager. We're documenting that separately so I won't go into details here.
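The load/unload testing and the "simple but effective" scene manager described above might look roughly like this grid-based sketch. The cell size, radius and loader callbacks are invented for illustration -- this is not the actual GRID.Manager documented elsewhere.

```javascript
// Grid scene manager sketch: the scene is split into square cells; cells
// near the camera are loaded, far ones unloaded to free memory.
// Cell size, radius and the loader interface are illustrative assumptions.
class GridSceneManager {
  constructor(cellSize, radius, loader) {
    this.cellSize = cellSize;   // world units per grid cell
    this.radius = radius;       // how many cells around the camera to keep
    this.loader = loader;       // { load(key), unload(key) }
    this.loadedCells = new Set();
  }
  cellKey(cx, cz) { return `${cx},${cz}`; }

  // Call whenever the camera moves: loads newly nearby cells, unloads far ones.
  update(camX, camZ) {
    const cx = Math.floor(camX / this.cellSize);
    const cz = Math.floor(camZ / this.cellSize);
    const wanted = new Set();
    for (let dx = -this.radius; dx <= this.radius; dx++)
      for (let dz = -this.radius; dz <= this.radius; dz++)
        wanted.add(this.cellKey(cx + dx, cz + dz));
    for (const key of wanted)
      if (!this.loadedCells.has(key)) { this.loader.load(key); this.loadedCells.add(key); }
    for (const key of [...this.loadedCells])
      if (!wanted.has(key)) { this.loader.unload(key); this.loadedCells.delete(key); }
  }
}

const events = [];
const gridMgr = new GridSceneManager(100, 1, {
  load: k => events.push('load ' + k),
  unload: k => events.push('unload ' + k),
});
gridMgr.update(50, 50);   // loads the 3x3 block of cells around cell (0,0)
gridMgr.update(250, 50);  // camera moved to cell (2,0): loads new, unloads old
```

The loader callbacks are where the renderer- or asset-API-specific work (creating meshes, freeing textures) would plug in, keeping the manager itself renderer independent.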
>> So far it works surprisingly well, which has been a huge relief during the past couple of days -- not only for us on the platform dev side but also for the modelling and application companies working with the city model here (I demoed the first version in a live meeting on Wednesday). We'll post demo links soon (within days), as soon as we can confirm a bit more that the results are conclusive. Now, in general, for the whole 3D UI and nearby GEs I think we have most of the parts (and the rest are coming) and 'just' need to integrate..
>>
>> The point here is that in that work the focus is on the memory management of the rendering and the efficiency & non-blockingness of loading geometry data and textures for display. In my understanding that is orthogonal to scene declaration formats -- or networking, for that matter. In any case we get geometry and texture data to load and manage. An analogue (just to illustrate, not a real case): when someone works on improving the CPU process scheduler in the Linux kernel, he/she does not touch file system code. That does not mean that the improved scheduler proposes to remove file system support from Linux. Nor is it investing resources into circumventing (your term) file systems -- even if in scheduler development it is practical to just create competing processes from code, and not load applications to execute from the file system. It is absolutely clear to the scheduler developer how file systems are part of the big picture; they are just not relevant to the task at hand.
>>
>> Again, I hope this clarifies what's going on. Please note that I'm /not/ addressing renderer alternatives and selection here *at all* -- only the relationship of the declarative layer and the scalability work that you seemed to bring up in the sentence quoted in the beginning.
>>
>>> I suggest that we start to work on the shared communication layer using the KIARA API (part of a FI-WARE GE) and add the code to make the relevant components work in XML3D.
>>> Can someone put together a plan for this? We are happy to help where necessary -- but from my point of view we need to do this as part of the Open Call.
>>
>> I'm sorry, I don't get how this is related. Then again, I was not in the KIARA session that one Wednesday morning -- Erno and Lasse were, so I can talk with them to get an understanding. Right now I can't find a thought-path from renderer to networking here yet.. :o
>>
>> Also, I do need to (re-)read all these posts -- so far I've mostly had small timeslots to quickly clarify some basic miscommunications (like the POI data vs. POI-data-derived visualisations topic in the other thread, and the case with the declarative layer & scalability work in this one). I'm mostly not working at all this Friday (am with the kids), and in general I only work on FI-WARE 50% of my work time (though I don't mind when both the share and the total are more -- this is business development!), so it can take a while on my part.
>>
>>> Philipp
>>
>> Cheers,
>> ~Toni
>>
>> (*) "realXtend has had a declarative layer for the past 4-5 years": in the very beginning, in 2007-2008, we didn't have it in the same way, due to how the first prototype was based on Opensimulator and the Second Life (tm) viewer. The only way to create a scene was, in technical terms, to send object creation commands over UDP to the server. Or to write code to run in the server. That is how Second Life was originally built: people used the GUI client to build the worlds one object at a time, and there was no support for importing or exporting objects or scenes (people did write scripts to generate objects etc.). For us that was a terrible nightmare (ask anyone from Ludocraft who worked on the Beneath the Waves demo scene for reX 0.3 -- I was fortunate enough not to be involved in that period). As a remedy to that insanity, I first implemented importing from Ogre's very simple .scene ('dotScene') format in the new Naali viewer (which later became the Tundra codebase).
>> Then we could finally bring full scenes from Blender and Max. We were still using Opensimulator as the server then, and after my client-side prototype Mikko Pallari implemented dotScene import on the server side, so we got an OK production solution. Nowadays Opensimulator has OAR files, and the community likewise totally depends on those. On the reX side, Jukka Jylänki & Lasse wrote Tundra; we switched to it and to the TXML & TBIN support there, which still seem OK as machine-authored formats. We support Ogre dotScene import in current Tundra too. And even Linden (the Second Life company) has gotten to support COLLADA import -- I think mostly meant for single objects, but IIRC it works for scenes too.
>>
>> Now XML3D seems like a good next step to get a human-friendly declarative format (and perhaps just a more sane way to use XML in general). It actually addresses an issue I created in our tracker 2 years ago, "xmlifying txml": https://github.com/realXtend/naali/issues/215 .. the draft in the gist linked from there is a bit more like XML3D than TXML. I'm very happy that you've already made XML3D so we didn't have to try to invent it :)
>>
>>> Am 23.10.2013 09:51, schrieb Toni Alatalo:
>>>> On Oct 23, 2013, at 8:00 AM, Philipp Slusallek <Philipp.Slusallek at dfki.de> wrote:
>>>>
>>>>> BTW, what is the status of the rendering discussion (three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided.
>>>>
>>>> I'm not aware of any overlapping work so far -- then again, I'm not fully aware of all that is going on with xml3d.js.
>>>>
>>>> For the rendering for 3D UI, my conclusion from the discussion on this list was that it is best to use three.js now for the case of big, complex, fully featured scenes, i.e. typical realXtend worlds (e.g. Ludocraft's Circus demo or the Chesapeake Bay from the LVM project, a Creative Commons realXtend example scene) -- and in MIWI, in particular, for the city model / app now.
>>>> Basically because that's where we already got the basics working in the August-September work (and had earlier in realXtend web client codebases). That is also why we implemented the scalability system on top of it now -- scalability was the only thing missing.
>>>>
>>>> Until yesterday I thought the question was still open regarding XFlow integration. The latest information I had was that there was no hardware acceleration support for XFlow in xml3d.js either, so it seemed worth a check whether it's better to implement it for xml3d.js or for three.js.
>>>>
>>>> Yesterday, however, we learned from Cyberlightning that work on XFlow hardware acceleration was already ongoing in xml3d.js (I think mostly by DFKI so far?). And that it was decided that work within FI-WARE is now limited to that (and we also understood that the functionality will be quite limited by April, or?).
>>>>
>>>> This obviously affects the overall situation.
>>>>
>>>> At least in an intermediate stage this means that we have two renderers for different purposes: three.js for some apps, without XFlow support, and xml3d.js for others, with XFlow but with other things missing. This is certain, because that is the case today and probably in the coming weeks at least.
>>>>
>>>> For a good final goal I think we can be clever and make an effective roadmap. I don't know yet what it is, though -- certainly to be discussed. The requirements doc -- perhaps by continuing work on it -- hopefully helps.
>>>>
>>>>> Philipp
>>>>
>>>> ~Toni
>>>>
>>>>> Am 22.10.2013 23:03, schrieb toni at playsign.net:
>>>>>> Just a brief note: we had some interesting preliminary discussion triggered by how the data schema that Ari O.
>>>>>> presented for the POI system seemed at least partly similar to what the Real-Virtual interaction work had resulted in too -- and in fact about how the proposed POI schema was basically a version of the entity-component model which we've already been using for scenes in realXtend (it is inspired by / modelled after it, Ari told us). So it can be much related to the Scene API work in the Synchronization GE too. As the action point, we agreed that Ari will organize a specific work session on that.
>>>>>>
>>>>>> I was now thinking that it perhaps at least partly leads back to the question: how do we define (and implement) component types? I.e. what was mentioned in that entity-system post a few weeks back (with links to reX IComponent etc.). I mean: if functionality such as POIs and real-world interaction somehow results in custom data component types, does that mean a key part of the framework is a way for those systems to declare their types, so that it integrates nicely into the whole we want? I'm not sure -- too tired to think it through now -- but anyhow I just wanted to mention that this was one topic that came up.
>>>>>>
>>>>>> I think Web Components is again something to check -- in XML terms reX Components are xml(3d) elements, just ones that are usually in a group (according to the reX entity <-> xml3d group mapping). And Web Components are about defining & implementing new elements (as Erno pointed out in a different discussion about XML-HTML authoring in the session).
>>>>>>
>>>>>> BTW, thanks Kristian for the great comments in that entity-system thread -- it was really good to learn about the alternative attribute access syntax and the validation in XML3D(.js).
>>>>>>
>>>>>> ~Toni
>>>>>>
>>>>>> P.S.
>>>>>> for (Christof &) the DFKI folks: I'm sure you understand the rationale of these Oulu meetings -- the idea is of course not to exclude you from the talks, it just makes sense for us to meet live too as we are in the same city after all; naturally you on the DFKI team also talk locally. Perhaps it is a good idea that we make notes so that we can post them e.g. here (I'm not volunteering, though! 😜). Also, the now-agreed bi-weekly setup on Tuesdays luckily works so that we can then summarize fresh in the global Wednesday meetings and continue the talks etc.
>>>>>>
>>>>>> *From:* Erno Kuusela
>>>>>> *Sent:* Tuesday, October 22, 2013 9:57 AM
>>>>>> *To:* Fiware-miwi
>>>>>>
>>>>>> Kari from CIE offered to host it this time, so see you there at 13:00.
>>>>>>
>>>>>> Erno
>>>>>> _______________________________________________
>>>>>> Fiware-miwi mailing list
>>>>>> Fiware-miwi at lists.fi-ware.eu
>>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi
>>>>>
>>>>> --
>>>>> -------------------------------------------------------------------------
>>>>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
>>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern
>>>>>
>>>>> Geschäftsführung:
>>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>>>>> Dr. Walter Olthoff
>>>>> Vorsitzender des Aufsichtsrats:
>>>>> Prof. Dr. h.c. Hans A.
>>>>> Aukes
>>>>>
>>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
>>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
>>>>> ---------------------------------------------------------------------------
>>>>> <slusallek.vcf>