From toni at playsign.net  Sat Nov  2 00:38:37 2013
From: toni at playsign.net (Toni Alatalo)
Date: Sat, 2 Nov 2013 01:38:37 +0200
Subject: [Fiware-miwi] three.js WebComponents
Message-ID: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net>

Apparently some three.js user/dev has gotten inspired by WebComponents & Polymer and written https://github.com/kaosat-dev/polymer-threejs :)

Now another guy has continued with https://github.com/JoshGalvin/three-polymer -- there's a demo of a custom element ("spinner"), similar to the Door case discussed here earlier.

Had a brief chat with him, will return to this later, but it was fun to see the minimal WebGL web component example there, as that has been on our agenda.

~Toni

01:01 < galv> https://github.com/JoshGalvin/three-polymer added support for more basic geometry types
01:02 < galv> Going to do materials next
01:25 < antont> galv: hee - are you aware of these btw? http://www.w3.org/community/declarative3d/ , e.g. https://github.com/xml3d/xml3d.js
01:27 < galv> yeah, different level of abstraction
01:27 < antont> perhaps
01:28 < galv> I expect people to wrap up their game objects
01:28 < galv> aka "spinner"
01:28 < galv> (index.html)
01:29 < antont> we've been also planning to enable saying things like <door> if that's what you mean
01:30 < antont> right, seems like the same idea
01:31 < antont> very cool to see, gotta check the codes etc later .. but sleep now, laters
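(For illustration only: a minimal sketch of what such a three.js-backed custom element could look like with the 2013-era Polymer API. The element name, attribute and internals here are guesses for discussion, not code from either repository.)

  <polymer-element name="three-spinner" attributes="speed">
    <template>
      <canvas id="view" width="300" height="300"></canvas>
    </template>
    <script>
      Polymer('three-spinner', {
        speed: 0.01,
        ready: function () {
          // build a tiny three.js scene that renders into the element's own canvas
          this.scene = new THREE.Scene();
          this.camera = new THREE.PerspectiveCamera(45, 1, 0.1, 100);
          this.camera.position.z = 5;
          this.renderer = new THREE.WebGLRenderer({ canvas: this.$.view });
          this.mesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1),
                                     new THREE.MeshNormalMaterial());
          this.scene.add(this.mesh);
          this.tick = this.tick.bind(this);
          this.tick();
        },
        tick: function () {
          // spin and re-render every frame; 'speed' comes straight from the markup attribute
          this.mesh.rotation.y += Number(this.speed);
          this.renderer.render(this.scene, this.camera);
          requestAnimationFrame(this.tick);
        }
      });
    </script>
  </polymer-element>

  <!-- and then in the page, used like any other tag: -->
  <three-spinner speed="0.02"></three-spinner>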
From Philipp.Slusallek at dfki.de  Sat Nov  2 08:09:57 2013
From: Philipp.Slusallek at dfki.de (Philipp Slusallek)
Date: Sat, 02 Nov 2013 08:09:57 +0100
Subject: [Fiware-miwi] three.js WebComponents
In-Reply-To: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net>
References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net>
Message-ID: <5274A545.3090509@dfki.de>

Hi Toni,

Nice stuff. From my perspective there are two ways to look at this work: one is to provide high-level UI elements on top of a three.js implementation, and the other is the start of creating a declarative layer on top of the three.js renderer. This seems more along the first line, but both should be similarly interesting to us.

It's great to see that other people are coming up with similar ideas now. It would be good to get the message about our XML3D design and implementation out to these people. That way we could improve what we already have instead of reinventing the wheel.

It would be good if you can point people also to our papers from this year (http://graphics.cg.uni-saarland.de/publications/). They explain a lot of the background of why we have chosen things to work the way they work. More specifically:

-- The "xml3d.js" paper explains a lot about the design of XML3D and its implementation (https://graphics.cg.uni-saarland.de/2013/xml3djs-architecture-of-a-polyfill-implementation-of-xml3d/).
-- The "Declarative image processing" paper explains the advantages one gets from exposing processing elements to the DOM instead of implementing them only in some JS libraries (https://graphics.cg.uni-saarland.de/2013/declarative-ar-and-image-processing-on-the-web-with-xflow/).
-- The 2012 paper on "Xflow" shows this usage for animation (https://graphics.cg.uni-saarland.de/2012/xflow-declarative-data-processing-for-the-web/).

Getting into a constructive discussion with some of these three.js people would be a good thing. I tried to find an email address for the polymer-threejs person but could not find any. Feel free to forward this email to him (and maybe others). I would love to get their feedback and engage in discussions.

Best,

Philipp

-- 
-------------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern

Geschäftsführung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes

Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
---------------------------------------------------------------------------

From toni at playsign.net  Sat Nov  2 09:27:18 2013
From: toni at playsign.net (Toni Alatalo)
Date: Sat, 2 Nov 2013 10:27:18 +0200
Subject: [Fiware-miwi] three.js WebComponents
In-Reply-To: <5274A545.3090509@dfki.de>
References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net> <5274A545.3090509@dfki.de>
Message-ID: <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net>

On 02 Nov 2013, at 09:09, Philipp Slusallek wrote:
> Nice stuff. From my perspective there are two ways to look at this work: one is to provide high-level UI elements on top of a three.js implementation, and the other is the start of creating a declarative layer on top of the three.js renderer. This seems more along the first line, but both should be similarly interesting to us.

Indeed.

> It's great to see that other people are coming up with similar ideas now. It would be good to get the message about our XML3D design and implementation out to these people. That way we could improve what we already have instead of reinventing the wheel.

That was my immediate first thought as well: it seemed like people have started to reinvent declarative 3D for the web from scratch. That's why I asked whether they knew about the existing work --
I understood that this Josh Galvin person (don't know him from before), who made the Spinner demo, did (am not sure).

Thanks for the views and pointers, I'll keep an eye open for talks about this (actually just joined the #three.js IRC channel on freenode yesterday, haven't really been involved in their community before -- Tapani from us has been hanging out there though). They seem to communicate mostly in the GitHub issue tracker and pull requests (which I think is a great way).

I also did not find an e-mail address for the polymer-threejs person, but kaosat.net is his personal site and apparently he made the original announcement of the declarative three.js thing in August on Google+, so I figure e.g. replying there would be one way to comment: https://plus.google.com/112899217323877236232/posts/bUW1hrwHcAW .. I can do that on Monday.

BTW it seems that this guy is into hardware and CAD and all sorts of things, and declarative 3D XML is just a side thing for fun, perhaps related to his work on some CAD thing -- it is not like he'd be pursuing a career or a product or anything out of it.

It seems like a straightforward mapping of the three.js API to XML elements: what I struggle to understand now is whether that's a good abstraction level and how it corresponds to XML3D's vocabulary.

~Toni
From Philipp.Slusallek at dfki.de  Sat Nov  2 12:21:41 2013
From: Philipp.Slusallek at dfki.de (Philipp Slusallek)
Date: Sat, 02 Nov 2013 12:21:41 +0100
Subject: [Fiware-miwi] three.js WebComponents
In-Reply-To: <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net>
References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net> <5274A545.3090509@dfki.de> <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net>
Message-ID: <5274E045.2020106@dfki.de>

Hi Toni, all,

I have now looked at the video on the main Polymer page (http://www.polymer-project.org/), which is actually very nicely done. They make a very good point about why it is advantageous to put things in the DOM compared to pure JS applications (or even Angular, which already uses the DOM). They highlight that with WebComponents (Polymer is a polyfill implementation of them) this becomes even easier and creates an object-oriented aspect for HTML.

BTW, this aspect is exactly what we were aiming at when we suggested the use of WebComponents for the 2D-UI Epic in the objectives of the FI-WARE Open Call, and I think we should push even more in this direction, similar to what I wrote in response to Jonne's email earlier.

Regarding the mapping: we already have the mapping of 3D to the DOM (XML3D) as well as many modules that build on top of that mapping (Xflow with animation, image processing, and AR; portable materials, etc.). I see no reason why we should throw this out just because there is a slightly different way of doing it.

I would even speculate that if we tried to offer similar functionality, in the end we would end up with something that would pretty much resemble XML3D, maybe with a slightly different syntax. There are usually not many ways to do the same thing in a good way.

What I would support 100%, however, is that we try to use Web*Components* to implement the *ECA components* (and possibly entities), essentially for the same reasons explained in the video. That makes a lot of sense in the Web context and is fully compatible with the DOM and all the DOM-based frameworks that can then be used as well on this data.

I have been saying for a long time that we have had libraries in C/C++.
Just porting them to JS to run in the Web does not give us any advantages at all (likely only disadvantages like performance and an inferior language). It is only if we embrace the key elements of the Web -- and the runtime DOM arguably is the core of it all -- that we can tap into all the benefits of the Web.

And THE key advantage of the DOM is that you get *way more* than the sum of the pieces when putting things together. Instead of making sure that one library can talk to all the others (which gets you into an N^2 complexity problem), each component only needs to work with the DOM (constant effort per component, and you have to deal with one data structure anyway, so it is essentially no overhead). Then you get the benefit from all other components *automatically and completely for free* (jquery and all the other tools "just work" also with XML3D, and we should be able to use XML3D within Polymer as well).

Note that none of the Web technologies discussed here (WebComponents, Polymer, Angular, etc.) would work at all if they did not use the DOM at their core. Angular, for example, depends on the DOM and just allows a nicer binding of custom functionality (a controller implemented in JS) to the DOM.

This all applies exactly the same way to 3D data -- at least logically. However, on the implementation side 3D is special because you have to take care of large data structures, typed arrays, and so on. This is what the main work in XML3D was all about. Why should someone invest all the effort to do it again for likely very similar results in the end?

We can talk about using three.js as a renderer, though, but I would not touch the 3D DOM binding and data model that we already have in XML3D (unless someone comes up with very good reasons to do so).

Best,

Philipp
From jonne at adminotech.com  Sat Nov  2 17:35:24 2013
From: jonne at adminotech.com (Jonne Nauha)
Date: Sat, 2 Nov 2013 18:35:24 +0200
Subject: [Fiware-miwi] three.js WebComponents
In-Reply-To: <5274E045.2020106@dfki.de>
References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net> <5274A545.3090509@dfki.de> <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net> <5274E045.2020106@dfki.de>

Yeah, WebComponents are nice. I already implemented quick tests for Tundra's Entity and IComponent (the common interface for all components; WebComponents has inheritance, so it plays nicely into this) some 3-4 months ago in our WebRocket web client. At that point I left it alone: I wanted to have the JS code that implements the component in a .js file, not inside an .html Polymer template. Currently I'm using requirejs for the whole repo, as the amount of code and dependency tracking started to get out of hand, so I wanted a modular system where I can tell the system what each module needs as a dependency. This has been great, but it will make writing component implementations in a WebComponent html template a bit trickier, as that would expect everything to be in global scope, and requirejs effectively removes everything from global scope.

I also have a simple DOM integration plugin for WebRocket. If you turn it on (configurable when you construct the client instance), it will mirror the whole scene/entity/component/attribute chain into the DOM as my own custom nodes. This would be trivial to change to create XML3D nodes instead, but I can't test that out before everything on the asset side that has been discussed is complete, because nothing would render if I don't provide the Ogre asset loaders for XML3D and it doesn't know how to use DDS textures. What I would prefer even more is to just pass you the geometry as data, because I have my own fully working AssetAPI to fetch the assets, and I have my own loaders for each asset type I support. It would be somewhat duplicate work to re-implement them as XML3D loaders when I already have the ready typed GL arrays to give to you right now.

For my current DOM "mirroring" plugin, if you manipulate the attributes in the DOM they won't sync back to the in-memory JavaScript attributes, so currently it is read-only. I haven't really had the need to implement the sync in the other direction, as we are perfectly happy living in JS land and using the WebRocket SDK API to write our application logic.
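(For illustration, a rough sketch of what such a one-way, read-only mirroring pass could look like. The element names and the entity/component object shapes are made up for this example; this is not the actual WebRocket code.)

  // Mirror one entity and its components into the document as custom elements.
  // Writes go one way only: scene -> DOM; editing the DOM does not touch the scene.
  function mirrorEntity(entity, parentNode) {
    var entityEl = document.createElement("webtundra-entity");    // hypothetical tag name
    entityEl.setAttribute("id", String(entity.id));
    entity.components.forEach(function (comp) {
      var compEl = document.createElement("webtundra-component"); // hypothetical tag name
      compEl.setAttribute("type", comp.typeName);
      comp.attributes.forEach(function (attr) {
        var attrEl = document.createElement("webtundra-attribute");
        attrEl.setAttribute("name", attr.name);
        attrEl.setAttribute("value", String(attr.value));         // serialized, read-only copy
        compEl.appendChild(attrEl);
      });
      entityEl.appendChild(compEl);
    });
    parentNode.appendChild(entityEl);
  }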
The declarative side is less important for Meshmoon WebRocket, as the scenes always come from our Tundra-based servers and are not declared on the HTML page itself. For FIWARE the declarative and DOM side is more important and more in focus, of course, so I understand the direction of the talks and why XML3D is a valuable asset.

Instantiating Scene, Entity and each of the Component implementations from a WebComponent template would solve this problem, as it is designed to encapsulate the JS implementation and it has a great way to get callbacks when any attribute in the DOM node is manipulated. It can also expose DOM events and fire them, not to mention normal functions, e.g. $("", entityNode).setPosition(10,20,30); so you don't have to use the raw DOM API to set string attributes with arbitrary values. These functions would have type checking and not let you put garbage into the attributes. I believe the attribute-changed handler can also effectively abort the change if you are trying to put a boolean into a Vector3 attribute. This all is great and would just require us WebRocket devs to port our current EC implementations to WebComponent templates.

But here is where I get confused. So you would be fine with us implementing Tundra components as WebComponent templates (as said, this would be fairly trivial). How would XML3D then play into this situation? Would it now monitor our DOM elements instead of the ones you specify (afaik e.g. the <xml3d> element)? Can you open up this a bit?

UI and WebComponents

WebComponents are also being looked at in our 2D-UI GE, but to be frank there is very little to do there. The system is incredibly simple to use. You include polymer.js in your page, add tags to your Polymer-style .html templates, and then you just use the tags declared in them in the markup, or you create the DOM nodes at runtime from JS with your preferred method. I'm having a bit of a hard time figuring out what we should focus on in the 2D part for WebComponents. I mean, they are there and you can use them; you don't need any kind of special APIs or supporting code from our side for them to work. The only thing I can think of is implementing some widgets and maybe 3D-related templates for anyone using WebTundra.

Best regards,
Jonne Nauha
Meshmoon developer at Adminotech Ltd.
www.meshmoon.com
From Philipp.Slusallek at dfki.de  Sun Nov  3 08:12:27 2013
From: Philipp.Slusallek at dfki.de (Philipp Slusallek)
Date: Sun, 03 Nov 2013 08:12:27 +0100
Subject: [Fiware-miwi] three.js WebComponents
References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net> <5274A545.3090509@dfki.de> <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net> <5274E045.2020106@dfki.de>
Message-ID: <5275F75B.6010405@dfki.de>

Hi Jonne, all,

Thanks for the information. It seems you have done some nice work there already, and we can certainly build on your experience.

I think it would be excellent if we could create a WebComponent mapping of your ECA model for the Web and use XML3D for the 3D scene and interaction part, as I mentioned in my earlier email. The ability to sync in both directions would be essential for this, however.

From my perspective (Kristian etc., please correct me where necessary), your components would live within an <xml3d> tag and would use xml3d elements for the 3D geometry. They would also get the XML3D DOM events through these XML3D elements and could react appropriately. We might have to change the XML3D implementation a bit in order to deal with WebComponents, but this seems doable to me (though I have not looked into the issue in detail).
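(As a rough illustration of the nesting being proposed here: the tundra-* element names below are hypothetical placeholders, not existing tags; only xml3d, defs, transform, group and mesh are actual XML3D vocabulary.)

  <xml3d>
    <defs>
      <transform id="doorTransform" translation="0 0 0"></transform>
    </defs>

    <!-- hypothetical WebComponents wrapping one ECA entity and its components -->
    <tundra-entity name="door">
      <tundra-placeable transform="#doorTransform"></tundra-placeable>
      <tundra-mesh ref="door.xml#doorMesh"></tundra-mesh>
    </tundra-entity>

    <!-- which would expand into / drive ordinary xml3d elements such as: -->
    <group transform="#doorTransform">
      <mesh src="door.xml#doorMesh"></mesh>
    </group>
  </xml3d>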
Adding additional function interfaces for the components (like your setPosition()) is perfectly fine from my point of view. We do the same for the XML3D elements to deal with 3D data in typed arrays and such. We also have some policies regarding changes to the string attributes in parallel to using the JS API. Kristian/Felix can go into the details about this, and we should actually document them well in the documentation.

Regarding WebComponents in the 2D UI, it is great that we now have such a good tool (Polymer). This takes a lot of effort away, which we can put into other things. I would still like to see that we test, and where necessary adapt, Polymer to also be usable with XML3D, if that should not be the case yet. Also, there might be convenient components that would make 3D app programming with XML3D and WebComponents much easier. For instance, having the XML3D-based components automatically define the set of variables that need to be synchronized, registering them with the networking module (if available), and making sure this all gets done in the background would be excellent. Also, providing a 3D avatar WebComponent with embedded animations and such would be ideal. It would be excellent if we could provide Web developers with a whole set of predefined and modular WebComponents that they could just use to create 3D worlds with behavior.

The one really big issue where I still see a lot of work on finding new solutions (not just implementation but real conceptual work) is dealing with the hugely varying set of input devices. One person has mouse and keyboard, the next has only a multi-touch surface, the next is using a Kinect, while yet another is using a Wii Remote, etc. A single Web app will have to work with all of those situations simultaneously (on different devices, though). Letting Web applications and their developers handle this in their apps will simply not work and will be frustrating for developers and users.

We have started to work on a concept for this, but it has not gone very far yet. The coarse idea is that applications use virtual interaction elements to say what kind of input (metaphor) they expect (e.g. selection of an object, moving an object). These are modules that can be configured and instantiated. They can also be nested within/on top of each other (like non-visible dialog boxes) dynamically, to form a hierarchical finite state machine of interactions. These interaction elements would consist of the interaction logic as well as (optionally) some interaction gadgets that visually represent the possible interactions (like handles to move and rotate an object). The key thing is that these interaction elements would be independent of the application, anyone can use them, and we could develop a whole library of them to cover the wide range of input metaphors that applications use (but applications can extend them with their own if needed).

There are two extensions to this approach: (i) instead of having a single interaction element for each interaction metaphor, we could have interfaces for them and develop entire user interface libraries that each have a different look and feel but still implement the same basic interaction metaphor; (ii) one could also design them such that users can change the mapping of the devices used for certain actions. Think of it as a reconfigurable gamer mouse, where the user can assign different actions to certain buttons, gestures, or such.
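(Purely as a sketch, such a user-editable mapping from interaction metaphors to device triggers might be as simple as a small table; none of these names exist in any current API.)

  // Hypothetical binding table consulted by the interaction elements;
  // the application only asks for a metaphor, never for a concrete device.
  var interactionBindings = {
    "select-object": [ { device: "mouse",    trigger: "left-button" },
                       { device: "touch",    trigger: "tap" } ],
    "move-object":   [ { device: "keyboard", trigger: "arrow-keys" },
                       { device: "kinect",   trigger: "hand-drag" } ]
  };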
So a user can say: I will use the mouse to select objects but move them with the cursor keys (for a stupidly simple example). These would be optional extensions for later, though.

It would be great to work on those aspects together if this is interesting to any of you. You probably have quite some experience in terms of good UI for 3D games/apps. Identifying, generalizing, and implementing them as a reusable set of components would be a great step forward for 3D apps on the Web (or elsewhere). This would make great papers for conferences as well! If you want to go that way, we should probably set up a special session for discussing this in more detail.

Best,

Philipp
From toni at playsign.net  Sun Nov  3 08:16:54 2013
From: toni at playsign.net (Toni Alatalo)
Date: Sun, 3 Nov 2013 09:16:54 +0200
Subject: [Fiware-miwi] three.js WebComponents
References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net> <5274A545.3090509@dfki.de> <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net> <5274E045.2020106@dfki.de>

On 02 Nov 2013, at 18:35, Jonne Nauha wrote:
> But here is where I get confused. So you would be fine with us implementing Tundra components as WebComponent templates (as said, this would be fairly trivial). How would XML3D then play into this situation? Would it now monitor our DOM elements instead of the ones you specify (afaik e.g. the <xml3d> element)? Can you open up this a bit?

My understanding here is that we'd agree on one entity-component model, a set of basic components, and the extensibility mechanism for adding new components. This is why we worked on finding a reX EC <-> xml3d elements mapping earlier: to see whether we actually already use the same model.

So one option is to use the xml3d schema for having the Tundra components in the DOM. It is much more readable, so better for human XML authoring, viewing in the debugger etc. We can of course still support TXML loading too.

Now I need to read Philipp's post that just came in to see what goes on there..
> UI and WebComponents
>
> WebComponents are also being looked at in our 2D-UI GE, but to be frank there is very little to do there. The system is incredibly simple to use. You include polymer.js to your page, add tags to your Polymer style .html templates, then you just use the tags declared in them in the markup or you create the DOM nodes during runtime from JS with your preferred method. I'm having a bit of a hard time figuring out what we should focus on in the 2D part for WebComponents. I mean they are there and you can use them, you don't need any kind of special APIs or supporting code from our side for them to work. Only thing I can think of is implementing some widgets and maybe 3D related templates for anyone when they use WebTundra.
>
> Best regards,
> Jonne Nauha
> Meshmoon developer at Adminotech Ltd.
> www.meshmoon.com
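To make the workflow quoted above concrete, a rough sketch assuming the 2013-era Polymer polyfill Jonne refers to; the element name, file names and attribute are made up, and loading details varied between Polymer releases.

    <!-- index.html -->
    <script src="polymer.js"></script>
    <link rel="import" href="chat-widget.html">
    <chat-widget channel="lobby"></chat-widget>

    <!-- chat-widget.html: the Polymer-style template the page imports -->
    <polymer-element name="chat-widget" attributes="channel">
      <template>
        <div id="log"></div>
        <input id="msg" type="text">
      </template>
      <script>
        Polymer('chat-widget', {
          channel: 'lobby',
          ready: function () { /* hook up app logic / networking here */ }
        });
      </script>
    </polymer-element>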
> On Sat, Nov 2, 2013 at 1:21 PM, Philipp Slusallek wrote:
>
> Hi Toni, all,
>
> I have now looked at the video on the main Polymer page (http://www.polymer-project.org/), which is actually very nicely done. They make a very good point why it's advantageous to put things in the DOM compared to pure JS applications (or even Angular, which already uses the DOM). They highlight that with WebComponents (Polymer is a polyfill implementation of them) this becomes now even easier and creates an object-oriented aspect for HTML.
>
> BTW, this aspect is exactly what we were aiming at when we suggested the use of WebComponents for the 2D-UI Epic in the objectives of the FI-WARE Open Call, and I think we should push even more in this direction, similar to what I wrote in response to Jonne's email earlier.
>
> Regarding the mapping: We already have the mapping of 3D to the DOM (XML3D) as well as many modules that build on top of that mapping (Xflow with animation, image processing, and AR; portable materials, etc.). I see no reason why we should throw this out just because there is a slightly different way of doing it.
>
> I would even speculate that if we would try to offer similar functionality, at the end we would end up with something that would pretty much resemble XML3D, maybe with a slightly different syntax. There are usually not many ways to do the same thing in a good way.
>
> What I would support 100%, however, is that we try to use Web*Components* to implement the *ECA components* (and possibly entities). Essentially for the same reasons explained in the video. That makes a lot of sense in the Web context and is fully compatible with the DOM and all the DOM-based frameworks that can then be used as well on this data.
>
> I have been saying for a long time that we have had libraries in C/C++. Just porting them to JS to run in the Web does not give us any advantages at all (likely only disadvantages like performance and an inferior language). It's only if we embrace the key elements of the Web -- and the runtime DOM arguably is the core of it all -- that we can tap into all the benefits of the Web.
>
> And THE key advantage of the DOM is that you get *way more* than the sum of the pieces when putting things together. Instead of making sure that one library can talk to all the others (which gets you into an N^2 complexity problem), each component only needs to work with the DOM (constant effort per component, and you have to deal with one data structure anyway, so it's essentially no overhead). Then, you get the benefit from all other components *automatically and completely for free* (jquery and all the other tools "just work" also with XML3D, and we should be able to use XML3D within Polymer also).
>
> Note that none of the Web technologies discussed here (WebComponents, Polymer, Angular, etc.) would work at all if they would not use the DOM at their core. Angular for example depends on the DOM and just allows a nicer binding of custom functionality (controller implemented in JS) to the DOM.
>
> This all applies exactly the same way also to 3D data -- at least logically. However, on the implementation side 3D is special because you have to take care of large data structures, typed arrays, and so on. This is what the main work in XML3D was all about. Why should someone invest all the effort to do it again for likely very similar results in the end?
>
> We can talk about using three.js as a renderer, though, but I would not touch the 3D DOM binding and data model that we already have in XML3D (unless someone comes with very good reasons to do so).
>
> Best,
>
> Philipp
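A tiny illustration of the "generic DOM tools just work" point quoted above, assuming an xml3d.js scene is present on the page; the selector and attribute usage are examples, not a prescribed API.

    // standard DOM and jQuery calls operating on xml3d elements like on any other element
    var meshes = document.querySelectorAll("xml3d mesh");
    console.log("scene contains", meshes.length, "meshes");
    // jQuery iteration over the same nodes; reads the ordinary 'src' attribute
    $(meshes).each(function () { console.log(this.getAttribute("src")); });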
From toni at playsign.net Sun Nov 3 08:23:32 2013 From: toni at playsign.net (Toni Alatalo) Date: Sun, 3 Nov 2013 09:23:32 +0200 Subject: [Fiware-miwi] three.js WebComponents In-Reply-To: <5275F75B.6010405@dfki.de> References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net> <5274A545.3090509@dfki.de> <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net> <5274E045.2020106@dfki.de> <5275F75B.6010405@dfki.de> Message-ID:

Again a brief note about a single point:

On 03 Nov 2013, at 09:12, Philipp Slusallek wrote:
> Also there might be convenient components that would make 3D app programming with XML3D and WebComponents much easier. So for instance having the XML3D-based components automatically define the set of variables that need to be synchronized, registering them with the networking module (if available), and making sure this all gets done in the background would be excellent. Also providing a 3D avatar WebComponent with embedded animations and such would be ideal. It would be excellent if we could provide to Web developers a whole set of predefined and modular WebComponents that they could just use to create 3D worlds with behaviour.

This is what realXtend has been doing for several years – just outside the Web – and the plan is to continue doing the same on the Web as well :)

That's how it works in the native Tundra, and the plan for this fi-ware work to produce the web client (which also works standalone, like the native one does too) is to implement the same. That's also how it works in Chiru-WebClient and WebRocket, i.e. the pre-existing WebTundras.

~Toni

> The one really big issue where I still see a lot of work on finding new solutions (not just implementation but real conceptual work) is dealing with the hugely varying set of input devices. One person has mouse and keyboard, the next has only a multi-touch surface, the next is using a Kinect, while yet another is using a Wii Remote, etc. A single Web app will have to work with all of those situations simultaneously (on different devices, though). Letting Web applications and their developers handle this in their apps will simply not work and will be frustrating for developers and users.
>
> We have started to work on a concept for this but it has not gone very far yet. The coarse idea is that applications use virtual interaction elements to say what kind of input (metaphor) they expect (e.g. selection of an object, moving an object). These are modules that can be configured and instantiated. They can also be nested within/on top of each other (like non-visible dialog boxes) dynamically to have a hierarchical finite state machine of interactions.
>
> These interaction elements would consist of the interaction logic as well as (optionally) some interaction gadgets that visually represent the possible interactions (like handles to move and rotate an object). The key thing is that these interaction elements would be independent of the application, anyone can use them, and we could develop a whole library of them to cover the wide range of input metaphors that applications use (but applications can extend them with their own if needed).
>
> There are two extensions to this approach: (i) instead of having a single interaction element for each interaction metaphor we could have interfaces for them and develop entire user interface libraries that each have a different look and feel but still implement the same basic interaction metaphor. (ii) One could also design them such that users can change the mapping of the devices used for certain actions. Think of it as a reconfigurable gamer mouse, where the user can assign different actions to certain buttons, gestures, or such. So a user can say, I will use the mouse to select objects but move them with the cursor keys (for a stupid simple example). These would be optional extensions for later, though.
>
> It would be great to work on those aspects together if this would be interesting to any of you. You probably have quite some experience in terms of good UI for 3D games/apps. Identifying, generalizing, and implementing them as a reusable set of components would be a great step forward for 3D apps on the Web (or elsewhere). This would make great papers for conferences as well!
>
> If you want to go that way, we should probably set up a special session for discussing this in more detail.
>
> Best,
>
> Philipp
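The interaction-element idea in the quoted text could look roughly like the following; this is purely illustrative, every name in it is made up, and it is not an existing DFKI or Tundra API.

    // Illustrative only: an "interaction element" bundles interaction logic (and
    // optionally a visual gadget) and is driven by whatever input source a device offers.
    function MoveObjectInteraction(target, gadget) {
        this.target = target;          // the entity being manipulated
        this.gadget = gadget || null;  // optional visual handles
        this.children = [];            // nested elements -> hierarchical state machine
    }
    MoveObjectInteraction.prototype.activate = function (inputSource) {
        var target = this.target;
        // inputSource hides mouse / touch / Kinect / Wii Remote behind one event API
        inputSource.on('drag', function (delta) {
            target.x += delta.x; target.y += delta.y; target.z += delta.z;
        });
    };

    // trivial stand-ins just to show the wiring
    var input = { handlers: {}, on: function (ev, cb) { this.handlers[ev] = cb; },
                  emit: function (ev, data) { this.handlers[ev](data); } };
    var door = { x: 0, y: 0, z: 0 };
    new MoveObjectInteraction(door).activate(input);
    input.emit('drag', { x: 1, y: 0, z: 0 });   // door.x is now 1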
> Am 02.11.2013 17:35, schrieb Jonne Nauha:
>> Yeah WebComponents are nice. I've already implemented quick tests for Tundra's Entity and IComponent (common interface for all components, WebComponents has inheritance so it plays nicely into this) like 3-4 months ago in our WebRocket web client. At that point I left it alone, I wanted to have my JS code that implements the component in a .js file, not inside a .html polymer template. Currently I'm using requirejs for the whole repo as the amount of code and dependency tracking started to get out of hand, so I wanted a modular system where I can tell the system what each module needs as a dependency. This has been great but will make writing comp implementations in a WebComponent html template a bit trickier, as it would expect everything to be in global scope, and requirejs effectively removes everything from global scope.
>>
>> I have also a simple DOM integration plugin for WebRocket. If you want to turn it on (configurable when you construct the client instance) it will mirror the whole scene/entity/component/attribute chain into the DOM as my own ......... nodes. This would be trivial to change to create XML3D nodes, but I can't test that out before everything on the asset side that has been discussed is complete. Because nothing would simply render if I won't provide the Ogre asset loaders for XML3D and it knowing how to use DDS textures. What I would prefer even more is to just pass you the geometry as data, because I have my own fully working AssetAPI to fetch the assets, and I have my own loaders for each asset type I support. It would kind of be duplicate work to re-implement them as XML3D loaders, when I already have the ready typed gl arrays to give to you right now.
>>
>> For my current DOM "mirroring" plugin, if you manipulate the attributes in the DOM they won't sync back to the in-memory JavaScript attributes. So currently it's read-only.
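A rough sketch of the one-way DOM "mirroring" described above; the element names (x-entity, x-component) and the entity shape are placeholders, not WebRocket's actual ones.

    // One-way mirror: JS scene state -> DOM nodes; edits to the DOM are not written back.
    function mirrorEntity(entity, sceneRoot) {
        var el = document.createElement('x-entity');          // placeholder element name
        el.setAttribute('id', String(entity.id));
        entity.components.forEach(function (comp) {
            var cel = document.createElement('x-component');  // placeholder element name
            cel.setAttribute('type', comp.typeName);
            Object.keys(comp.attributes).forEach(function (name) {
                cel.setAttribute(name.toLowerCase(), String(comp.attributes[name]));
            });
            el.appendChild(cel);
        });
        sceneRoot.appendChild(el);
        return el;
    }

    // usage with a minimal stand-in entity
    mirrorEntity({ id: 1, components: [ { typeName: 'Placeable',
                   attributes: { Position: '10 0 2' } } ] }, document.body);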
>> I haven't really had the need to implement the sync the other way as we are perfectly happy living in JS land and using the WebRocket SDK API to write our application logic. The declarative side is less important for Meshmoon WebRocket as the scenes are always coming from our Tundra based servers and not declared on the html page itself. For FIWARE the declarative and DOM side is more important and more in focus of course, so I understand the direction of the talks and why XML3D is a valuable asset.
>>
>> Instantiating Scene, Entity and each of the Component implementations from a WebComponent template would solve this problem, as it's designed to encapsulate the JS implementation and it has a great way to get callbacks when any attribute in the DOM node is manipulated. It can also expose DOM events and fire them, not to mention normal functions, eg. $("", entityNode).setPosition(10,20,30); so you don't have to use the raw DOM API to set string attributes by hand. These functions would have type checking and not let you put garbage into the attributes. I believe the attribute changed handler can also effectively abort the change if you are trying to put a boolean into a Vector3 attribute. This all is great and would just require us WebRocket devs to port our current EC implementations to WebComponent templates.
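The typed-setter and validation idea mentioned above could look roughly like this; the component and attribute names are illustrative, not the actual WebRocket or Polymer API.

    // Sketch only: the component exposes setPosition() instead of asking callers to
    // poke string attributes, and validates the input before touching its DOM node.
    function PlaceableComponent(domNode) { this.node = domNode; }
    PlaceableComponent.prototype.setPosition = function (x, y, z) {
        if (![x, y, z].every(function (v) { return typeof v === 'number' && isFinite(v); })) {
            throw new TypeError('setPosition expects three finite numbers');
        }
        this.node.setAttribute('position', x + ' ' + y + ' ' + z);
    };

    // usage
    var placeable = new PlaceableComponent(document.createElement('x-placeable'));
    placeable.setPosition(10, 20, 30);       // ok
    // placeable.setPosition(true, 0, 0);    // would throw instead of writing garbage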
From Philipp.Slusallek at dfki.de Sun Nov 3 09:24:35 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sun, 03 Nov 2013 09:24:35 +0100 Subject: [Fiware-miwi] three.js WebComponents In-Reply-To: References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net> <5274A545.3090509@dfki.de> <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net> <5274E045.2020106@dfki.de> <5275F75B.6010405@dfki.de> Message-ID: <52760843.2080200@dfki.de>

Hi,

Sounds really good. I would suggest that we start defining the WebComponent components and their interfaces soon (of course, based on what you already use, but with a quick process to discuss if there could be improvements) and then go from there. In parallel we can check that XML3D works well with WebComponents and Polymer. I believe we have talked about that a bit already but I am not sure we have a specification for the WebComponents yet.

Torsten, can you work with whoever feels responsible on the Finnish side to come up with a common specification (somewhere on the private pages for now)?

Sounds like we are all working in the same direction -- which is great! I can't wait to see the first avatars walking in Oulu city (or some other place) in any Web browser :-).

Best,

Philipp
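Purely as an illustration of the kind of thing such a common specification would pin down; none of these names are agreed anywhere yet.

    // Illustrative only: a possible shape for the common component interface.
    var IComponent = {
        typeName   : '',    // e.g. 'Placeable', 'Mesh'
        attributes : {},    // name -> { type: 'vec3' | 'string' | ..., value: ... }
        // called when the backing DOM element's attribute changes (DOM <-> JS sync hook)
        attributeChanged : function (name, oldValue, newValue) {},
        // which attributes the synchronization / KIARA layer should replicate
        networkSync : []
    };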
From Philipp.Slusallek at dfki.de Sun Nov 3 10:14:13 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sun, 03 Nov 2013 10:14:13 +0100 Subject: [Fiware-miwi] 3D UI Usage from other GEs / epics / apps In-Reply-To: References: Message-ID: <527613E5.5020701@dfki.de>

Hi,

Just catching up on this thread.

Am 31.10.2013 08:16, schrieb Toni Alatalo:
> I think the renderer API is needed for these kind of things:
>
> 1. Custom drawing, either in a component's implementation or just with direct drawing commands from application code. For example a component that is a procedural tree – or volumetric terrain. Or some custom code to draw aiming helpers in a shooting game or so, for example some kind of curves.

While I think that it might be necessary to have an escape mechanism for drawing stuff directly, we should see that we can cover most of these features within the declarative part as well. This will allow us to use other renderers as well (e.g. with XML3D and our shade.js everything also works with a real-time ray tracer that we are developing again in other projects and which seems to be coming to mobile devices in HW, e.g. via Imagination and Samsung).

> 2. Complex scene queries. Typically they might be better via the scene though. But perhaps something that goes really deep in the renderer, for example to query which areas are in shadow or so? Or visibility checks?

Would be good to know what is needed here. We obviously have ray casting queries, bboxes, and such, but others could be added as well (a small ray-cast sketch follows below).

> 3. Things that need to hook into the rendering pipeline – perhaps for things like Render-To-Texture or post-process compositing. Perhaps how XFlow integrates with rendering?

Kristian and Felix are working on a general mechanism for expressing dependencies within the rendering process (e.g. needing to generate a texture before using it). It seems to still be in flux right now.
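A small sketch of the kind of ray-cast scene query mentioned above, assuming a three.js-based renderer with THREE available as a global (as in the demos discussed in this thread); the scene setup is a stand-in.

    // stand-in scene with a single box five units in front of the origin
    var scene = new THREE.Scene();
    var box = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshBasicMaterial());
    box.position.set(0, 0, -5);
    scene.add(box);

    // cast a ray from the origin down the negative z axis and collect intersections
    var origin = new THREE.Vector3(0, 0, 0);
    var direction = new THREE.Vector3(0, 0, -1).normalize();
    var raycaster = new THREE.Raycaster(origin, direction);
    var hits = raycaster.intersectObjects(scene.children, true);   // true = recursive
    if (hits.length > 0) {
        console.log('first hit at distance', hits[0].distance);
    }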
>> scene : Object, // API for accessing the Entity-Component-Attribute model.
>>                 // Implemented by ???

> Yes, the unclarity about the responsibility here is why I started the thread on 'the entity system' some weeks ago.
>
> I think the situation is not catastrophic as we already have 3 implementations of that: 2 in the 'WebTundras' (Chiru-WebClient and WebRocket) and also xml3d.js.
>
> Playsign can worry about this at least for now (say, the coming month). We are not writing a renderer from scratch as there are many good ones out there already, so we can spend resources on this too. Let's see whether we can just adopt one of those 3 systems for MIWI – and hence probably as realXtend's future official WebTundra – or do we need to write a 4th from scratch for some reason.

As I wrote this morning, a WebComponent version based on XML3D would seem like a good way to address this. Jonne's existing work seems to go a long way here.

> Again, one particular question here is the 'IComponent': how do we define new components – aka. XML elements? As mentioned before, WebComponents can be related, so this is to be analysed. Adminotech is already using WebComponents in the 2D UI work so perhaps you could help with understanding this: would the way they have there work for defining reX components? Also Philipp's comments about KIARA, which has a way to define things, are related.

That would be a great step forward. As I wrote to Jonne before, a WebComponent could easily define the KIARA definition as well (did not mention KIARA by name though) and thus interface to the network behind the scenes without the user having to worry about that.

>> asset : Object, // Not strictly necessary for xml3d as it does asset requests for us, but for three.js this is pretty much needed.
>>                 // Implemented by ???

> I think this belongs to the 3D UI so falls in Playsign's responsibility. Again there are the existing implementations in WebTundras and xml3d.js – and obviously the browser does much of the work, but I think we still need to track dependencies in the loading and keep track of usages for releasing resources etc. (the reason why you have the asset system in WebRocket and the resource manager in xml3d.js).

I realized that there might be a misconception about memory management in the previous emails regarding this (with Kristian). Our resource manager is not leaking memory or anything like this when he was talking about not releasing resources. The point that Kristian was making is that we are simply not concerned about removing objects that are not close to us. Instead we are assuming that all references are supposed to be loaded and do fit into memory, but we do so asynchronously. This approach was simply not designed for the case that only a fraction of the scene fits into memory. While you can use the loading part of our resource manager, the delete part is not there and has to be added.

Thanks,

Philipp

> Thanks for the draft! Code speaks louder than words (that's why I wrote the txml (<)-> xml3d converter), and at least for me this kind of code-like API def was very clear and helpful to read :)
>
> ~Toni
>
>> ui : Object, // API to add/remove widgets correctly on top of the 3D rendering canvas element, window resize events etc.
>>              // Implemented by 2D/Input GE (Adminotech).
>>
>> input : Object // API to hook to input events occurring on top of the 3D scene.
>>                // Implemented by 2D/Input GE (Adminotech).
>> };
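Pulled together, the client facade drafted in the quoted snippets might look like the following; the member names come from the draft above, everything else is an assumption.

    // Illustrative only: a self-contained version of the facade drafted above.
    var client = {
        renderer : null,   // e.g. a three.js- or xml3d.js-backed renderer, set at startup
        scene    : {},     // Entity-Component-Attribute access (implementer still open)
        asset    : {},     // asset fetching / dependency tracking
        ui       : {},     // 2D widgets on top of the 3D canvas (2D-UI GE)
        input    : {}      // input events on the 3D scene (2D-UI / Input GE)
    };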
>> }; >> >> >> Best regards, >> Jonne Nauha >> Meshmoon developer at Adminotech Ltd. >> www.meshmoon.com >> >> >> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo > > wrote: >> >> Hi again, >> new angle here: calling devs *outside* the 3D UI GE: POIs, >> real-virtual interaction, interface designer, virtual characters, >> 3d capture, synchronization etc. >> I think we need to proceed rapidly with integration now and >> propose that one next step towards that is to analyze the >> interfaces between 3D UI and other GEs. This is because it seems >> to be a central part with which many others interface: that is >> evident in the old 'arch.png' where we analyzed GE/Epic >> interdependencies: is embedded in section 2 in the Winterthur arch >> discussion notes which hopefully works for everyone to see, >> https://docs.google.com/document/d/1Sr4rg44yGxK8jj6yBsayCwfitZTq5Cdyyb_xC25vhhE/edit >> I propose a process where we go through the usage patterns case by >> case. For example so that me & Erno visit the other devs to >> discuss it. I think a good goal for those sessions is to define >> and plan the implementation of first tests / minimal use cases >> where the other GEs are used together with 3D UI to show >> something. I'd like this first pass to happen quickly so that >> within 2 weeks from the planning the first case is implemented. So >> if we get to have the sessions within 2 weeks from now, in a month >> we'd have demos with all parts. >> Let's organize this so that those who think this applies to their >> work contact me with private email (to not spam the list), we meet >> and collect the notes to the wiki and inform this list about that. >> One question of particular interest to me here is: can the users >> of 3D UI do what they need well on the entity system level (for >> example just add and configure mesh components), or do they need >> deeper access to the 3d scene and rendering (spatial queries, >> somehow affect the rendering pipeline etc). With Tundra we have >> the Scene API and the (Ogre)World API(s) to support the latter, >> and also access to the renderer directly. OTOH the entity system >> level is renderer independent. >> Synchronization is a special case which requires good two-way >> integration with 3D UI. Luckily it's something that we and >> especially Lasse himself knows already from how it works in Tundra >> (and in WebTundras). Definitely to be discussed and planned now >> too of course. >> So please if you agree that this is a good process do raise hands >> and let's start working on it! We can discuss this in the weekly >> too if needed. >> Cheers, >> ~Toni >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. 
Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Sun Nov 3 10:19:56 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sun, 03 Nov 2013 10:19:56 +0100 Subject: [Fiware-miwi] DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps) In-Reply-To: <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> Message-ID: <5276153C.2020704@dfki.de> Hi, No, we would still have to maintain the scene representation in the DOM. However, we can use the specialized access functions (not the string-valued attributes) to access the functionality of an (XML3D) DOM node much more efficiently than having to parse strings. Torsten's example with the rotation is a good example of such a specialized interface of an (XML3D) DOM node. Best, Philipp Am 31.10.2013 10:42, schrieb Toni Alatalo: > On 31 Oct 2013, at 11:23, Torsten Spieldenner > > wrote: >> On top of the capabilities of the DOM API and the additional powers of >> sophisticated JavaScript libraries, XML3D introduces an API extension >> of its own to provide a convenient way to access the DOM elements as >> XML3D elements, for example retrieving translation as XML3DVec3 or >> rotation as XML3DRotation (for example, to retrieve the rotation part >> of an XML3D transformation, you can do this by using jQuery to query >> the transformation node from the DOM, and access the rotation there >> then: var r = $("#my_transformation").rotation). > > What confuses me here is: > > earlier it was concluded that 'the DOM is the UI', which I understood as meaning > how it works for people to > > a) author apps - e.g. declare that the oulu3d scene and reX avatar & chat > apps are used in my html, along with this nice Christmas themed thing I just > created (like txml is used in reX now) > > b) see and manipulate the state in the browser view-source & developer / > debugger DOM views (like the Scene Structure editor in Tundra) > > c) (something else that escaped me now) > > Anyhow the point being that intensive manipulations such as creating and > manipulating tens of thousands of entities are not done via it. This was > the response to our initial 'massive dom manipulation' perf test. > Manipulating transformations is a typical example where that happens - I > know that declarative ways can often be a good way to deal with e.g. > moving objects, like the PhysicsMotor in Tundra and I think what XFlow > (targets to) cover(s) too, but not always nor for everything, so I think > the point is still valid. > > So do you use a different API for heavy tasks and the DOM for other > things, or how does it go? > > ~Toni > >>> If we think that XML3D (or the DOM and the XML3D that acts on those manipulations) >>> is already this perfect API, I'm not sure what we are even trying to >>> accomplish here? If we are not building a nice to use 3D SDK, what's the >>> target here? >> I totally agree that we still need to build this easily programmable >> 3D SDK. But XML3D makes it very simple to maintain the 3D scene in the >> DOM according to the scene state of the application.
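As a concrete illustration of the two access paths being contrasted here, the following minimal sketch assumes an xml3d.js page containing a transform element with id "my_transformation"; the typed interfaces (XML3DVec3, XML3DRotation) are the ones named in the mails above, but the exact properties should be checked against the xml3d.js documentation.

// Sketch only - assumes a scene containing something like:
//   <transform id="my_transformation" translation="0 0 0" rotation="0 1 0 0"/>
var t = document.getElementById("my_transformation");

// Generic DOM path: string-valued attribute, which the implementation has to
// (re)parse on every change.
t.setAttribute("rotation", "0 1 0 1.57"); // axis x y z plus angle, as one string

// Specialized accessor path from Torsten's example (he fetched the node via
// jQuery; plain getElementById gives the same DOM node): typed objects, no parsing.
var r = t.rotation;    // XML3DRotation (axis + angle)
console.log(r.axis.x, r.axis.y, r.axis.z, r.angle);

var p = t.translation; // XML3DVec3
console.log(p.x, p.y, p.z);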
>> You may want to have a look at our example web client for our FiVES >> server (https://github.com/rryk/FiVES). Although I admit that the code >> needs some refactoring, the example of how entities are created shows >> this nicely : As soon as you create a new Entity object, the DOM >> representation of its scenegraph and its transformations are created >> automatically and maintained as View of the entity model. As >> developer, you only need to operate on the client application's API. >> This could be an example, of how an SDK could operate on the XML3D >> representation of the scene. >> >> >> ~ Torsten >> >>> On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < >>> Philipp.Slusallek at dfki.de> wrote: >>> >>>> Hi Jonne, all, >>>> >>>> I am not sure that applying the Tudra API in the Web context is really the >>>> right approach. One of the key differences is that we already have a >>>> central "scene" data structure and it already handles rendering and input >>>> (DOM events), and other aspects. Also an API oriented approach may not be >>>> the best option in this declarative context either (even though I >>>> understands that it feels more natural when coming from C++, I had the same >>>> issues). >>>> >>>> So let me be a bit more specific: >>>> >>>> -- Network: So, yes we need a network module. It's not something that >>>> "lives" in the DOM but rather watches it and sends updates to the server to >>>> achieve sync. >>>> >>>> -- Renderer: Why do we need an object here. Its part of the DOM model. The >>>> only aspect is that we may want to set renderer-specific parameters. We >>>> currently do so through the DOM element, which seems like a good >>>> approach. The issues to be discussed here is what would be the advantages >>>> of a three.js based renderer and implement it of really needed. >>>> >>>> -- Scene: This can be done in the DOM nicely and with WebComponents its >>>> even more elegant. The scene objects are simple part of the same DOM but >>>> only some of them get rendered. I am not even sure that we need here in >>>> addition to the DOM and suitable mappings for the components. >>>> >>>> -- Asset: As you say this is already built-into the XML3D DOM. I see it a >>>> bit like the network system in that it watches missing resources in the DOM >>>> (plus attributes on priotity and such?) and implements a sort of scheduler >>>> excutes requests in some priority order. A version that only loads missing >>>> resources if is already available, one that goes even further and deletes >>>> unneeded resources could probably be ported from your resource manager. >>>> >>>> -- UI: That is why we are building on top of HTML, which is a pretty good >>>> UI layer in many requests. We have the 2D-UI GE to look into missing >>>> functionality >>>> >>>> -- Input: This also is already built in as the DOM as events traverse the >>>> DOM. It is widely used in all WEB based UIs and has proven quite useful >>>> there. Here we can nicely combine it with the 3D scene model where events >>>> are not only delivered to the 3D graphics elements but can be handled by >>>> the elements or components even before that. >>>> >>>> But maybe I am missunderstanding you here? >>>> >>>> >>>> Best, >>>> >>>> Philipp >>>> >>>> >>>> Am 30.10.2013 14:31, schrieb Jonne Nauha: >>>> >>>>> var client = >>>>> { >>>>> network : Object, // Network sync, connect, disconnect etc. >>>>> functionality. >>>>> // Implemented by scene sync GE (Ludocraft). 
>>>>> >>>>> renderer : Object, // API for 3D rendering engine access, creating >>>>> scene nodes, updating their transforms, raycasting etc. >>>>> // Implemented by 3D UI (Playsign). >>>>> >>>>> scene : Object, // API for accessing the >>>>> Entity-Component-Attribute model. >>>>> // Implemented by ??? >>>>> >>>>> asset : Object, // Not strictly necessary for xml3d as it does >>>>> asset requests for us, but for three.js this is pretty much needed. >>>>> // Implemented by ??? >>>>> >>>>> ui : Object, // API to add/remove widgets correctly on top >>>>> of the 3D rendering canvas element, window resize events etc. >>>>> // Implemented by 2D/Input GE (Adminotech). >>>>> >>>>> input : Object // API to hook to input events occurring on top >>>>> of the 3D scene. >>>>> // Implemented by 2D/Input GE (Adminotech). >>>>> }; >>>>> >>>>> >>>>> Best regards, >>>>> Jonne Nauha >>>>> Meshmoon developer at Adminotech Ltd. >>>>> www.meshmoon.com >>>>> >>>>> >>>>> >>>>> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo >>>> > wrote: >>>>> >>>>> Hi again, >>>>> new angle here: calling devs *outside* the 3D UI GE: POIs, >>>>> real-virtual interaction, interface designer, virtual characters, 3d >>>>> capture, synchronization etc. >>>>> I think we need to proceed rapidly with integration now and propose >>>>> that one next step towards that is to analyze the interfaces between >>>>> 3D UI and other GEs. This is because it seems to be a central part >>>>> with which many others interface: that is evident in the old >>>>> 'arch.png' where we analyzed GE/Epic interdependencies: is embedded >>>>> in section 2 in the Winterthur arch discussion notes which hopefully >>>>> works for everyone to see, >>>>> https://docs.google.com/**document/d/**1Sr4rg44yGxK8jj6yBsayCwfitZTq5 >>>>> **Cdyyb_xC25vhhE/edit >>>>> I propose a process where we go through the usage patterns case by >>>>> case. For example so that me & Erno visit the other devs to discuss >>>>> it. I think a good goal for those sessions is to define and plan the >>>>> implementation of first tests / minimal use cases where the other >>>>> GEs are used together with 3D UI to show something. I'd like this >>>>> first pass to happen quickly so that within 2 weeks from the >>>>> planning the first case is implemented. So if we get to have the >>>>> sessions within 2 weeks from now, in a month we'd have demos with >>>>> all parts. >>>>> Let's organize this so that those who think this applies to their >>>>> work contact me with private email (to not spam the list), we meet >>>>> and collect the notes to the wiki and inform this list about that. >>>>> One question of particular interest to me here is: can the users of >>>>> 3D UI do what they need well on the entity system level (for example >>>>> just add and configure mesh components), or do they need deeper >>>>> access to the 3d scene and rendering (spatial queries, somehow >>>>> affect the rendering pipeline etc). With Tundra we have the >>>>> Scene API and the (Ogre)World API(s) to support the latter, and also >>>>> access to the renderer directly. OTOH the entity system level is >>>>> renderer independent. >>>>> Synchronization is a special case which requires good two-way >>>>> integration with 3D UI. Luckily it's something that we and >>>>> especially Lasse himself knows already from how it works in Tundra >>>>> (and in WebTundras). Definitely to be discussed and planned now too >>>>> of course. >>>>> So please if you agree that this is a good process do raise hands >>>>> and let's start working on it! 
We can discuss this in the weekly too >>>>> if needed. >>>>> Cheers, >>>>> ~Toni >>>>> >>>>> ______________________________**_________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> ______________________________**_________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>> >>>>> >>>> -- >>>> >>>> ------------------------------**------------------------------** >>>> ------------- >>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>> >>>> Gesch?ftsf?hrung: >>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>> Dr. Walter Olthoff >>>> Vorsitzender des Aufsichtsrats: >>>> Prof. Dr. h.c. Hans A. Aukes >>>> >>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>> ------------------------------**------------------------------** >>>> --------------- >>>> >>> >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From antti.kokko at adminotech.com Mon Nov 4 08:02:26 2013 From: antti.kokko at adminotech.com (Antti Kokko) Date: Mon, 4 Nov 2013 09:02:26 +0200 Subject: [Fiware-miwi] three.js WebComponents In-Reply-To: <52760843.2080200@dfki.de> References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net> <5274A545.3090509@dfki.de> <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net> <5274E045.2020106@dfki.de> <5275F75B.6010405@dfki.de> <52760843.2080200@dfki.de> Message-ID: Hello, I have been doing research and testing with Polymer and input devices. Regarding Polymer it really feels the way to go with the 2D-UI GE. I read somewhere that the AngularJS will be moving towards Shadow DOM as well and will include Polymer later on. With Polymer I have been doing a test implementation for chat app we have in WebRocket. Implementing the actual UI component with Polymer was pretty trivial and very nice to work with from a web designer point of view. As end result there is clear readable output including 2 html files and one css file. All functions have been encapsulated to the component itself. Therefore it becomes re-usable although in this case the component needs data binding and event hooking corresponding to the backend used. 
But in general, when the backend is solid and decided, implementing the web components will be nice work. Polymer already gives a huge set of components ready to use and work with. For this case I used polymer-collapse to slide up/down div elements I created. For this I needed only to import the corresponding polymer-collapse html file and hook it to my div element. Very easy and convenient. Right now I am trying to tackle the require.js issues for WebRocket related to Polymer. For input devices I have been researching and testing touch JS libs and the Gamepad API. For touch I tested a couple of libraries and all of them worked well. As the end result I chose a jQuery plugin and tested it against iPhone and iPad. It worked pretty well and I managed to get taps for 1-4 fingers, sliding and pinching to work. Shake and rotate didn't work for some reason but I didn't go deeper into that issue. For Gamepad I used the Gamepad API [ https://dvcs.w3.org/hg/gamepad/raw-file/default/gamepad.html]. Got it working well with Chrome. With Firefox it didn't work even with a Nightly build, don't know the reason. Anyway the implementation is already tested against WebRocket and Jonne did a nice test with the avatar app we have. Gamepad works like a charm. For tests we used an XBOX 360 controller and a Playstation controller. With Windows, just plugging the device into a USB port and pushing some button is everything needed. Both gamepads worked right away. The next thing to test/research for input is Kinect. Thanks, - Antti On Sun, Nov 3, 2013 at 10:24 AM, Philipp Slusallek < Philipp.Slusallek at dfki.de> wrote: > Hi, > > Sounds really good. I would suggest that we start defining the > WebComponent components and their interfaces soon (of course, based on what > you already use, but have a quick process to discuss if there could be > improvements) and then go from there. In parallel we can check that XML3D > works well with WebComponents and Polymer. > > I believe we have talked about that a bit already but I am not sure we > have a specification for the WebComponents yet. Torsten, can you work with > whoever feels responsible on the Finnish side to come up with a common > specification (somewhere on the private pages for now)? > > Sounds like we are all working in the same direction -- which is great! I > can't wait to see the first avatars walking to Oulu city (or some other > place) in any Web browser :-). > > > Best, > > Philipp > > Am 03.11.2013 08:23, schrieb Toni Alatalo: > >> Again a brief note about a single point: >> >> On 03 Nov 2013, at 09:12, Philipp Slusallek > > wrote: >> >>> Also there might be convenient components that would make 3D app >>> programming with XML3D and WebComponents much easier. So for instance >>> having the XML3D-based components automatically define the set of >>> variables that need to be synchronized, registering them with the >>> networking module (if available), and making sure this all gets done >>> in the background would be excellent. Also providing a 3D avatar >>> WebComponent with embedded animations and such would be ideal. It >>> would be excellent if we could provide to Web developers a whole set >>> of predefined and modular WebComponents that they could just use to >>> create 3D worlds with behaviour. >>> >> >> This is what realXtend has been doing for several years - just outside >> the Web -
and plan is to continue doing the same on the Web as well :) >> >> That?s how it works in the native Tundra, and plan for this fi-ware work >> to produce the web client (which also works standalone, like the native >> one does too) is to implement the same. That?s also how it works in >> Chiru-WebClient and WebRocket, i.e. the pre-existing WebTundra?s. >> >> ~Toni >> >> The one really big issue where I still see a lot of work on finding >>> new solutions (not just implementation but real conceptual work) is >>> dealing with the hugely varying set of input devices. One person has >>> mouse and keyboard, the next has only a multi-touch surface, the next >>> is using a Kinect, while yet another other is using a Wii Remote, etc. >>> A single Web app will have to work with all of those situations >>> simultaneously (on different devices, though). Letting Web application >>> and their developers handle this in their apps will simply not work >>> and be frustrating for developer and users. >>> >>> We have started to work on a concept for this but this has not gone >>> very far yet. The coarse idea is that applications use virtual >>> interaction elements to say what kind of input (metaphor) they are >>> expected (e.g. selection of an object, moving an object). These are >>> modules that can be configured and instantiated. They can also be >>> nested within/on top each other (like non-visible dialog boxes) >>> dynamically to have a hierarchical finite state machine of interactions. >>> >>> These interaction elements would consist of the interaction logic as >>> well as (optionally) some interaction gadgets that visually represent >>> the possible interactions (like handles to move and rotate an object). >>> The key thing is that these interaction elements would be independent >>> of the application, anyone can use them, and we could develop a whole >>> library of them to cover the wide range of input metaphors that >>> applications use (but applications can extend them with their own if >>> needed). >>> >>> There are two extensions to this approach: (i) instead of having a >>> single interaction element for each interaction metaphor we could have >>> interfaces for them and develop entire user interface libraries that >>> each have a different look and feel but still implement the same basic >>> interaction metaphor. (ii) One could also design them such that users >>> can change the mapping of the devices used for certain actions. Think >>> of it as a reconfigurable gamer mouse, where the user can assign >>> different action to certain buttons, gestures, or such. So a user can >>> say, I will use the mouse to select objects but move them with the >>> cursor keys (for a stupid simple example). These would be optional >>> extensions for later, though. >>> >>> It would be great to work on those aspects together if this would be >>> interesting to anyone of you. You probably have quite some experience >>> in terms of good UI for 3D games/apps. Identifying, generalizing, and >>> implementing them as a reusable set of components would a great step >>> forward for 3D apps on the Web (or elsewhere). This would make great >>> papers for conferences as well! >>> >>> If you want to go that way, we should probably set up a special >>> session for discussing this in more detail. >>> >>> >>> Best, >>> >>> Philipp >>> >>> Am 02.11.2013 17:35, schrieb Jonne Nauha: >>> >>>> Yeah WebComponents are nice. 
I've already implemented quick tests for >>>> Tundras Entity and IComponent (common interface for all components, >>>> WebComponents has inheritance so it plays nicely into this) like 3-4 >>>> months ago in our WebRocket web client. At that point I left it alone, I >>>> wanted to have my JS code that implements the component in a .js file >>>> not inside a .html polymer template. Currently I'm using requirejs for >>>> the whole repo as the amount of code and dependency tracking started to >>>> get out of hand, so I wanted a modular system where I can tell the >>>> system what each module needs as a dependency. This has been great but >>>> will make writing comp implementations in a WebComponent html template a >>>> bit trickier as it would expect everything to in global scope, >>>> requierejs effectively removes everything from global scope. >>>> >>>> I have also a simple DOM integration plugin for WebRocket. If you want >>>> to turn it on (configurable when you construct the client instance) it >>>> will mirror the whole scene/entity/component/attribute chain in to the >>>> DOM as my own >>>> ......< >>>> /entity>... >>>> nodes. This would be trivial to change to create XML3D nodes, but I cant >>>> test that out before everything on the asset side that has been >>>> discussed are complete. Because nothing would simply render if I wont >>>> provide the Ogre asset loaders for XML3D and it knowing how to used DDS >>>> textures. What I would prefer even more is to just pass you the geometry >>>> as data, because I have my own fully working AssetAPI to fetch the >>>> assets, and I have my own loaders for each asset type I support. I would >>>> kind of be duplicate work to re-implement then as XML3D loaders, when I >>>> alreayd have the ready typed gl arrays to give to you right now. >>>> >>>> For my current DOM "mirroring" plugin, if you manipulate the attributes >>>> in the DOM they wont sync up back to the in mem JavaScript attributes. >>>> So currently its read only. I haven't really had the need to implement >>>> the sync the other way as we are perfectly happy living in JS land and >>>> using the WebRocket SDK API to write our application logic. The >>>> declarative side is less important for Meshmoon WebRocket as the scenes >>>> are always coming from our Tundra based servers and not declared on the >>>> html page itself. For FIWARE the declarative and DOM side is more >>>> important and more in focus of course so I understand the direction of >>>> the talks and why XML3D is an valuable asset. >>>> >>>> Instantiating Scene, Entity and each of the Component implementation >>>> from a WebComponent template would solve this problem, as its designed >>>> to encapsulate the JS implementation and it has a great way to get >>>> callbacks when any attribute in the DOM node is manipulated. It can also >>>> expose DOM events and fire them, not to mention normal functions eg. >>>> $("", entityNode).setPosition(10,20,30); so you dont have to >>>> use the raw DOM api to set string attributes by random. These functions >>>> would have type checking and not let you put carbage into the >>>> attributes. I believe the attribute changed handler can also effectively >>>> abort the change if you are trying to put a boolean to a Vector3 >>>> attribute. This all is great and would just require us WebRocket devs to >>>> port our current EC implementations to WebComponent templates. >>>> >>>> But here is where I get confused. 
So you would be fine by us >>>> implementing Tundra components as WebComponent templates (as said this >>>> would be fairly trivial). How would XML3D then play into this situation? >>>> Would it now monitor our DOM elements instead of the ones you specify >>>> (afaik eg. )? Can you open up this a bit? >>>> >>>> *UI and WebComponents* >>>> * >>>> * >>>> WebComponents are also being looked at in our 2D-UI GE, but to be frank >>>> there is very little to do there. The system is incredibly simple to >>>> use. You include polymer.js to you page, add tags to your Polymer >>>> style .html templates, then you just use the tags declared in them on >>>> the markup or you create the DOM nodes during runtime from JS with your >>>> preferred method. I'm having a bit of hard time figuring out what we >>>> should focus on in the 2D part for WebComponents. I mean they are there >>>> and you can use them, you don't need any kind of special APIs or >>>> supporting code from our side for them to work. Only thing I can think >>>> of is implementing some widgets and maybe 3D related templates for >>>> anyone when they use WebTundra. >>>> >>>> Best regards, >>>> Jonne Nauha >>>> Meshmoon developer at Adminotech Ltd. >>>> www.meshmoon.com >>> > >>>> >>>> >>>> >>>> On Sat, Nov 2, 2013 at 1:21 PM, Philipp Slusallek >>>> >>> > >>>> >>>> wrote: >>>> >>>> Hi Toni, all, >>>> >>>> I have now looked at video on the main Polymer page >>>> (http://www.polymer-project.__org/ >>>> ), which is actually very nicely >>>> done. They make a very good point why its advantageous to put things >>>> in the DOM compared to pure JS applications (or even Angular, which >>>> already uses the DOM). They highlight that with WebComponents >>>> (Polymer is a polyfill implementation of them) this becomes now even >>>> easier and creates an object-oriented aspect for HTML. >>>> >>>> BTW, this aspect is exactly what we were aiming at when we suggested >>>> the use of WebComponents for the 2D-UI Epic in the objectives of the >>>> FI-WARE Open Call and I think we should push even more in this >>>> direction, similar to what I wrote in response to Jonne's email >>>> earlier. >>>> >>>> Regarding the mapping: We already have the mapping of 3D to the DOM >>>> (XML3D) as well as many modules that build on top of that mapping >>>> (Xflow with animation, image processing, and AR; portable materials, >>>> etc.). I see no reason why we should throw this out just because >>>> there is a slightly different way of doing it. >>>> >>>> I would even speculate that if we would try to offer similar >>>> functionality that at the end we would end up with something that >>>> would pretty much resemble XML3D, maybe with a slightly different >>>> syntax. There are usually not many way to do the same thing in a >>>> good way. >>>> >>>> What I would support 100%, however, is that we try to use >>>> Web*Components* to implement the *ECA components* (and possibly >>>> entities). Essentially for the same reasons explained in the video. >>>> That makes a lot of sense in the Web context and is fully compatible >>>> with the DOM and all the DOM-based frameworks that can then be used >>>> as well on this data. >>>> >>>> I have been saying for a long time that we have had libraries in >>>> C/C++. Just porting them to JS to run in the Web does not give us >>>> any advantages at all (likely only disadvantages like performance >>>> and an inferior language). 
Its only if we embrace the key elements >>>> of the Web -- and the runtime DOM arguably is the core of it all -- >>>> that we can tap into all the benefits of the Web. >>>> >>>> And THE key advantage of the DOM is that you get *way more* than the >>>> sum of the pieces when putting things together. Instead, of making >>>> sure that one library can talk to all the others (which gets you >>>> into an N^2 complexity problem), each component only needs to work >>>> with the DOM (constant effort per component and you have to deal >>>> with one data structure anyway, so its essentially no overhead >>>> anyway). Then, you get the benefit from all other components >>>> *automatically and completely for free* (jquery and all the other >>>> tools "just work" also with XML3D, and we should be able to use >>>> XML3D within Polymer also). >>>> >>>> Note, that none of the Web technologies discussed here >>>> (WebComponents, Polymer, Angular, etc.) would work at all if they >>>> would not use the DOM at their core. Angular for example depends on >>>> the DOM and just allows a nicer binding of custom functionality >>>> (controller implemented in JS) to the DOM. >>>> >>>> This all applies exactly the same way also to 3D data -- at least >>>> logically. However, on the implementation side 3D is special because >>>> you have to take care of large data structures, typed arrays, and so >>>> on. This is what the main work in XML3D was all about. Why should >>>> someone invest all the effort to do it again for likely very similar >>>> results in the end? >>>> >>>> We can talk about using three.js as a renderer, though, but I would >>>> not touch the 3D DOM binding and data model that we already have in >>>> XML3D (unless someone comes with very good reasons to do so). >>>> >>>> >>>> Best, >>>> >>>> Philipp >>>> >>>> >>>> Am 02.11.2013 09:27, schrieb Toni Alatalo: >>>> >>>> On 02 Nov 2013, at 09:09, Philipp Slusallek >>>> >>> > >>>> >>>> wrote: >>>> >>>> Nice stuff. from my perspective there are two ways to look >>>> at this work: One is to provide high level UI elements on >>>> top of a three.js implementation and the other is the start >>>> of creating a declarative layer on top of the three.js >>>> renderer. This seems more along the first line but both >>>> should be similarly interesting to us. >>>> >>>> >>>> Indeed. >>>> >>>> It great to see that other people are coming up with similar >>>> ideas now. It would be good to get the message about our >>>> XML3D design and implementation to these people out there. >>>> That way we could improve what we already have instead of >>>> reinventing the wheel. >>>> >>>> >>>> That was my immediate first thought as well: it seemed like >>>> people have started to reinvent declarative 3d for the web from >>>> scratch. That?s why I asked whether they knew about the existing >>>> work ? I understood that this Josh Galvin person (don?t know him >>>> from before) who made the Spinner demo, did (am not sure). >>>> >>>> Thanks for the views and pointers, I?ll keep an eye open for >>>> talks about this (actually just joined the #three.js irc channel >>>> on freenode yesterday, haven?t been involved in their community >>>> before really ? Tapani from us has been hanging out there >>>> though). They seem to communicate most in the github issue >>>> tracker and pull requests (which I think is a great way). 
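To make the "ECA components as WebComponents" idea in the quoted discussion above more tangible, here is a minimal sketch in the 2013-era Polymer syntax referred to in this thread. The element name x-spinner, its speed attribute and the template body are purely hypothetical placeholders, not an agreed reX/MIWI component definition.

<!-- x-spinner.html: illustrative sketch only; element and attribute names are hypothetical. -->
<polymer-element name="x-spinner" attributes="speed">
  <template>
    <!-- the visual part (e.g. a group/mesh of the 3D scene) would go here -->
    <div>spinning at {{speed}}</div>
  </template>
  <script>
    Polymer('x-spinner', {
      speed: 1.0, // published attribute; the default value also hints the expected type
      ready: function () {
        // element upgraded: hook into the renderer / scene here
      },
      speedChanged: function (oldValue, newValue) {
        // the attribute-changed callback mentioned in the quoted mails: reject garbage values
        if (typeof newValue !== 'number' || isNaN(newValue)) {
          this.speed = oldValue;
        }
      }
    });
  </script>
</polymer-element>

In the page itself this would then be used roughly as a polymer.js script include, a <link rel="import" href="x-spinner.html"> and an <x-spinner speed="2"></x-spinner> tag, following the include-polymer.js-plus-import pattern described in the quoted mails.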
>>>> >>>> I also did not find an e-mail address to the polymer-threejs >>>> person, butkaosat.net >>> > is his personal site >>>> >>>> and apparently he made the original announcement of the >>>> declarative three.js thing in August in Google+ so I figure e.g. >>>> replying there would be one way to comment: >>>> https://plus.google.com/__112899217323877236232/posts/__bUW1hrwHcAW >>>> >>> > >>>> .. >>>> I can do that on Monday. >>>> >>>> BTW it seems that this guy is into hardware and cad and all >>>> sorts of things and declarative 3d xml is just a side thing for >>>> fun, perhaps related to his work on some cad thing ? is not like >>>> he?d be pursuing a career or a product or anything out of it. >>>> >>>> It seems like a straightforward mapping of the three.js API to >>>> xml elements: what I struggle to understand now is whether >>>> that?s a good abstraction level and how does it correspond to >>>> xml3d?s vocabulary. >>>> >>>> ~Toni >>>> >>>> It would be good if you can point people also to our papers >>>> from this year >>>> (http://graphics.cg.uni-__saarland.de/publications/ >>>> ). They >>>> explain a lot of the background of why we have chose thing >>>> to work the way they work. >>>> >>>> More specifically: >>>> -- The "xml3d.js" paper explain a lot about the design of >>>> XML3D and its implementation >>>> (https://graphics.cg.uni-__saarland.de/2013/xml3djs-__ >>>> architecture-of-a-polyfill-__implementation-of-xml3d/ >>>> >>> architecture-of-a-polyfill-implementation-of-xml3d/>). >>>> -- The "Declarative image processing" paper explains all the >>>> advantages one gets from exposing processing elements to the >>>> DOM instead of implementing them only in some JS libraries >>>> (https://graphics.cg.uni-__saarland.de/2013/declarative-_ >>>> _ar-and-image-processing-on-__the-web-with-xflow/ >>>> >>> ar-and-image-processing-on-the-web-with-xflow/>). >>>> -- And the 2012 paper on "XFlow" shows this usage for >>>> animation >>>> (https://graphics.cg.uni-__saarland.de/2012/xflow-__ >>>> declarative-data-processing-__for-the-web/ >>>> >>> declarative-data-processing-for-the-web/>) >>>> >>>> Getting into a constructive discussion with some of these >>>> three.js people would be a good thing. I tried to find an >>>> email address for the polymer-threejs person but could not >>>> find any. Feel free to farward this email to him (and maybe >>>> others). I would love to get their feedback and engage in >>>> discussions. >>>> >>>> >>>> Best, >>>> >>>> Philipp >>>> >>>> Am 02.11.2013 00:38, schrieb Toni Alatalo: >>>> >>>> Apparently some three.js user/dev has gotten inspired by >>>> WebComponents & >>>> the Polymeer and written >>>> https://github.com/kaosat-dev/__polymer-threejs >>>> :) >>>> >>>> Now another guy has continued with >>>> https://github.com/JoshGalvin/__three-polymer >>>> ? there?s >>>> a demo of custom >>>> element (?spinner?), similar to the Door case discussed >>>> here earlier. >>>> >>>> Had a brief chat with him, will return to this later but >>>> was fun to see >>>> the minimal webgl web component example there as that >>>> has been in our >>>> agenda. >>>> >>>> ~Toni >>>> >>>> 01:01 *<* galv*>* >>>> https://github.com/JoshGalvin/__three-polymer >>>> added >>>> support for more basic geometry types >>>> 01:02 *<* galv*>* Going to do materials next >>>> 01:25 *<* *antont**>* galv: hee - are you aware of these >>>> btw? >>>> http://www.w3.org/community/__declarative3d/ >>>> , e.g. 
>>>> https://github.com/xml3d/__xml3d.js >>>> >>>> 01:27 *<* galv*>* yeah, different level of abstraction >>>> 01:27 *<* *antont**>* perhaps >>>> 01:28 *<* galv*>* I expect people to wrap up their game >>>> objects >>>> 01:28 *<* galv*>* aka "spinner" >>>> 01:28 *<* galv*>* (index.html) >>>> 01:29 *<* *antont**>* we've been also planning to enable >>>> saying things >>>> like if that's what you mean >>>> 01:30 *<* *antont**>* right, seems like the same idea >>>> 01:31 *<* *antont**>* very cool to see, gotta check the >>>> codes etc later >>>> .. but sleep now, laters >>>> >>>> >>>> >>>> _________________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> >>>> https://lists.fi-ware.eu/__listinfo/fiware-miwi >>>> >>>> >>>> >>>> >>>> -- >>>> >>>> ------------------------------__---------------------------- >>>> --__------------- >>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz >>>> (DFKI) GmbH >>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>> >>>> Gesch?ftsf?hrung: >>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>> Dr. Walter Olthoff >>>> Vorsitzender des Aufsichtsrats: >>>> Prof. Dr. h.c. Hans A. Aukes >>>> >>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>> ------------------------------__---------------------------- >>>> --__--------------- >>>> >>>> >>>> >>>> >>>> >>>> -- >>>> >>>> ------------------------------__---------------------------- >>>> --__------------- >>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>> >>>> Gesch?ftsf?hrung: >>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>> Dr. Walter Olthoff >>>> Vorsitzender des Aufsichtsrats: >>>> Prof. Dr. h.c. Hans A. Aukes >>>> >>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>> ------------------------------__---------------------------- >>>> --__--------------- >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> >>> lists.fi-ware.eu> >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>>> >>> >>> -- >>> >>> ------------------------------------------------------------ >>> ------------- >>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>> >>> Gesch?ftsf?hrung: >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>> Dr. Walter Olthoff >>> Vorsitzender des Aufsichtsrats: >>> Prof. Dr. h.c. Hans A. Aukes >>> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>> ------------------------------------------------------------ >>> --------------- >>> >>> >> >> > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. 
Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > ------------------------------------------------------------ > --------------- > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erno at playsign.net Mon Nov 4 11:51:54 2013 From: erno at playsign.net (Erno Kuusela) Date: Mon, 4 Nov 2013 12:51:54 +0200 Subject: [Fiware-miwi] Oulu bi-weekly f2f tomorrow 09:00 Message-ID: <20131104105154.GJ47616@ee.oulu.fi> Hello, I seem to remember we agreed at the last Oulu meeting to have the meeting in the morning this time, at 9:00 and at Mäkelininkatu 15. See you tomorrow! Erno From kristian.sons at dfki.de Mon Nov 4 08:56:17 2013 From: kristian.sons at dfki.de (Kristian Sons) Date: Mon, 04 Nov 2013 08:56:17 +0100 Subject: [Fiware-miwi] three.js WebComponents In-Reply-To: References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net> <5274A545.3090509@dfki.de> <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net> <5274E045.2020106@dfki.de> <5275F75B.6010405@dfki.de> <52760843.2080200@dfki.de> Message-ID: <52775321.2050905@dfki.de> Hi, I looked into the Shadow-DOM functionality of WebComponents a while ago. It gives means to hide the canvas in the DOM (behind the XML3D element), so it feels even more native. However, in those days there were issues in the Chrome Debugger that might have been fixed. I also didn't look into the Polyfill implementation yet. Using WebComponents together with XML3D would mean that xml3d.js would have to consider the Shadow-DOM of components that are within the <xml3d> element. Best, Kristian Am 04.11.2013 08:02, schrieb Antti Kokko: > Hello, > > I have been doing research and testing with Polymer and input devices. > Regarding Polymer it really feels the way to go with the 2D-UI GE. I > read somewhere that the AngularJS will be moving towards Shadow DOM as > well and will include Polymer later on. > > With Polymer I have been doing a test implementation for chat app we > have in WebRocket. Implementing the actual UI component with Polymer > was pretty trivial and very nice to work with from a web designer > point of view. As end result there is clear readable output including > 2 html files and one css file. All functions have been encapsulated to > the component itself. Therefore it becomes re-usable although in this > case the component needs data binding and event hooking corresponding > to the backend used. But in general when the backend is solid and > decided implementing the web components will be nice work. Polymer > already gives a huge set of components ready to use and work with. For > this case I used polymer-collapse to slide up/down div elements I > created. For this I needed only to import the corresponding > polymer-collapse html file and hook it to my div element. Very easy > and convenient. Right now I try to tackle the require js issues for > WebRocket related to Polymer. > > For input devices I have been researching and testing touch JS libs > and Gamepad API. For touch I tested couple of libraries and all of > them worked well. As end result I chose jquery plugin and tested it > against iPhone and iPad. Worked pretty well and I managed to get taps > for 1-4 fingers, sliding and pinching to work. Shake and rotate didn't > work for some reason but I didn't go deeper with that issue.
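Since the Gamepad API testing comes up in Antti's mail quoted above, here is a minimal polling sketch against the W3C Gamepad API linked there. The webkitGetGamepads fallback and the two button representations reflect how 2013-era browsers differed; the mapping of axes and buttons to an avatar controller is only an assumed example.

// Minimal Gamepad API polling loop (standard navigator.getGamepads();
// 2013-era Chrome exposed it as navigator.webkitGetGamepads, hence the fallback).
function pollGamepad() {
    var getPads = navigator.getGamepads || navigator.webkitGetGamepads;
    if (!getPads) {
        return; // no Gamepad API support in this browser
    }
    var pads = getPads.call(navigator);
    var pad = pads && pads[0];
    if (pad) {
        // axes are floats in [-1, 1]
        var moveX = pad.axes[0];
        var moveY = pad.axes[1];
        // buttons were plain numbers in early implementations and
        // GamepadButton objects ({ pressed, value }) in later ones
        var b0 = pad.buttons[0];
        var firePressed = (typeof b0 === 'object') ? b0.pressed : b0 > 0.5;
        // feed moveX, moveY, firePressed into e.g. the avatar controller
        // mentioned above (assumed integration point)
    }
    window.requestAnimationFrame(pollGamepad);
}
window.requestAnimationFrame(pollGamepad);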
Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > > ------------------------------__------------------------------__--------------- > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > > > > > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum f?r K?nstliche Intelligenz > (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > > > > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realit?t Campus, Geb. D 3 2, Raum 0.77 66123 Saarbr?cken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775--2235 kristian.sons at dfki.de http://www.xml3d.org Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Philipp.Slusallek at dfki.de Mon Nov 4 17:56:08 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Mon, 04 Nov 2013 17:56:08 +0100 Subject: [Fiware-miwi] three.js WebComponents In-Reply-To: References: <51CDE540-A80E-401A-A661-A96FF9E4B9B4@playsign.net> <5274A545.3090509@dfki.de> <5EC2C34A-3BF5-4218-8738-6EC54FB670FB@playsign.net> <5274E045.2020106@dfki.de> <5275F75B.6010405@dfki.de> <52760843.2080200@dfki.de> Message-ID: <5277D1A8.5040502@dfki.de> Hi, Sounds great!! Again let me know if you want to discuss a more general and abstract handling of events (at least for 3D stuff). Best, Philipp Am 04.11.2013 08:02, schrieb Antti Kokko: > Hello, > > I have been doing research and testing with Polymer and input devices. > Regarding Polymer it really feels the way to go with the 2D-UI GE. I > read somewhere that the AngularJS will be moving towards Shadow DOM as > well and will include Polymer later on. 
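To make the kind of component discussed in the next paragraph concrete, here is an invented minimal Polymer 0.x element (not the actual WebRocket chat component, which additionally uses polymer-collapse): a panel with an encapsulated heading, a toggle, and a <content> insertion point for the page-provided chat log.

    <!-- Illustrative sketch only; tag and attribute names are made up. -->
    <polymer-element name="chat-panel" attributes="heading collapsed">
      <template>
        <h3 on-click="{{toggle}}">{{heading}}</h3>
        <div hidden?="{{collapsed}}">
          <content></content>   <!-- whatever the page puts inside <chat-panel> -->
        </div>
      </template>
      <script>
        Polymer('chat-panel', {
          heading: 'Chat',
          collapsed: false,
          toggle: function () { this.collapsed = !this.collapsed; }
        });
      </script>
    </polymer-element>

Used as <chat-panel heading="World chat"> ... messages ... </chat-panel>, all behaviour stays encapsulated in the element, which is the re-usability point made below.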
> > With Polymer I have been doing a test implementation for chat app we > have in WebRocket. Implementing the actual UI component with Polymer was > pretty trivial and very nice to work with from a web designer point of > view. As end result there is clear readable output including 2 html > files and one css file. All functions have been encapsulated to the > component itself. Therefore it becomes re-usable although in this case > the component needs data binding and event hooking corresponding to the > backend used. But in general when the backend is solid and decided > implementing the web components will be nice work. Polymer already gives > a huge set of components ready to use and work with. For this case I > used polymer-collapse to slide up/down div elements I created. For this > I needed only to import the corresponding polymer-collapse html file and > hook it to my div element. Very easy and convenient. Right now I try to > tackle the require js issues for WebRocket related to Polymer. > > For input devices I have been researching and testing touch JS libs and > Gamepad API. For touch I tested couple of libraries and all of them > worked well. As end result I chose jquery plugin and tested it against > iPhone and iPad. Worked pretty well and I managed to get taps for 1-4 > fingers, sliding and pinching to work. Shake and rotate didn?t work for > some reason but I didn?t go deeper with that issue. > > For Gamepad I used Gamepad API > [https://dvcs.w3.org/hg/gamepad/raw-file/default/gamepad.html]. Got it > working well with Chrome. With Firefox it didn?t work even with Nightly > build, don?t know the reason. Anyway the implementation is already > tested against web rocket and Jonne did a nice test with the avatar app > we have. Gamepad works like a charm. For tests we used XBOX 360 > controller and Playstation controller. With Windows just plugging the > device to USB port and pushing some button is everything needed. Both > gamepads worked right away. > > The next thing with input is to test/research is Kinect. > > Thanks, > > - Antti > > > On Sun, Nov 3, 2013 at 10:24 AM, Philipp Slusallek > > wrote: > > Hi, > > Sounds really good. I would suggest that we start defining the > WebComponent components and their interfaces soon (of course, based > on what you already use, but have a quick process to discuss if > there could be improvements) and then go from there. In parallel we > can check that XML3D works well with WebComponents and Polmer. > > I believe we have talked about that a bit already but I am not sure > we have a specification for the WebComponents yet. Torsten, can you > work with whoever feels responsible on the Finish side to come up > with a common specification (somewhere on the private pages for now)? > > Sounds like we are all working in the same direction -- which is > great! I can't wait to see the first avatars walking to Oulu city > (or some other place) in any Web browser :-). > > > Best, > > Philipp > > Am 03.11.2013 08:23, schrieb Toni Alatalo: > > Again a brief note about a single point: > > On 03 Nov 2013, at 09:12, Philipp Slusallek > > >> wrote: > > Also there might be convenient components that would make 3D app > programming with XML3D and WebComponents much easier. So for > instance > having the XML3D-based components automatically define the > set of > variable that need to be synchronized, registering them with the > networking module (if available), and making sure this all > gets done > in the background would be excellent. 
Also providing an 3D > avatar > WebComponent with embedded animations and such would be > ideal. It > would be excellent if we could provide to Web developers a > whole set > of predefined and modular WebComponents that they could just > use to > create 3D worlds with behaviour. > > > This is what realXtend has been doing for several years ? just > outside > the Web ? and plan is to continue doing the same on the Web as > well :) > > That?s how it works in the native Tundra, and plan for this > fi-ware work > to produce the web client (which also works standalone, like the > native > one does too) is to implement the same. That?s also how it works in > Chiru-WebClient and WebRocket, i.e. the pre-existing WebTundra?s. > > ~Toni > > The one really big issue where I still see a lot of work on > finding > new solutions (not just implementation but real conceptual > work) is > dealing with the hugely varying set of input devices. One > person has > mouse and keyboard, the next has only a multi-touch surface, > the next > is using a Kinect, while yet another other is using a Wii > Remote, etc. > A single Web app will have to work with all of those situations > simultaneously (on different devices, though). Letting Web > application > and their developers handle this in their apps will simply > not work > and be frustrating for developer and users. > > We have started to work on a concept for this but this has > not gone > very far yet. The coarse idea is that applications use virtual > interaction elements to say what kind of input (metaphor) > they are > expected (e.g. selection of an object, moving an object). > These are > modules that can be configured and instantiated. They can > also be > nested within/on top each other (like non-visible dialog boxes) > dynamically to have a hierarchical finite state machine of > interactions. > > These interaction elements would consist of the interaction > logic as > well as (optionally) some interaction gadgets that visually > represent > the possible interactions (like handles to move and rotate > an object). > The key thing is that these interaction elements would be > independent > of the application, anyone can use them, and we could > develop a whole > library of them to cover the wide range of input metaphors that > applications use (but applications can extend them with > their own if > needed). > > There are two extensions to this approach: (i) instead of > having a > single interaction element for each interaction metaphor we > could have > interfaces for them and develop entire user interface > libraries that > each have a different look and feel but still implement the > same basic > interaction metaphor. (ii) One could also design them such > that users > can change the mapping of the devices used for certain > actions. Think > of it as a reconfigurable gamer mouse, where the user can assign > different action to certain buttons, gestures, or such. So a > user can > say, I will use the mouse to select objects but move them > with the > cursor keys (for a stupid simple example). These would be > optional > extensions for later, though. > > It would be great to work on those aspects together if this > would be > interesting to anyone of you. You probably have quite some > experience > in terms of good UI for 3D games/apps. Identifying, > generalizing, and > implementing them as a reusable set of components would a > great step > forward for 3D apps on the Web (or elsewhere). This would > make great > papers for conferences as well! 
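Purely to make the "virtual interaction element" idea above concrete, here is an invented sketch (none of these names exist in any FIWARE GE or in XML3D; querying an <xml3d> element is just an assumed page setup): the application subscribes to an abstract metaphor and never sees which device produced the input.

    // Invented illustration only -- not an existing API.
    function SelectInteraction(rootElement) {
        var handlers = [];
        function fire(target) {
            handlers.forEach(function (cb) { cb(target); });
        }
        // Two concrete device bindings feed the same abstract action; a
        // gamepad or Kinect backend would call fire() in the same way.
        rootElement.addEventListener('click', function (ev) { fire(ev.target); });
        rootElement.addEventListener('touchend', function (ev) { fire(ev.target); });
        return { onSelect: function (cb) { handlers.push(cb); } };
    }

    // Application code only deals with the metaphor, not the devices:
    var select = SelectInteraction(document.querySelector('xml3d'));
    select.onSelect(function (node) { console.log('selected', node.id); });

Remapping, as in the reconfigurable-mouse example above, would then mean swapping or adding device bindings inside the element without touching application code.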
> > If you want to go that way, we should probably set up a special > session for discussing this in more detail. > > > Best, > > Philipp > > Am 02.11.2013 17:35, schrieb Jonne Nauha: > > Yeah WebComponents are nice. I've already implemented > quick tests for > Tundras Entity and IComponent (common interface for all > components, > WebComponents has inheritance so it plays nicely into > this) like 3-4 > months ago in our WebRocket web client. At that point I > left it alone, I > wanted to have my JS code that implements the component > in a .js file > not inside a .html polymer template. Currently I'm using > requirejs for > the whole repo as the amount of code and dependency > tracking started to > get out of hand, so I wanted a modular system where I > can tell the > system what each module needs as a dependency. This has > been great but > will make writing comp implementations in a WebComponent > html template a > bit trickier as it would expect everything to in global > scope, > requierejs effectively removes everything from global scope. > > I have also a simple DOM integration plugin for > WebRocket. If you want > to turn it on (configurable when you construct the > client instance) it > will mirror the whole scene/entity/component/__attribute > chain in to the > DOM as my own > <__attribute/>......<__/entity>... > nodes. This would be trivial to change to create XML3D > nodes, but I cant > test that out before everything on the asset side that > has been > discussed are complete. Because nothing would simply > render if I wont > provide the Ogre asset loaders for XML3D and it knowing > how to used DDS > textures. What I would prefer even more is to just pass > you the geometry > as data, because I have my own fully working AssetAPI to > fetch the > assets, and I have my own loaders for each asset type I > support. I would > kind of be duplicate work to re-implement then as XML3D > loaders, when I > alreayd have the ready typed gl arrays to give to you > right now. > > For my current DOM "mirroring" plugin, if you manipulate > the attributes > in the DOM they wont sync up back to the in mem > JavaScript attributes. > So currently its read only. I haven't really had the > need to implement > the sync the other way as we are perfectly happy living > in JS land and > using the WebRocket SDK API to write our application > logic. The > declarative side is less important for Meshmoon > WebRocket as the scenes > are always coming from our Tundra based servers and not > declared on the > html page itself. For FIWARE the declarative and DOM > side is more > important and more in focus of course so I understand > the direction of > the talks and why XML3D is an valuable asset. > > Instantiating Scene, Entity and each of the Component > implementation > from a WebComponent template would solve this problem, > as its designed > to encapsulate the JS implementation and it has a great > way to get > callbacks when any attribute in the DOM node is > manipulated. It can also > expose DOM events and fire them, not to mention normal > functions eg. > $("", entityNode).setPosition(10,20,__30); so > you dont have to > use the raw DOM api to set string attributes by random. > These functions > would have type checking and not let you put carbage > into the > attributes. I believe the attribute changed handler can > also effectively > abort the change if you are trying to put a boolean to a > Vector3 > attribute. 
This all is great and would just require us > WebRocket devs to > port our current EC implementations to WebComponent > templates. > > But here is where I get confused. So you would be fine by us > implementing Tundra components as WebComponent templates > (as said this > would be fairly trivial). How would XML3D then play into > this situation? > Would it now monitor our DOM elements instead of the > ones you specify > (afaik eg. )? Can you open up this a bit? > > *UI and WebComponents* > * > * > WebComponents are also being looked at in our 2D-UI GE, > but to be frank > there is very little to do there. The system is > incredibly simple to > use. You include polymer.js to you page, add tags > to your Polymer > style .html templates, then you just use the tags > declared in them on > the markup or you create the DOM nodes during runtime > from JS with your > preferred method. I'm having a bit of hard time figuring > out what we > should focus on in the 2D part for WebComponents. I mean > they are there > and you can use them, you don't need any kind of special > APIs or > supporting code from our side for them to work. Only > thing I can think > of is implementing some widgets and maybe 3D related > templates for > anyone when they use WebTundra. > > Best regards, > Jonne Nauha > Meshmoon developer at Adminotech Ltd. > www.meshmoon.com > > > > > > > On Sat, Nov 2, 2013 at 1:21 PM, Philipp Slusallek > > > >> > > wrote: > > Hi Toni, all, > > I have now looked at video on the main Polymer page > (http://www.polymer-project.____org/ > >), which is actually > very nicely > done. They make a very good point why its > advantageous to put things > in the DOM compared to pure JS applications (or even > Angular, which > already uses the DOM). They highlight that with > WebComponents > (Polymer is a polyfill implementation of them) this > becomes now even > easier and creates an object-oriented aspect for HTML. > > BTW, this aspect is exactly what we were aiming at > when we suggested > the use of WebComponents for the 2D-UI Epic in the > objectives of the > FI-WARE Open Call and I think we should push even > more in this > direction, similar to what I wrote in response to > Jonne's email > earlier. > > Regarding the mapping: We already have the mapping > of 3D to the DOM > (XML3D) as well as many modules that build on top of > that mapping > (Xflow with animation, image processing, and AR; > portable materials, > etc.). I see no reason why we should throw this out > just because > there is a slightly different way of doing it. > > I would even speculate that if we would try to offer > similar > functionality that at the end we would end up with > something that > would pretty much resemble XML3D, maybe with a > slightly different > syntax. There are usually not many way to do the > same thing in a > good way. > > What I would support 100%, however, is that we try > to use > Web*Components* to implement the *ECA components* > (and possibly > entities). Essentially for the same reasons > explained in the video. > That makes a lot of sense in the Web context and is > fully compatible > with the DOM and all the DOM-based frameworks that > can then be used > as well on this data. > > I have been saying for a long time that we have had > libraries in > C/C++. Just porting them to JS to run in the Web > does not give us > any advantages at all (likely only disadvantages > like performance > and an inferior language). 
Its only if we embrace > the key elements > of the Web -- and the runtime DOM arguably is the > core of it all -- > that we can tap into all the benefits of the Web. > > And THE key advantage of the DOM is that you get > *way more* than the > sum of the pieces when putting things together. > Instead, of making > sure that one library can talk to all the others > (which gets you > into an N^2 complexity problem), each component only > needs to work > with the DOM (constant effort per component and you > have to deal > with one data structure anyway, so its essentially > no overhead > anyway). Then, you get the benefit from all other > components > *automatically and completely for free* (jquery and > all the other > tools "just work" also with XML3D, and we should be > able to use > XML3D within Polymer also). > > Note, that none of the Web technologies discussed here > (WebComponents, Polymer, Angular, etc.) would work > at all if they > would not use the DOM at their core. Angular for > example depends on > the DOM and just allows a nicer binding of custom > functionality > (controller implemented in JS) to the DOM. > > This all applies exactly the same way also to 3D > data -- at least > logically. However, on the implementation side 3D is > special because > you have to take care of large data structures, > typed arrays, and so > on. This is what the main work in XML3D was all > about. Why should > someone invest all the effort to do it again for > likely very similar > results in the end? > > We can talk about using three.js as a renderer, > though, but I would > not touch the 3D DOM binding and data model that we > already have in > XML3D (unless someone comes with very good reasons > to do so). > > > Best, > > Philipp > > > Am 02.11.2013 09:27, schrieb Toni Alatalo: > > On 02 Nov 2013, at 09:09, Philipp Slusallek > > > >> > > wrote: > > Nice stuff. from my perspective there are > two ways to look > at this work: One is to provide high level > UI elements on > top of a three.js implementation and the > other is the start > of creating a declarative layer on top of > the three.js > renderer. This seems more along the first > line but both > should be similarly interesting to us. > > > Indeed. > > It great to see that other people are coming > up with similar > ideas now. It would be good to get the > message about our > XML3D design and implementation to these > people out there. > That way we could improve what we already > have instead of > reinventing the wheel. > > > That was my immediate first thought as well: it > seemed like > people have started to reinvent declarative 3d > for the web from > scratch. That?s why I asked whether they knew > about the existing > work ? I understood that this Josh Galvin person > (don?t know him > from before) who made the Spinner demo, did (am > not sure). > > Thanks for the views and pointers, I?ll keep an > eye open for > talks about this (actually just joined the > #three.js irc channel > on freenode yesterday, haven?t been involved in > their community > before really ? Tapani from us has been hanging > out there > though). They seem to communicate most in the > github issue > tracker and pull requests (which I think is a > great way). > > I also did not find an e-mail address to the > polymer-threejs > person, butkaosat.net > > > is his personal site > > and apparently he made the original announcement > of the > declarative three.js thing in August in Google+ > so I figure e.g. 
> replying there would be one way to comment: > https://plus.google.com/____112899217323877236232/posts/____bUW1hrwHcAW > > > > > .. > I can do that on Monday. > > BTW it seems that this guy is into hardware and > cad and all > sorts of things and declarative 3d xml is just a > side thing for > fun, perhaps related to his work on some cad > thing ? is not like > he?d be pursuing a career or a product or > anything out of it. > > It seems like a straightforward mapping of the > three.js API to > xml elements: what I struggle to understand now > is whether > that?s a good abstraction level and how does it > correspond to > xml3d?s vocabulary. > > ~Toni > > It would be good if you can point people > also to our papers > from this year > > (http://graphics.cg.uni-____saarland.de/publications/ > > > >). They > explain a lot of the background of why we > have chose thing > to work the way they work. > > More specifically: > -- The "xml3d.js" paper explain a lot about > the design of > XML3D and its implementation > > (https://graphics.cg.uni-____saarland.de/2013/xml3djs-____architecture-of-a-polyfill-____implementation-of-xml3d/ > > >). > -- The "Declarative image processing" paper > explains all the > advantages one gets from exposing processing > elements to the > DOM instead of implementing them only in > some JS libraries > > (https://graphics.cg.uni-____saarland.de/2013/declarative-____ar-and-image-processing-on-____the-web-with-xflow/ > > >). > -- And the 2012 paper on "XFlow" shows this > usage for > animation > > (https://graphics.cg.uni-____saarland.de/2012/xflow-____declarative-data-processing-____for-the-web/ > > >) > > Getting into a constructive discussion with > some of these > three.js people would be a good thing. I > tried to find an > email address for the polymer-threejs person > but could not > find any. Feel free to farward this email to > him (and maybe > others). I would love to get their feedback > and engage in > discussions. > > > Best, > > Philipp > > Am 02.11.2013 00:38, schrieb Toni Alatalo: > > Apparently some three.js user/dev has > gotten inspired by > WebComponents & > the Polymeer and written > https://github.com/kaosat-dev/____polymer-threejs > > > > :) > > Now another guy has continued with > https://github.com/JoshGalvin/____three-polymer > > > > ? there?s > a demo of custom > element (?spinner?), similar to the Door > case discussed > here earlier. > > Had a brief chat with him, will return > to this later but > was fun to see > the minimal webgl web component example > there as that > has been in our > agenda. > > ~Toni > > 01:01 *<* galv*>* > https://github.com/JoshGalvin/____three-polymer > > > > added > support for more basic geometry types > 01:02 *<* galv*>* Going to do materials next > 01:25 *<* *antont**>* galv: hee - are > you aware of these > btw? > http://www.w3.org/community/____declarative3d/ > > > > , e.g. > https://github.com/xml3d/____xml3d.js > > > > 01:27 *<* galv*>* yeah, different level > of abstraction > 01:27 *<* *antont**>* perhaps > 01:28 *<* galv*>* I expect people to > wrap up their game > objects > 01:28 *<* galv*>* aka "spinner" > 01:28 *<* galv*>* (index.html) > 01:29 *<* *antont**>* we've been also > planning to enable > saying things > like if that's what you mean > 01:30 *<* *antont**>* right, seems like > the same idea > 01:31 *<* *antont**>* very cool to see, > gotta check the > codes etc later > .. 
but sleep now, laters > > > > > ___________________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > > > > > > https://lists.fi-ware.eu/____listinfo/fiware-miwi > > > > > > > > -- > > > ------------------------------____----------------------------__--__------------- > Deutsches Forschungszentrum f?r K?nstliche > Intelligenz > (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang > Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: > 19/673/0060/3 > > ------------------------------____----------------------------__--__--------------- > > > > > > -- > > > ------------------------------____----------------------------__--__------------- > Deutsches Forschungszentrum f?r K?nstliche > Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster > (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > > ------------------------------____----------------------------__--__--------------- > > _________________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > > > > > https://lists.fi-ware.eu/__listinfo/fiware-miwi > > > > > > -- > > ------------------------------__------------------------------__------------- > Deutsches Forschungszentrum f?r K?nstliche Intelligenz > (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > ------------------------------__------------------------------__--------------- > > > > > > -- > > ------------------------------__------------------------------__------------- > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > ------------------------------__------------------------------__--------------- > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... 
Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL:

From toni at playsign.net Wed Nov 6 08:26:55 2013 From: toni at playsign.net (Toni Alatalo) Date: Wed, 6 Nov 2013 09:26:55 +0200 Subject: [Fiware-miwi] xml3d.js question: knowing when a material is ready? Message-ID:

Hi, another requirement for / question about xml3d.js from the scene scalability work:

Question: Could we already know somehow in xml3d.js when a material that uses textures is ready for display?

Case: We have proceeded to levels-of-detail work in the scalability effort for the city model and have now first tested the idea that more remote city blocks are loaded without (full) textures first, and then, if the view gets nearer, the material with the high detail textures is switched in for display. It is good to do that only when the material is fully ready for display, i.e. the texture(s) for it are loaded from the net and all the way to the graphics memory etc.

Three.js does not readily provide that, so in the current implementation we need to track the loading ourselves -- WebRocket does something similar, probably close to how it works in native Tundra, where the Asset manager tracks the loading of dependencies and informs about completions.

So this is to a) inform about yet another requirement discovered in this work and b) ask about the current state of that in xml3d.js.

Cheers, ~Toni

From mach at zhaw.ch Wed Nov 6 09:47:40 2013 From: mach at zhaw.ch (Christof Marti) Date: Wed, 6 Nov 2013 09:47:40 +0100 Subject: [Fiware-miwi] todays WP13 meeting Message-ID: <73587B09-205B-48F4-BE2B-02A714EFC191@zhaw.ch>

Hi everybody

I'm still suffering from a flu I caught last week, which knocked me out from Friday on. I am now slowly recovering, trying to catch up on some work today and to close some points I still owe you. I prepared an agenda/minutes document for today's meeting: https://docs.google.com/document/d/1NPE7DeWj8z9SAIc_Y7moUv47vmcE8llBRpxrAudCNtY/edit#

I will open the telco at 10:00 CET. It will probably be short: a sync and some final preparations for next Monday. Thanks.

Best regards Christof

---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office: TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti

From lachsen at cg.uni-saarland.de Wed Nov 6 10:01:17 2013 From: lachsen at cg.uni-saarland.de (Felix Klein) Date: Wed, 6 Nov 2013 10:01:17 +0100 Subject: [Fiware-miwi] xml3d.js question: knowing when a material is ready? In-Reply-To: References: Message-ID:

Hi Toni,

XML3D already works in such a way that meshes are displayed before all connected textures are fully loaded. If the textures are not available, the shader code is adapted to work without textures. It is possible to create custom shaders that work elegantly both with and without textures.

What XML3D does not do is schedule the loading of resources in any specific way. Currently it works like this: when a or is hooked into the document, the whole connected subtree is loaded with all its textures. You can influence the loading by only hooking in content that should be loaded.

It is currently not possible to query whether all textures inside a , or element have been loaded, but we have this state internally and it would be easy to implement such an interface.
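Until such a query interface exists, the manual tracking Toni describes on the three.js side can be as simple as counting image loads; a minimal sketch (file names and the callback are illustrative, and note this only covers the download, not the upload to graphics memory that Toni also mentions):

    // Count pending texture images ourselves and fire a callback once all
    // of them have arrived. Illustrative only.
    function loadTextures(urls, onAllReady) {
        var pending = urls.length;
        return urls.map(function (url) {
            var img = new Image();
            img.onload = img.onerror = function () {
                pending -= 1;
                if (pending === 0) { onAllReady(); }
            };
            img.src = url;
            return img;
        });
    }

    loadTextures(['block42_diffuse.jpg', 'block42_normal.jpg'], function () {
        // Only now switch the far-away city block to its high-detail material.
    });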
Since incremental loading seems more and more important for 3D applications, I actually propose a system to handle this more generically in XML3D. I have a system in mind that is flexible and can be controlled from the application (which I think is important). The basic idea is, that elements (or any other DataContaine) can be annotated with a loadPriority attribute. The higher the value, the less important all resources inside the element are. We can first use this attributes to determine the ordner in which to load resources. On top of that, we can annotate and element with the "loadUntil" attribute (or something like that). If this attribute is specified, only conent with a loadPriority smaller or equal to "loadUntil" will be loaded. Such a system can be used by your application in the following way: - You'd annotate all connected textures with a loadPriority of, say, "5". - Mesh and shader element close by get annotated with a "loadUntil" value of 5, such that everything including textures is loaded - Mesh and shader element far away get annotated with a "loadUntil" value lower than 5. In that case, texture won't get loaded This is just an idea, though. Bye Felix On Wed, Nov 6, 2013 at 8:26 AM, Toni Alatalo wrote: > Hi, another requirement for / question about xml3d.js from the scene > scalability work: > > Question: > Could we already know somehow in xml3d.js when a material, that uses > textures, is ready for display? > > Case: > We have proceeded to levels-of-detail work in the scalability effort for > the city model and have now first tested the idea that more remote city > blocks are loaded without (full) textures first, and then if the view gets > nearer, the material with the high detail textures is switched to display. > It is good to do that only when the material is fully ready for display, > i.e. the texture(s) for it are loaded from the net and all the way to the > graphics memory etc. > > Three.js does not readily provide that so in the current implementation we > need to track the loading ourselves ? WebRocket does something similar, > probably close to how it works in native Tundra where the Asset manager > tracks the loading of dependencies and informs about completions. > > So this is to a) inform about yet another requirement discovered in this > work and b) ask about the current state of that in xml3d.js. > > Cheers, > ~Toni > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni at playsign.net Wed Nov 6 12:06:05 2013 From: toni at playsign.net (Toni Alatalo) Date: Wed, 6 Nov 2013 13:06:05 +0200 Subject: [Fiware-miwi] DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps) In-Reply-To: <5276153C.2020704@dfki.de> References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> <5276153C.2020704@dfki.de> Message-ID: Hi returning to this as it?s still unclear for me and we need to implement this for integrating the client side of the synchronisation (that Lasse is working on) & the rest of the client: On 03 Nov 2013, at 11:19, Philipp Slusallek wrote: > No, we would still have to maintain the scene representation in the DOM. However, we can use the specialized access functions (not the string-valued attributes) to access the functionality of a (XML3D) DOM node much more efficiently than having to parse strings. 
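To make the contrast concrete (the element id is illustrative, the typed read follows the same pattern as Torsten's rotation example quoted further down, and the rotation attribute is assumed to use XML3D's usual axis-angle layout):

    var t = document.getElementById('my_transformation');

    // Generic DOM path: every read/write round-trips through a string.
    t.setAttribute('rotation', '0 1 0 1.57');
    var parsed = t.getAttribute('rotation').split(' ').map(Number);

    // Specialized XML3D accessor: typed objects, no string parsing involved.
    var r = t.rotation;                          // XML3DRotation
    console.log(r.axis.x, r.axis.y, r.axis.z, r.angle);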
Torstens example with the rotation is a good example of such a specialized interface of an (XML3D) DOM node. Yes I think everyone agrees that the representation is good to have there (also) but the question is indeed about efficient ways of modification. Question: are those specialised access functions the same thing to what Kristian refers to in this quote from September 2nd (Re: [Fiware-miwi] massive DOM manipulation benchmark)? I think not as you & Torsten talk about access to DOM elements whereas Kristian talks about something not in the DOM, or? "The DOM is meant as in interface to the user. That's how we designed XML3D. All medium-range modification (a few hundereds per frame) are okay. One must not forget that users will use jQuery etc, which can -- used the wrong way -- slow down the whole application (not only for 3D content). For all other operations, we have concept like Xflow, where the calculation is hidden behind the DOM interface. Also the rendering is also hidden behind the DOM structure. We even have our own memory management to get a better JS performance.? I?m referring to the parts about ?hidden behind the DOM?, used by XFlow and rendering. Do those use and modify some non-DOM data structures which xml3d.js has for the scene internally? We are fine with reading existing docs or even just source code for these things, you don?t have to explain everything in the emails here, but yes/no answers & pointers to more information (e.g. to relevant code files on github) would be very helpful. So now in particular I?m figuring out whether that kind of ?hidden? / non-DOM interface would be the suitable one for network synchronisation to use. I know that changes coming over the net are not usually *that* much, typically within the max hundreds (and usually much less) for which Kristian states that going via DOM manipulation is fine, but there can be quite large bursts (e.g. at login to create the whole scene, or when some logic changes a big part of it). And there are many consumers for the cpu time available in the browser main thread so is perhaps good to avoid wasting even a little of it in continuous movement update handling etc. And in any case it?s good for us to know and understand how the DOM interfacing works ? even if it turns out the efficient alternative is not necessary for networking. In the current WebTundras we have the same structure as in the native Tundra, i.e. there are normal software objects (non-DOM) for the entity-system level entities & components, and the 3d visual ones of those are proxies for the concrete implementations of scene nodes and meshes etc. in Three.js / Ogre respectively. And the experimental DOM-integration that Jonne made in WebRocket then mirrors that JS EC structure to the DOM periodically. These two old client architecture sketch diagrams illustrate the options: a) net sync updates DOM, rendering gets updates via DOM: https://rawgithub.com/realXtend/doc/master/dom/rexdom.svg b) net sync updates JS objects, optimised batched sync updates DOM: https://rawgithub.com/realXtend/doc/master/dom/rexdom-dom_as_ui.svg Until now I thought that we settled on b) back then in early September as I understood that it?s what you also do & recommend (and that?s what WebTundras have been doing so far). 
> Philipp ~Toni > Am 31.10.2013 10:42, schrieb Toni Alatalo: >> On 31 Oct 2013, at 11:23, Torsten Spieldenner >> > wrote: >>> On top the capabilities of the DOM API and additional powers of >>> sophisticated JavaScript-libraries, XML3D introduces an API extension >>> by its own to provide a convenient way to access the DOM elements as >>> XML3D-Elements, for example retrieving translation as XML3DVec3 or >>> Rotation as XML3DRotation (for example, to retrieve the rotation part >>> of an XML3D transformation, you can do this by using jQuery to query >>> the transformation node from the DOM, and access the rotation there >>> then: var r = $("#my_transformation").rotation). >> >> What confuses me here is: >> >> earlier it was concluded that ?the DOM is the UI?, I understood meaning >> how it works for people to >> >> a) author apps ? e.g. declare that oulu3d scene and reX avatar & chat >> apps are used in my html, along this nice christmas themed thing I just >> created (like txml is used in reX now) >> >> b) see and manipulate the state in the browser view-source & developer / >> debugger DOM views (like the Scene Structure editor in Tundra) >> >> c) (something else that escaped me now) >> >> Anyhow the point being that intensive manipulations such as creating and >> manipulating tens of thousands of entities are not done via it. This was >> the response to our initial ?massive dom manipulation? perf test. >> Manipulating transformation is a typical example where that happens ? I >> know that declarative ways can often be a good way to deal with e.g. >> moving objects, like the PhysicsMotor in Tundra and I think what XFlow >> (targets to) cover(s) too, but not always nor for everything so I think >> the point is still valid. >> >> So do you use a different API for heavy tasks and the DOM for other >> things or how does it go? >> >> ~Toni >> >>>> If we think that XML3D (or the DOM and XML3D acts on those manipulations) >>>> is already this perfect API I'm not sure what we are even trying to >>>> accomplish here? If we are not building a nice to use 3D SDK whats the >>>> target here? >>> I totally agree that we still need to build this easily programmable >>> 3D SDK. But XML3D makes it very simple to maintain the 3D scene in the >>> DOM according to the scene state of the application. >>> You may want to have a look at our example web client for our FiVES >>> server (https://github.com/rryk/FiVES). Although I admit that the code >>> needs some refactoring, the example of how entities are created shows >>> this nicely : As soon as you create a new Entity object, the DOM >>> representation of its scenegraph and its transformations are created >>> automatically and maintained as View of the entity model. As >>> developer, you only need to operate on the client application's API. >>> This could be an example, of how an SDK could operate on the XML3D >>> representation of the scene. >>> >>> >>> ~ Torsten >>> >>>> On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < >>>> Philipp.Slusallek at dfki.de> wrote: >>>> >>>>> Hi Jonne, all, >>>>> >>>>> I am not sure that applying the Tudra API in the Web context is really the >>>>> right approach. One of the key differences is that we already have a >>>>> central "scene" data structure and it already handles rendering and input >>>>> (DOM events), and other aspects. 
Also an API oriented approach may not be >>>>> the best option in this declarative context either (even though I >>>>> understands that it feels more natural when coming from C++, I had the same >>>>> issues). >>>>> >>>>> So let me be a bit more specific: >>>>> >>>>> -- Network: So, yes we need a network module. It's not something that >>>>> "lives" in the DOM but rather watches it and sends updates to the server to >>>>> achieve sync. >>>>> >>>>> -- Renderer: Why do we need an object here. Its part of the DOM model. The >>>>> only aspect is that we may want to set renderer-specific parameters. We >>>>> currently do so through the DOM element, which seems like a good >>>>> approach. The issues to be discussed here is what would be the advantages >>>>> of a three.js based renderer and implement it of really needed. >>>>> >>>>> -- Scene: This can be done in the DOM nicely and with WebComponents its >>>>> even more elegant. The scene objects are simple part of the same DOM but >>>>> only some of them get rendered. I am not even sure that we need here in >>>>> addition to the DOM and suitable mappings for the components. >>>>> >>>>> -- Asset: As you say this is already built-into the XML3D DOM. I see it a >>>>> bit like the network system in that it watches missing resources in the DOM >>>>> (plus attributes on priotity and such?) and implements a sort of scheduler >>>>> excutes requests in some priority order. A version that only loads missing >>>>> resources if is already available, one that goes even further and deletes >>>>> unneeded resources could probably be ported from your resource manager. >>>>> >>>>> -- UI: That is why we are building on top of HTML, which is a pretty good >>>>> UI layer in many requests. We have the 2D-UI GE to look into missing >>>>> functionality >>>>> >>>>> -- Input: This also is already built in as the DOM as events traverse the >>>>> DOM. It is widely used in all WEB based UIs and has proven quite useful >>>>> there. Here we can nicely combine it with the 3D scene model where events >>>>> are not only delivered to the 3D graphics elements but can be handled by >>>>> the elements or components even before that. >>>>> >>>>> But maybe I am missunderstanding you here? >>>>> >>>>> >>>>> Best, >>>>> >>>>> Philipp >>>>> >>>>> >>>>> Am 30.10.2013 14:31, schrieb Jonne Nauha: >>>>> >>>>>> var client = >>>>>> { >>>>>> network : Object, // Network sync, connect, disconnect etc. >>>>>> functionality. >>>>>> // Implemented by scene sync GE (Ludocraft). >>>>>> >>>>>> renderer : Object, // API for 3D rendering engine access, creating >>>>>> scene nodes, updating their transforms, raycasting etc. >>>>>> // Implemented by 3D UI (Playsign). >>>>>> >>>>>> scene : Object, // API for accessing the >>>>>> Entity-Component-Attribute model. >>>>>> // Implemented by ??? >>>>>> >>>>>> asset : Object, // Not strictly necessary for xml3d as it does >>>>>> asset requests for us, but for three.js this is pretty much needed. >>>>>> // Implemented by ??? >>>>>> >>>>>> ui : Object, // API to add/remove widgets correctly on top >>>>>> of the 3D rendering canvas element, window resize events etc. >>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>> >>>>>> input : Object // API to hook to input events occurring on top >>>>>> of the 3D scene. >>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>> }; >>>>>> >>>>>> >>>>>> Best regards, >>>>>> Jonne Nauha >>>>>> Meshmoon developer at Adminotech Ltd. 
>>>>>> www.meshmoon.com >>>>>> >>>>>> >>>>>> >>>>>> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo >>>>> > wrote: >>>>>> >>>>>> Hi again, >>>>>> new angle here: calling devs *outside* the 3D UI GE: POIs, >>>>>> real-virtual interaction, interface designer, virtual characters, 3d >>>>>> capture, synchronization etc. >>>>>> I think we need to proceed rapidly with integration now and propose >>>>>> that one next step towards that is to analyze the interfaces between >>>>>> 3D UI and other GEs. This is because it seems to be a central part >>>>>> with which many others interface: that is evident in the old >>>>>> 'arch.png' where we analyzed GE/Epic interdependencies: is embedded >>>>>> in section 2 in the Winterthur arch discussion notes which hopefully >>>>>> works for everyone to see, >>>>>> https://docs.google.com/**document/d/**1Sr4rg44yGxK8jj6yBsayCwfitZTq5 >>>>>> **Cdyyb_xC25vhhE/edit >>>>>> I propose a process where we go through the usage patterns case by >>>>>> case. For example so that me & Erno visit the other devs to discuss >>>>>> it. I think a good goal for those sessions is to define and plan the >>>>>> implementation of first tests / minimal use cases where the other >>>>>> GEs are used together with 3D UI to show something. I'd like this >>>>>> first pass to happen quickly so that within 2 weeks from the >>>>>> planning the first case is implemented. So if we get to have the >>>>>> sessions within 2 weeks from now, in a month we'd have demos with >>>>>> all parts. >>>>>> Let's organize this so that those who think this applies to their >>>>>> work contact me with private email (to not spam the list), we meet >>>>>> and collect the notes to the wiki and inform this list about that. >>>>>> One question of particular interest to me here is: can the users of >>>>>> 3D UI do what they need well on the entity system level (for example >>>>>> just add and configure mesh components), or do they need deeper >>>>>> access to the 3d scene and rendering (spatial queries, somehow >>>>>> affect the rendering pipeline etc). With Tundra we have the >>>>>> Scene API and the (Ogre)World API(s) to support the latter, and also >>>>>> access to the renderer directly. OTOH the entity system level is >>>>>> renderer independent. >>>>>> Synchronization is a special case which requires good two-way >>>>>> integration with 3D UI. Luckily it's something that we and >>>>>> especially Lasse himself knows already from how it works in Tundra >>>>>> (and in WebTundras). Definitely to be discussed and planned now too >>>>>> of course. >>>>>> So please if you agree that this is a good process do raise hands >>>>>> and let's start working on it! We can discuss this in the weekly too >>>>>> if needed. >>>>>> Cheers, >>>>>> ~Toni >>>>>> >>>>>> ______________________________**_________________ >>>>>> Fiware-miwi mailing list >>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> ______________________________**_________________ >>>>>> Fiware-miwi mailing list >>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>> >>>>>> >>>>> -- >>>>> >>>>> ------------------------------**------------------------------** >>>>> ------------- >>>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>> >>>>> Gesch?ftsf?hrung: >>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>> Dr. 
Walter Olthoff >>>>> Vorsitzender des Aufsichtsrats: >>>>> Prof. Dr. h.c. Hans A. Aukes >>>>> >>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>> ------------------------------**------------------------------** >>>>> --------------- >>>>> >>>> >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ari.okkonen at cie.fi Thu Nov 7 12:46:33 2013 From: ari.okkonen at cie.fi (Ari Okkonen CIE) Date: Thu, 07 Nov 2013 13:46:33 +0200 Subject: [Fiware-miwi] Participation to FI-WARE at Oulu and to the meeting dinner Message-ID: <527B7D99.8090905@cie.fi> Hi All, To ensure refreshments and lunch, please register to the agenda page https://www.google.com/url?q=https%3A%2F%2Fdocs.google.com%2Fdocument%2Fd%2F1UnrOgC5Btyn6AOEGdM6xu4M8itEvssJ4lXHNkrcLzG4%2Fedit Also, please, report me on Friday if you have special food requirements. We have reserved tables for dinner from a local restaurant for Monday evening starting at 20:00. If you are NOT going to participate the dinner, please, send me an email as soon as possible! Best Regards Ari Okkonen -- Ari Okkonen CIE, University of Oulu From mach at zhaw.ch Thu Nov 7 14:33:53 2013 From: mach at zhaw.ch (Christof Marti) Date: Thu, 7 Nov 2013 14:33:53 +0100 Subject: [Fiware-miwi] WP13 progress report (review) Message-ID: Hi everybody I finally completed the integration of all your contributions to the WP13 periodic report. I tried to bring it in a consistent form. Please check the entries for your company. Feedback also to the overall WP13 part is also welcome. Please keep the ?change modification marks? active so I can follow your changes. Deadline for feedback is until tomorrow noon (12 CET). Best regards Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office:TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti -------------- next part -------------- A non-text attachment was scrubbed... 
Name: D.1.2.5 - WP13-V1.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 258987 bytes Desc: not available URL: From mach at zhaw.ch Thu Nov 7 21:05:17 2013 From: mach at zhaw.ch (Christof Marti) Date: Thu, 7 Nov 2013 21:05:17 +0100 Subject: [Fiware-miwi] WP13 progress report (review) In-Reply-To: References: Message-ID: <97C5FB41-F220-446C-8316-DEE2ADF43105@zhaw.ch> Sorry folks, this message was sent this morning but was blocked in the mailing list because the attachment was too large and I did not realize it until now. (I have now increased the max. message size for the miwi mailing list to 1 MB.) It would be great if you could check your part and give feedback asap, because Javier is urgently waiting for the final version. Cheers, Christof Am 07.11.2013 um 14:33 schrieb Marti Christof (mach) : > Hi everybody > > I have finally completed the integration of all your contributions into the WP13 periodic report. > I tried to bring it into a consistent form. > > Please check the entries for your company. > Feedback on the overall WP13 part is also welcome. > Please keep the "track changes" marks active so I can follow your changes. > > Deadline for feedback is tomorrow noon (12:00 CET). > > > Best regards > Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institute of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > P.O.Box, CH-8401 Winterthur > Office: TD O3.18, Obere Kirchgasse 2 > Phone: +41 58 934 70 63 > Mail: mach at zhaw.ch > Skype: christof-marti > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni at playsign.net Fri Nov 8 07:47:00 2013 From: toni at playsign.net (Toni Alatalo) Date: Fri, 8 Nov 2013 08:47:00 +0200 Subject: [Fiware-miwi] Entity System Usage from UI Designer (Re: DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps)) In-Reply-To: References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> <5276153C.2020704@dfki.de> Message-ID: A specific question for the UI Designer / scene builder team at Admino: Your current implementation is against the reX entity system as JS objects (the WebRocket scene implementation), i.e. diagram b) in the previous post. Would the editor be readily implementable directly against the DOM instead, i.e. the option in diagram a)? I think a key point is attribute metadata, for example the type and valid value range, and perhaps handling attribute types such as transform and color with special editing widgets (transform gizmo, color palette selector) etc. Do you think all that could work nicely when using the DOM directly, perhaps by having the type information in the WebComponent definitions? (You are the experts on this too, thanks to the 2D UI work.) Or does the editor in practice require the kind of entity system with attribute objects which you are using currently? We can discuss this at the office or in the Monday meeting, but I figured to post the question here anyhow as the discussion has been here otherwise and there are no comments yet. I think the UI Designer as a user of the entity system is good to analyse now as it is already fairly complete, while other users such as synchronisation are not as far yet.
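For illustration, something like the following is roughly what I have in mind. This is only a sketch: the door-element tag and the metadata format are made up and do not exist anywhere yet; it just shows how an editor could build its widgets by introspecting declared metadata plus the (string-valued) attributes straight from the DOM:

// Hypothetical attribute metadata that a door-element WebComponent could declare.
var doorAttributeMeta = {
    'transform': { type: 'transform', widget: 'gizmo' },
    'color':     { type: 'color',     widget: 'palette', 'default': '#ffffff' },
    'angle':     { type: 'float',     widget: 'slider', min: 0, max: 120 }
};

// Editor-side helper: pair the declared metadata with the current attribute
// values read directly from the DOM element (attribute values are always strings).
function inspectElement(el, meta) {
    return Object.keys(meta).map(function (name) {
        return { name: name, info: meta[name], value: el.getAttribute(name) };
    });
}

// e.g. inspectElement(document.querySelector('door-element'), doorAttributeMeta)
// could then drive a transform gizmo, a colour palette and a slider widget.

The open question is whether that kind of metadata could live in the WebComponent definition itself, or whether the editor still needs a separate layer of attribute objects like in the current entity system.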
Cheers, ~Toni On 06 Nov 2013, at 13:06, Toni Alatalo wrote: > Hi returning to this as it?s still unclear for me and we need to implement this for integrating the client side of the synchronisation (that Lasse is working on) & the rest of the client: > > On 03 Nov 2013, at 11:19, Philipp Slusallek wrote: >> No, we would still have to maintain the scene representation in the DOM. However, we can use the specialized access functions (not the string-valued attributes) to access the functionality of a (XML3D) DOM node much more efficiently than having to parse strings. Torstens example with the rotation is a good example of such a specialized interface of an (XML3D) DOM node. > > Yes I think everyone agrees that the representation is good to have there (also) but the question is indeed about efficient ways of modification. > > Question: are those specialised access functions the same thing to what Kristian refers to in this quote from September 2nd (Re: [Fiware-miwi] massive DOM manipulation benchmark)? I think not as you & Torsten talk about access to DOM elements whereas Kristian talks about something not in the DOM, or? > > "The DOM is meant as in interface to the user. That's how we designed XML3D. All medium-range modification (a few hundereds per frame) are okay. One must not forget that users will use jQuery etc, which can -- used the wrong way -- slow down the whole application (not only for 3D content). For all other operations, we have concept like Xflow, where the calculation is hidden behind the DOM interface. Also the rendering is also hidden behind the DOM structure. We even have our own memory management to get a better JS performance.? > > I?m referring to the parts about ?hidden behind the DOM?, used by XFlow and rendering. > > Do those use and modify some non-DOM data structures which xml3d.js has for the scene internally? > > We are fine with reading existing docs or even just source code for these things, you don?t have to explain everything in the emails here, but yes/no answers & pointers to more information (e.g. to relevant code files on github) would be very helpful. > > So now in particular I?m figuring out whether that kind of ?hidden? / non-DOM interface would be the suitable one for network synchronisation to use. > > I know that changes coming over the net are not usually *that* much, typically within the max hundreds (and usually much less) for which Kristian states that going via DOM manipulation is fine, but there can be quite large bursts (e.g. at login to create the whole scene, or when some logic changes a big part of it). And there are many consumers for the cpu time available in the browser main thread so is perhaps good to avoid wasting even a little of it in continuous movement update handling etc. And in any case it?s good for us to know and understand how the DOM interfacing works ? even if it turns out the efficient alternative is not necessary for networking. > > In the current WebTundras we have the same structure as in the native Tundra, i.e. there are normal software objects (non-DOM) for the entity-system level entities & components, and the 3d visual ones of those are proxies for the concrete implementations of scene nodes and meshes etc. in Three.js / Ogre respectively. And the experimental DOM-integration that Jonne made in WebRocket then mirrors that JS EC structure to the DOM periodically. 
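As an aside, the periodic mirroring mentioned above could look roughly like the sketch below; the entity/component names and the dirty-set bookkeeping are illustrative assumptions, not the actual WebRocket code:

// Option b): network sync writes into plain JS entity objects, and a separate
// low-frequency pass mirrors the dirty entities into the DOM.
var dirtyEntities = {};

function onNetworkTransformUpdate(entity, x, y, z) {
    entity.placeable.position = [x, y, z]; // cheap JS object update
    dirtyEntities[entity.id] = entity;     // remember it for the DOM pass
}

// Called e.g. a few times per second, not once per network message.
function mirrorDirtyEntitiesToDOM() {
    Object.keys(dirtyEntities).forEach(function (id) {
        var e = dirtyEntities[id];
        var el = document.getElementById('entity_' + id);
        if (el) {
            el.setAttribute('position', e.placeable.position.join(' '));
        }
    });
    dirtyEntities = {};
}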
> > These two old client architecture sketch diagrams illustrate the options: > > a) net sync updates DOM, rendering gets updates via DOM: > https://rawgithub.com/realXtend/doc/master/dom/rexdom.svg > > b) net sync updates JS objects, optimised batched sync updates DOM: > https://rawgithub.com/realXtend/doc/master/dom/rexdom-dom_as_ui.svg > > Until now I thought that we settled on b) back then in early September as I understood that it?s what you also do & recommend (and that?s what WebTundras have been doing so far). > >> Philipp > > ~Toni > >> Am 31.10.2013 10:42, schrieb Toni Alatalo: >>> On 31 Oct 2013, at 11:23, Torsten Spieldenner >>> > wrote: >>>> On top the capabilities of the DOM API and additional powers of >>>> sophisticated JavaScript-libraries, XML3D introduces an API extension >>>> by its own to provide a convenient way to access the DOM elements as >>>> XML3D-Elements, for example retrieving translation as XML3DVec3 or >>>> Rotation as XML3DRotation (for example, to retrieve the rotation part >>>> of an XML3D transformation, you can do this by using jQuery to query >>>> the transformation node from the DOM, and access the rotation there >>>> then: var r = $("#my_transformation").rotation). >>> >>> What confuses me here is: >>> >>> earlier it was concluded that ?the DOM is the UI?, I understood meaning >>> how it works for people to >>> >>> a) author apps ? e.g. declare that oulu3d scene and reX avatar & chat >>> apps are used in my html, along this nice christmas themed thing I just >>> created (like txml is used in reX now) >>> >>> b) see and manipulate the state in the browser view-source & developer / >>> debugger DOM views (like the Scene Structure editor in Tundra) >>> >>> c) (something else that escaped me now) >>> >>> Anyhow the point being that intensive manipulations such as creating and >>> manipulating tens of thousands of entities are not done via it. This was >>> the response to our initial ?massive dom manipulation? perf test. >>> Manipulating transformation is a typical example where that happens ? I >>> know that declarative ways can often be a good way to deal with e.g. >>> moving objects, like the PhysicsMotor in Tundra and I think what XFlow >>> (targets to) cover(s) too, but not always nor for everything so I think >>> the point is still valid. >>> >>> So do you use a different API for heavy tasks and the DOM for other >>> things or how does it go? >>> >>> ~Toni >>> >>>>> If we think that XML3D (or the DOM and XML3D acts on those manipulations) >>>>> is already this perfect API I'm not sure what we are even trying to >>>>> accomplish here? If we are not building a nice to use 3D SDK whats the >>>>> target here? >>>> I totally agree that we still need to build this easily programmable >>>> 3D SDK. But XML3D makes it very simple to maintain the 3D scene in the >>>> DOM according to the scene state of the application. >>>> You may want to have a look at our example web client for our FiVES >>>> server (https://github.com/rryk/FiVES). Although I admit that the code >>>> needs some refactoring, the example of how entities are created shows >>>> this nicely : As soon as you create a new Entity object, the DOM >>>> representation of its scenegraph and its transformations are created >>>> automatically and maintained as View of the entity model. As >>>> developer, you only need to operate on the client application's API. >>>> This could be an example, of how an SDK could operate on the XML3D >>>> representation of the scene. 
>>>> >>>> >>>> ~ Torsten >>>> >>>>> On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < >>>>> Philipp.Slusallek at dfki.de> wrote: >>>>> >>>>>> Hi Jonne, all, >>>>>> >>>>>> I am not sure that applying the Tudra API in the Web context is really the >>>>>> right approach. One of the key differences is that we already have a >>>>>> central "scene" data structure and it already handles rendering and input >>>>>> (DOM events), and other aspects. Also an API oriented approach may not be >>>>>> the best option in this declarative context either (even though I >>>>>> understands that it feels more natural when coming from C++, I had the same >>>>>> issues). >>>>>> >>>>>> So let me be a bit more specific: >>>>>> >>>>>> -- Network: So, yes we need a network module. It's not something that >>>>>> "lives" in the DOM but rather watches it and sends updates to the server to >>>>>> achieve sync. >>>>>> >>>>>> -- Renderer: Why do we need an object here. Its part of the DOM model. The >>>>>> only aspect is that we may want to set renderer-specific parameters. We >>>>>> currently do so through the DOM element, which seems like a good >>>>>> approach. The issues to be discussed here is what would be the advantages >>>>>> of a three.js based renderer and implement it of really needed. >>>>>> >>>>>> -- Scene: This can be done in the DOM nicely and with WebComponents its >>>>>> even more elegant. The scene objects are simple part of the same DOM but >>>>>> only some of them get rendered. I am not even sure that we need here in >>>>>> addition to the DOM and suitable mappings for the components. >>>>>> >>>>>> -- Asset: As you say this is already built-into the XML3D DOM. I see it a >>>>>> bit like the network system in that it watches missing resources in the DOM >>>>>> (plus attributes on priotity and such?) and implements a sort of scheduler >>>>>> excutes requests in some priority order. A version that only loads missing >>>>>> resources if is already available, one that goes even further and deletes >>>>>> unneeded resources could probably be ported from your resource manager. >>>>>> >>>>>> -- UI: That is why we are building on top of HTML, which is a pretty good >>>>>> UI layer in many requests. We have the 2D-UI GE to look into missing >>>>>> functionality >>>>>> >>>>>> -- Input: This also is already built in as the DOM as events traverse the >>>>>> DOM. It is widely used in all WEB based UIs and has proven quite useful >>>>>> there. Here we can nicely combine it with the 3D scene model where events >>>>>> are not only delivered to the 3D graphics elements but can be handled by >>>>>> the elements or components even before that. >>>>>> >>>>>> But maybe I am missunderstanding you here? >>>>>> >>>>>> >>>>>> Best, >>>>>> >>>>>> Philipp >>>>>> >>>>>> >>>>>> Am 30.10.2013 14:31, schrieb Jonne Nauha: >>>>>> >>>>>>> var client = >>>>>>> { >>>>>>> network : Object, // Network sync, connect, disconnect etc. >>>>>>> functionality. >>>>>>> // Implemented by scene sync GE (Ludocraft). >>>>>>> >>>>>>> renderer : Object, // API for 3D rendering engine access, creating >>>>>>> scene nodes, updating their transforms, raycasting etc. >>>>>>> // Implemented by 3D UI (Playsign). >>>>>>> >>>>>>> scene : Object, // API for accessing the >>>>>>> Entity-Component-Attribute model. >>>>>>> // Implemented by ??? >>>>>>> >>>>>>> asset : Object, // Not strictly necessary for xml3d as it does >>>>>>> asset requests for us, but for three.js this is pretty much needed. >>>>>>> // Implemented by ??? 
>>>>>>> >>>>>>> ui : Object, // API to add/remove widgets correctly on top >>>>>>> of the 3D rendering canvas element, window resize events etc. >>>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>>> >>>>>>> input : Object // API to hook to input events occurring on top >>>>>>> of the 3D scene. >>>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>>> }; >>>>>>> >>>>>>> >>>>>>> Best regards, >>>>>>> Jonne Nauha >>>>>>> Meshmoon developer at Adminotech Ltd. >>>>>>> www.meshmoon.com >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo >>>>>> > wrote: >>>>>>> >>>>>>> Hi again, >>>>>>> new angle here: calling devs *outside* the 3D UI GE: POIs, >>>>>>> real-virtual interaction, interface designer, virtual characters, 3d >>>>>>> capture, synchronization etc. >>>>>>> I think we need to proceed rapidly with integration now and propose >>>>>>> that one next step towards that is to analyze the interfaces between >>>>>>> 3D UI and other GEs. This is because it seems to be a central part >>>>>>> with which many others interface: that is evident in the old >>>>>>> 'arch.png' where we analyzed GE/Epic interdependencies: is embedded >>>>>>> in section 2 in the Winterthur arch discussion notes which hopefully >>>>>>> works for everyone to see, >>>>>>> https://docs.google.com/**document/d/**1Sr4rg44yGxK8jj6yBsayCwfitZTq5 >>>>>>> **Cdyyb_xC25vhhE/edit >>>>>>> I propose a process where we go through the usage patterns case by >>>>>>> case. For example so that me & Erno visit the other devs to discuss >>>>>>> it. I think a good goal for those sessions is to define and plan the >>>>>>> implementation of first tests / minimal use cases where the other >>>>>>> GEs are used together with 3D UI to show something. I'd like this >>>>>>> first pass to happen quickly so that within 2 weeks from the >>>>>>> planning the first case is implemented. So if we get to have the >>>>>>> sessions within 2 weeks from now, in a month we'd have demos with >>>>>>> all parts. >>>>>>> Let's organize this so that those who think this applies to their >>>>>>> work contact me with private email (to not spam the list), we meet >>>>>>> and collect the notes to the wiki and inform this list about that. >>>>>>> One question of particular interest to me here is: can the users of >>>>>>> 3D UI do what they need well on the entity system level (for example >>>>>>> just add and configure mesh components), or do they need deeper >>>>>>> access to the 3d scene and rendering (spatial queries, somehow >>>>>>> affect the rendering pipeline etc). With Tundra we have the >>>>>>> Scene API and the (Ogre)World API(s) to support the latter, and also >>>>>>> access to the renderer directly. OTOH the entity system level is >>>>>>> renderer independent. >>>>>>> Synchronization is a special case which requires good two-way >>>>>>> integration with 3D UI. Luckily it's something that we and >>>>>>> especially Lasse himself knows already from how it works in Tundra >>>>>>> (and in WebTundras). Definitely to be discussed and planned now too >>>>>>> of course. >>>>>>> So please if you agree that this is a good process do raise hands >>>>>>> and let's start working on it! We can discuss this in the weekly too >>>>>>> if needed. 
>>>>>>> Cheers, >>>>>>> ~Toni >>>>>>> >>>>>>> ______________________________**_________________ >>>>>>> Fiware-miwi mailing list >>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> ______________________________**_________________ >>>>>>> Fiware-miwi mailing list >>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>> >>>>>>> >>>>>> -- >>>>>> >>>>>> ------------------------------**------------------------------** >>>>>> ------------- >>>>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>>> >>>>>> Gesch?ftsf?hrung: >>>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>>> Dr. Walter Olthoff >>>>>> Vorsitzender des Aufsichtsrats: >>>>>> Prof. Dr. h.c. Hans A. Aukes >>>>>> >>>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>>> ------------------------------**------------------------------** >>>>>> --------------- >>>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >> >> >> -- >> >> ------------------------------------------------------------------------- >> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >> Trippstadter Strasse 122, D-67663 Kaiserslautern >> >> Gesch?ftsf?hrung: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> Dr. Walter Olthoff >> Vorsitzender des Aufsichtsrats: >> Prof. Dr. h.c. Hans A. Aukes >> >> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >> --------------------------------------------------------------------------- >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erno at playsign.net Fri Nov 8 12:17:14 2013 From: erno at playsign.net (Erno Kuusela) Date: Fri, 8 Nov 2013 13:17:14 +0200 Subject: [Fiware-miwi] Xml3drepo access prob Message-ID: <20131108111714.GO47616@ee.oulu.fi> Hello, These URLs from have started asking for passwords, any ideas if we need a user account now, or is the site down/disabled? http://verser2.cs.ucl.ac.uk/xml3drepo/ http://verser2.cs.ucl.ac.uk/xml3drepo/oulu3dlive/?meshformat=json Erno From Philipp.Slusallek at dfki.de Sat Nov 9 06:57:52 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sat, 09 Nov 2013 06:57:52 +0100 Subject: [Fiware-miwi] Xml3drepo access prob In-Reply-To: <20131108111714.GO47616@ee.oulu.fi> References: <20131108111714.GO47616@ee.oulu.fi> Message-ID: <527DCEE0.5000202@dfki.de> Hi, I do not. Kristian? Philipp Am 08.11.2013 12:17, schrieb Erno Kuusela: > Hello, > > These URLs from have started asking for passwords, > any ideas if we need a user account now, or is the site down/disabled? 
> > http://verser2.cs.ucl.ac.uk/xml3drepo/ > http://verser2.cs.ucl.ac.uk/xml3drepo/oulu3dlive/?meshformat=json > > Erno > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Sat Nov 9 07:51:48 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sat, 09 Nov 2013 07:51:48 +0100 Subject: [Fiware-miwi] DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps) In-Reply-To: References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> <5276153C.2020704@dfki.de> Message-ID: <527DDB84.1050202@dfki.de> Hi, Since many of us are traveling, let me take a stab at your questions. Kristian, Torsten, please correct any technical errors. Xflow (which Kristian refers to) is used for real-time animation of characters and similar operations. We have shown real-time animation of more than 25 characters with skeletal animation and skinning running completely in JS (without HW acceleration yet). This should show that XML3D can well be used for real-time changes, even for things like geometry. Xflow works on the internal data representation of XML3D, which is tailored for fast rendering through WebGL (typed arrays and such). These internal data structures are similar to what three.js maintains; there is actually not much difference at this layer. When a frame needs to be rendered, both renderers simply go through this "display list" as efficiently as possible. What Kristian refers to regarding memory management are the issues we encountered with garbage collection in JS implementations. As a result, we allocate large arrays once and manage the data within those arrays ourselves. This has turned out to avoid the quite frequent rendering stalls that occurred whenever the JS garbage collector kicked in. Each of the XML3D elements (e.g. mesh, data) offers JS APIs (should be documented in the Wiki, I hope) to access these internal data structures directly, so other JS code can have equally good access to them. You can (but should not) also go through the text-based DOM attributes, but this will be slow for large data (e.g. vertices). I believe it is still fine to use these interfaces for small things like changing the time for an animation or such. One thing where you have to go through the DOM is creating the DOM objects themselves; there is little we can do about that. Of course, if your modifications are about things that can be well described by Xflow, you ideally should use Xflow and eventually benefit from the HW acceleration that we are implementing, where potentially all the computations would happen on the GPU and not be touched by JS any more. I am not sure what the status of this task is, though. Hope this helps.
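To make the contrast concrete, here is a rough sketch (illustrative only, not the actual xml3d.js code) of the string-attribute path versus a preallocated typed-array pool:

// String path: every update formats text that has to be re-parsed downstream.
function setTranslationViaAttribute(el, x, y, z) {
    el.setAttribute('translation', x + ' ' + y + ' ' + z);
}

// Typed path: one large Float32Array allocated up front, each object owns a
// slice of it; per-frame updates cause no allocation and no string parsing,
// so the garbage collector has nothing to clean up.
var translationPool = new Float32Array(3 * 10000); // room for 10000 objects
function setTranslationInPool(index, x, y, z) {
    var o = index * 3;
    translationPool[o]     = x;
    translationPool[o + 1] = y;
    translationPool[o + 2] = z;
}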
Kristian, Torsten: Feel free to add more detail and corrections. Best, Philipp Am 06.11.2013 12:06, schrieb Toni Alatalo: > Hi returning to this as it?s still unclear for me and we need to > implement this for integrating the client side of the synchronisation > (that Lasse is working on) & the rest of the client: > > On 03 Nov 2013, at 11:19, Philipp Slusallek > wrote: >> No, we would still have to maintain the scene representation in the >> DOM. However, we can use the specialized access functions (not the >> string-valued attributes) to access the functionality of a (XML3D) DOM >> node much more efficiently than having to parse strings. Torstens >> example with the rotation is a good example of such a specialized >> interface of an (XML3D) DOM node. > > Yes I think everyone agrees that the representation is good to have > there (also) but the question is indeed about efficient ways of > modification. > > Question: are those specialised access functions the same thing to what > Kristian refers to in this quote from September 2nd (Re: [Fiware-miwi] > massive DOM manipulation benchmark)? I think not as you & Torsten talk > about access to DOM elements whereas Kristian talks about something not > in the DOM, or? > > "The DOM is meant as in interface to the user. That's how we designed > XML3D. All medium-range modification (a few hundereds per frame) are > okay. One must not forget that users will use jQuery etc, which can -- > used the wrong way -- slow down the whole application (not only for 3D > content). For all other operations, we have concept like Xflow, where > the calculation is hidden behind the DOM interface. Also the rendering > is also hidden behind the DOM structure. We even have our own memory > management to get a better JS performance.? > > I?m referring to the parts about ?hidden behind the DOM?, used by XFlow > and rendering. > > Do those use and modify some non-DOM data structures which xml3d.js has > for the scene internally? > > We are fine with reading existing docs or even just source code for > these things, you don?t have to explain everything in the emails here, > but yes/no answers & pointers to more information (e.g. to relevant code > files on github) would be very helpful. > > So now in particular I?m figuring out whether that kind of ?hidden? / > non-DOM interface would be the suitable one for network synchronisation > to use. > > I know that changes coming over the net are not usually *that* much, > typically within the max hundreds (and usually much less) for which > Kristian states that going via DOM manipulation is fine, but there can > be quite large bursts (e.g. at login to create the whole scene, or when > some logic changes a big part of it). And there are many consumers for > the cpu time available in the browser main thread so is perhaps good to > avoid wasting even a little of it in continuous movement update handling > etc. And in any case it?s good for us to know and understand how the DOM > interfacing works ? even if it turns out the efficient alternative is > not necessary for networking. > > In the current WebTundras we have the same structure as in the native > Tundra, i.e. there are normal software objects (non-DOM) for the > entity-system level entities & components, and the 3d visual ones of > those are proxies for the concrete implementations of scene nodes and > meshes etc. in Three.js / Ogre respectively. And the experimental > DOM-integration that Jonne made in WebRocket then mirrors that JS EC > structure to the DOM periodically. 
> > These two old client architecture sketch diagrams illustrate the options: > > a) net sync updates DOM, rendering gets updates via DOM: > https://rawgithub.com/realXtend/doc/master/dom/rexdom.svg > > b) net sync updates JS objects, optimised batched sync updates DOM: > https://rawgithub.com/realXtend/doc/master/dom/rexdom-dom_as_ui.svg > > Until now I thought that we settled on b) back then in early September > as I understood that it?s what you also do & recommend (and that?s what > WebTundras have been doing so far). > >> Philipp > > ~Toni > >> Am 31.10.2013 10:42, schrieb Toni Alatalo: >>> On 31 Oct 2013, at 11:23, Torsten Spieldenner >>> >> > >>> wrote: >>>> On top the capabilities of the DOM API and additional powers of >>>> sophisticated JavaScript-libraries, XML3D introduces an API extension >>>> by its own to provide a convenient way to access the DOM elements as >>>> XML3D-Elements, for example retrieving translation as XML3DVec3 or >>>> Rotation as XML3DRotation (for example, to retrieve the rotation part >>>> of an XML3D transformation, you can do this by using jQuery to query >>>> the transformation node from the DOM, and access the rotation there >>>> then: var r = $("#my_transformation").rotation). >>> >>> What confuses me here is: >>> >>> earlier it was concluded that ?the DOM is the UI?, I understood meaning >>> how it works for people to >>> >>> a) author apps ? e.g. declare that oulu3d scene and reX avatar & chat >>> apps are used in my html, along this nice christmas themed thing I just >>> created (like txml is used in reX now) >>> >>> b) see and manipulate the state in the browser view-source & developer / >>> debugger DOM views (like the Scene Structure editor in Tundra) >>> >>> c) (something else that escaped me now) >>> >>> Anyhow the point being that intensive manipulations such as creating and >>> manipulating tens of thousands of entities are not done via it. This was >>> the response to our initial ?massive dom manipulation? perf test. >>> Manipulating transformation is a typical example where that happens ? I >>> know that declarative ways can often be a good way to deal with e.g. >>> moving objects, like the PhysicsMotor in Tundra and I think what XFlow >>> (targets to) cover(s) too, but not always nor for everything so I think >>> the point is still valid. >>> >>> So do you use a different API for heavy tasks and the DOM for other >>> things or how does it go? >>> >>> ~Toni >>> >>>>> If we think that XML3D (or the DOM and XML3D acts on those >>>>> manipulations) >>>>> is already this perfect API I'm not sure what we are even trying to >>>>> accomplish here? If we are not building a nice to use 3D SDK whats the >>>>> target here? >>>> I totally agree that we still need to build this easily programmable >>>> 3D SDK. But XML3D makes it very simple to maintain the 3D scene in the >>>> DOM according to the scene state of the application. >>>> You may want to have a look at our example web client for our FiVES >>>> server (https://github.com/rryk/FiVES). Although I admit that the code >>>> needs some refactoring, the example of how entities are created shows >>>> this nicely : As soon as you create a new Entity object, the DOM >>>> representation of its scenegraph and its transformations are created >>>> automatically and maintained as View of the entity model. As >>>> developer, you only need to operate on the client application's API. >>>> This could be an example, of how an SDK could operate on the XML3D >>>> representation of the scene. 
>>>> >>>> >>>> ~ Torsten >>>> >>>>> On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < >>>>> Philipp.Slusallek at dfki.de > wrote: >>>>> >>>>>> Hi Jonne, all, >>>>>> >>>>>> I am not sure that applying the Tudra API in the Web context is >>>>>> really the >>>>>> right approach. One of the key differences is that we already have a >>>>>> central "scene" data structure and it already handles rendering >>>>>> and input >>>>>> (DOM events), and other aspects. Also an API oriented approach may >>>>>> not be >>>>>> the best option in this declarative context either (even though I >>>>>> understands that it feels more natural when coming from C++, I had >>>>>> the same >>>>>> issues). >>>>>> >>>>>> So let me be a bit more specific: >>>>>> >>>>>> -- Network: So, yes we need a network module. It's not something that >>>>>> "lives" in the DOM but rather watches it and sends updates to the >>>>>> server to >>>>>> achieve sync. >>>>>> >>>>>> -- Renderer: Why do we need an object here. Its part of the DOM >>>>>> model. The >>>>>> only aspect is that we may want to set renderer-specific >>>>>> parameters. We >>>>>> currently do so through the DOM element, which seems like >>>>>> a good >>>>>> approach. The issues to be discussed here is what would be the >>>>>> advantages >>>>>> of a three.js based renderer and implement it of really needed. >>>>>> >>>>>> -- Scene: This can be done in the DOM nicely and with >>>>>> WebComponents its >>>>>> even more elegant. The scene objects are simple part of the same >>>>>> DOM but >>>>>> only some of them get rendered. I am not even sure that we need >>>>>> here in >>>>>> addition to the DOM and suitable mappings for the components. >>>>>> >>>>>> -- Asset: As you say this is already built-into the XML3D DOM. I >>>>>> see it a >>>>>> bit like the network system in that it watches missing resources >>>>>> in the DOM >>>>>> (plus attributes on priotity and such?) and implements a sort of >>>>>> scheduler >>>>>> excutes requests in some priority order. A version that only loads >>>>>> missing >>>>>> resources if is already available, one that goes even further and >>>>>> deletes >>>>>> unneeded resources could probably be ported from your resource >>>>>> manager. >>>>>> >>>>>> -- UI: That is why we are building on top of HTML, which is a >>>>>> pretty good >>>>>> UI layer in many requests. We have the 2D-UI GE to look into missing >>>>>> functionality >>>>>> >>>>>> -- Input: This also is already built in as the DOM as events >>>>>> traverse the >>>>>> DOM. It is widely used in all WEB based UIs and has proven quite >>>>>> useful >>>>>> there. Here we can nicely combine it with the 3D scene model where >>>>>> events >>>>>> are not only delivered to the 3D graphics elements but can be >>>>>> handled by >>>>>> the elements or components even before that. >>>>>> >>>>>> But maybe I am missunderstanding you here? >>>>>> >>>>>> >>>>>> Best, >>>>>> >>>>>> Philipp >>>>>> >>>>>> >>>>>> Am 30.10.2013 14:31, schrieb Jonne Nauha: >>>>>> >>>>>>> var client = >>>>>>> { >>>>>>> network : Object, // Network sync, connect, disconnect etc. >>>>>>> functionality. >>>>>>> // Implemented by scene sync GE (Ludocraft). >>>>>>> >>>>>>> renderer : Object, // API for 3D rendering engine access, >>>>>>> creating >>>>>>> scene nodes, updating their transforms, raycasting etc. >>>>>>> // Implemented by 3D UI (Playsign). >>>>>>> >>>>>>> scene : Object, // API for accessing the >>>>>>> Entity-Component-Attribute model. >>>>>>> // Implemented by ??? 
>>>>>>> >>>>>>> asset : Object, // Not strictly necessary for xml3d as >>>>>>> it does >>>>>>> asset requests for us, but for three.js this is pretty much needed. >>>>>>> // Implemented by ??? >>>>>>> >>>>>>> ui : Object, // API to add/remove widgets correctly >>>>>>> on top >>>>>>> of the 3D rendering canvas element, window resize events etc. >>>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>>> >>>>>>> input : Object // API to hook to input events occurring >>>>>>> on top >>>>>>> of the 3D scene. >>>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>>> }; >>>>>>> >>>>>>> >>>>>>> Best regards, >>>>>>> Jonne Nauha >>>>>>> Meshmoon developer at Adminotech Ltd. >>>>>>> www.meshmoon.com >>>>>>> >>>>>> > >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo >>>>>> >>>>>>> > wrote: >>>>>>> >>>>>>> Hi again, >>>>>>> new angle here: calling devs *outside* the 3D UI GE: POIs, >>>>>>> real-virtual interaction, interface designer, virtual >>>>>>> characters, 3d >>>>>>> capture, synchronization etc. >>>>>>> I think we need to proceed rapidly with integration now and >>>>>>> propose >>>>>>> that one next step towards that is to analyze the interfaces >>>>>>> between >>>>>>> 3D UI and other GEs. This is because it seems to be a central >>>>>>> part >>>>>>> with which many others interface: that is evident in the old >>>>>>> 'arch.png' where we analyzed GE/Epic interdependencies: is >>>>>>> embedded >>>>>>> in section 2 in the Winterthur arch discussion notes which >>>>>>> hopefully >>>>>>> works for everyone to see, >>>>>>> https://docs.google.com/**document/d/**1Sr4rg44yGxK8jj6yBsayCwfitZTq5 >>>>>>> **Cdyyb_xC25vhhE/edit >>>>>>> I propose a process where we go through the usage patterns >>>>>>> case by >>>>>>> case. For example so that me & Erno visit the other devs to >>>>>>> discuss >>>>>>> it. I think a good goal for those sessions is to define and >>>>>>> plan the >>>>>>> implementation of first tests / minimal use cases where the other >>>>>>> GEs are used together with 3D UI to show something. I'd like this >>>>>>> first pass to happen quickly so that within 2 weeks from the >>>>>>> planning the first case is implemented. So if we get to have the >>>>>>> sessions within 2 weeks from now, in a month we'd have demos with >>>>>>> all parts. >>>>>>> Let's organize this so that those who think this applies to their >>>>>>> work contact me with private email (to not spam the list), we >>>>>>> meet >>>>>>> and collect the notes to the wiki and inform this list about >>>>>>> that. >>>>>>> One question of particular interest to me here is: can the >>>>>>> users of >>>>>>> 3D UI do what they need well on the entity system level (for >>>>>>> example >>>>>>> just add and configure mesh components), or do they need deeper >>>>>>> access to the 3d scene and rendering (spatial queries, somehow >>>>>>> affect the rendering pipeline etc). With Tundra we have the >>>>>>> Scene API and the (Ogre)World API(s) to support the latter, >>>>>>> and also >>>>>>> access to the renderer directly. OTOH the entity system level is >>>>>>> renderer independent. >>>>>>> Synchronization is a special case which requires good two-way >>>>>>> integration with 3D UI. Luckily it's something that we and >>>>>>> especially Lasse himself knows already from how it works in >>>>>>> Tundra >>>>>>> (and in WebTundras). Definitely to be discussed and planned >>>>>>> now too >>>>>>> of course. >>>>>>> So please if you agree that this is a good process do raise hands >>>>>>> and let's start working on it! 
We can discuss this in the >>>>>>> weekly too >>>>>>> if needed. >>>>>>> Cheers, >>>>>>> ~Toni >>>>>>> >>>>>>> ______________________________**_________________ >>>>>>> Fiware-miwi mailing list >>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>> >>>>>>> >>>>>> >>>>>> > >>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> ______________________________**_________________ >>>>>>> Fiware-miwi mailing list >>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>> >>>>>>> >>>>>> -- >>>>>> >>>>>> ------------------------------**------------------------------** >>>>>> ------------- >>>>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>>> >>>>>> Gesch?ftsf?hrung: >>>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>>> Dr. Walter Olthoff >>>>>> Vorsitzender des Aufsichtsrats: >>>>>> Prof. Dr. h.c. Hans A. Aukes >>>>>> >>>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>>> ------------------------------**------------------------------** >>>>>> --------------- >>>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >> >> >> -- >> >> ------------------------------------------------------------------------- >> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >> Trippstadter Strasse 122, D-67663 Kaiserslautern >> >> Gesch?ftsf?hrung: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> Dr. Walter Olthoff >> Vorsitzender des Aufsichtsrats: >> Prof. Dr. h.c. Hans A. Aukes >> >> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >> --------------------------------------------------------------------------- >> > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... 
Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Sat Nov 9 08:02:33 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sat, 09 Nov 2013 08:02:33 +0100 Subject: [Fiware-miwi] Entity System Usage from UI Designer (Re: DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps)) In-Reply-To: References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> <5276153C.2020704@dfki.de> Message-ID: <527DDE09.1000501@dfki.de> Hi, I think we would need very strong reasons not to use the DOM as the basis for the Interface Designer as this project is based on the declarative 3D approach. It would also be good to be able to handle XML3D data directly without the WebComponent layer on top of them. We are building Generic Enablers and not all application will need or want to use the WebComponents as we are defining them. From my point of view, the mapping of WebComponents or XML3D elements to the ECA model (likely via KIARA) should happen independently of the editor and be defined either generically (if using XML3D) or by the WebComponents themselves. This mapping can be nicely hidden within them and only they know what data needs to be sent to keep the component in sync. BTW, I will be arriving in Oulu at 17:05h on Sunday (I assume that Torsten has the same flight). Maybe, we could have dinner together tomorrow? (If anything has been planned already, please send me a quick reminder as I am buried in 300 email from the last few days at ICT 2013 in Vilnius and I am only slowly catching up :-( ). Best and see you all in Oulu, Philipp Am 08.11.2013 07:47, schrieb Toni Alatalo: > A specific question for the UI Designer / scene builder team at Admino: > > Your current implementation is against reX entity system as JS objects > (the WebRocket scene implementation) ? diagram b) in the previous post. > > Would the editor be well implementable directly against the DOM? ? the > option in diagram a). > > I think a key point is attribute metadata, for example type and valid > value range, and perhaps handling attribute types such as transform and > color with special editing widgets (transform gizmo, color palette > selector) etc. Do you think all that could work somehow nicely when > using the DOM directly, perhaps by having the type information in > WebComponent definitions? (you are the experts on this too thanks to the > 2D UI work) > > Or does the editor in practice require the kind of entity system with > attribute objects which you are using currently? > > We can discuss this at the office or in the Monday meet but I figured to > post the question here anyhow as the discussion has been here otherwise > and there?s no comments yet. I think UIDesigner as a user of the entity > system is good to analyse now as it?s already fairly complete ? other > users such as synchronisation are not as far yet. > > Cheers, > ~Toni > > On 06 Nov 2013, at 13:06, Toni Alatalo > wrote: > >> Hi returning to this as it?s still unclear for me and we need to >> implement this for integrating the client side of the synchronisation >> (that Lasse is working on) & the rest of the client: >> >> On 03 Nov 2013, at 11:19, Philipp Slusallek > > wrote: >>> No, we would still have to maintain the scene representation in the >>> DOM. 
However, we can use the specialized access functions (not the >>> string-valued attributes) to access the functionality of a (XML3D) >>> DOM node much more efficiently than having to parse strings. Torstens >>> example with the rotation is a good example of such a specialized >>> interface of an (XML3D) DOM node. >> >> Yes I think everyone agrees that the representation is good to have >> there (also) but the question is indeed about efficient ways of >> modification. >> >> Question: are those specialised access functions the same thing to >> what Kristian refers to in this quote from September 2nd (Re: >> [Fiware-miwi] massive DOM manipulation benchmark)? I think not as you >> & Torsten talk about access to DOM elements whereas Kristian talks >> about something not in the DOM, or? >> >> "The DOM is meant as in interface to the user. That's how we designed >> XML3D. All medium-range modification (a few hundereds per frame) are >> okay. One must not forget that users will use jQuery etc, which can -- >> used the wrong way -- slow down the whole application (not only for 3D >> content). For all other operations, we have concept like Xflow, where >> the calculation is hidden behind the DOM interface. Also the rendering >> is also hidden behind the DOM structure. We even have our own memory >> management to get a better JS performance.? >> >> I?m referring to the parts about ?hidden behind the DOM?, used by >> XFlow and rendering. >> >> Do those use and modify some non-DOM data structures which xml3d.js >> has for the scene internally? >> >> We are fine with reading existing docs or even just source code for >> these things, you don?t have to explain everything in the emails here, >> but yes/no answers & pointers to more information (e.g. to relevant >> code files on github) would be very helpful. >> >> So now in particular I?m figuring out whether that kind of ?hidden? / >> non-DOM interface would be the suitable one for network >> synchronisation to use. >> >> I know that changes coming over the net are not usually *that* much, >> typically within the max hundreds (and usually much less) for which >> Kristian states that going via DOM manipulation is fine, but there can >> be quite large bursts (e.g. at login to create the whole scene, or >> when some logic changes a big part of it). And there are many >> consumers for the cpu time available in the browser main thread so is >> perhaps good to avoid wasting even a little of it in continuous >> movement update handling etc. And in any case it?s good for us to know >> and understand how the DOM interfacing works ? even if it turns out >> the efficient alternative is not necessary for networking. >> >> In the current WebTundras we have the same structure as in the native >> Tundra, i.e. there are normal software objects (non-DOM) for the >> entity-system level entities & components, and the 3d visual ones of >> those are proxies for the concrete implementations of scene nodes and >> meshes etc. in Three.js / Ogre respectively. And the experimental >> DOM-integration that Jonne made in WebRocket then mirrors that JS EC >> structure to the DOM periodically. 
>> >> These two old client architecture sketch diagrams illustrate the options: >> >> a) net sync updates DOM, rendering gets updates via DOM: >> https://rawgithub.com/realXtend/doc/master/dom/rexdom.svg >> >> b) net sync updates JS objects, optimised batched sync updates DOM: >> https://rawgithub.com/realXtend/doc/master/dom/rexdom-dom_as_ui.svg >> >> Until now I thought that we settled on b) back then in early September >> as I understood that it?s what you also do & recommend (and that?s >> what WebTundras have been doing so far). >> >>> Philipp >> >> ~Toni >> >>> Am 31.10.2013 10:42, schrieb Toni Alatalo: >>>> On 31 Oct 2013, at 11:23, Torsten Spieldenner >>>> >>> > >>>> wrote: >>>>> On top the capabilities of the DOM API and additional powers of >>>>> sophisticated JavaScript-libraries, XML3D introduces an API extension >>>>> by its own to provide a convenient way to access the DOM elements as >>>>> XML3D-Elements, for example retrieving translation as XML3DVec3 or >>>>> Rotation as XML3DRotation (for example, to retrieve the rotation part >>>>> of an XML3D transformation, you can do this by using jQuery to query >>>>> the transformation node from the DOM, and access the rotation there >>>>> then: var r = $("#my_transformation").rotation). >>>> >>>> What confuses me here is: >>>> >>>> earlier it was concluded that ?the DOM is the UI?, I understood meaning >>>> how it works for people to >>>> >>>> a) author apps ? e.g. declare that oulu3d scene and reX avatar & chat >>>> apps are used in my html, along this nice christmas themed thing I just >>>> created (like txml is used in reX now) >>>> >>>> b) see and manipulate the state in the browser view-source & developer / >>>> debugger DOM views (like the Scene Structure editor in Tundra) >>>> >>>> c) (something else that escaped me now) >>>> >>>> Anyhow the point being that intensive manipulations such as creating and >>>> manipulating tens of thousands of entities are not done via it. This was >>>> the response to our initial ?massive dom manipulation? perf test. >>>> Manipulating transformation is a typical example where that happens ? I >>>> know that declarative ways can often be a good way to deal with e.g. >>>> moving objects, like the PhysicsMotor in Tundra and I think what XFlow >>>> (targets to) cover(s) too, but not always nor for everything so I think >>>> the point is still valid. >>>> >>>> So do you use a different API for heavy tasks and the DOM for other >>>> things or how does it go? >>>> >>>> ~Toni >>>> >>>>>> If we think that XML3D (or the DOM and XML3D acts on those >>>>>> manipulations) >>>>>> is already this perfect API I'm not sure what we are even trying to >>>>>> accomplish here? If we are not building a nice to use 3D SDK whats the >>>>>> target here? >>>>> I totally agree that we still need to build this easily programmable >>>>> 3D SDK. But XML3D makes it very simple to maintain the 3D scene in the >>>>> DOM according to the scene state of the application. >>>>> You may want to have a look at our example web client for our FiVES >>>>> server (https://github.com/rryk/FiVES). Although I admit that the code >>>>> needs some refactoring, the example of how entities are created shows >>>>> this nicely : As soon as you create a new Entity object, the DOM >>>>> representation of its scenegraph and its transformations are created >>>>> automatically and maintained as View of the entity model. As >>>>> developer, you only need to operate on the client application's API. 
>>>>> This could be an example, of how an SDK could operate on the XML3D >>>>> representation of the scene. >>>>> >>>>> >>>>> ~ Torsten >>>>> >>>>>> On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < >>>>>> Philipp.Slusallek at dfki.de > wrote: >>>>>> >>>>>>> Hi Jonne, all, >>>>>>> >>>>>>> I am not sure that applying the Tudra API in the Web context is >>>>>>> really the >>>>>>> right approach. One of the key differences is that we already have a >>>>>>> central "scene" data structure and it already handles rendering >>>>>>> and input >>>>>>> (DOM events), and other aspects. Also an API oriented approach >>>>>>> may not be >>>>>>> the best option in this declarative context either (even though I >>>>>>> understands that it feels more natural when coming from C++, I >>>>>>> had the same >>>>>>> issues). >>>>>>> >>>>>>> So let me be a bit more specific: >>>>>>> >>>>>>> -- Network: So, yes we need a network module. It's not something that >>>>>>> "lives" in the DOM but rather watches it and sends updates to the >>>>>>> server to >>>>>>> achieve sync. >>>>>>> >>>>>>> -- Renderer: Why do we need an object here. Its part of the DOM >>>>>>> model. The >>>>>>> only aspect is that we may want to set renderer-specific >>>>>>> parameters. We >>>>>>> currently do so through the DOM element, which seems like >>>>>>> a good >>>>>>> approach. The issues to be discussed here is what would be the >>>>>>> advantages >>>>>>> of a three.js based renderer and implement it of really needed. >>>>>>> >>>>>>> -- Scene: This can be done in the DOM nicely and with >>>>>>> WebComponents its >>>>>>> even more elegant. The scene objects are simple part of the same >>>>>>> DOM but >>>>>>> only some of them get rendered. I am not even sure that we need >>>>>>> here in >>>>>>> addition to the DOM and suitable mappings for the components. >>>>>>> >>>>>>> -- Asset: As you say this is already built-into the XML3D DOM. I >>>>>>> see it a >>>>>>> bit like the network system in that it watches missing resources >>>>>>> in the DOM >>>>>>> (plus attributes on priotity and such?) and implements a sort of >>>>>>> scheduler >>>>>>> excutes requests in some priority order. A version that only >>>>>>> loads missing >>>>>>> resources if is already available, one that goes even further and >>>>>>> deletes >>>>>>> unneeded resources could probably be ported from your resource >>>>>>> manager. >>>>>>> >>>>>>> -- UI: That is why we are building on top of HTML, which is a >>>>>>> pretty good >>>>>>> UI layer in many requests. We have the 2D-UI GE to look into missing >>>>>>> functionality >>>>>>> >>>>>>> -- Input: This also is already built in as the DOM as events >>>>>>> traverse the >>>>>>> DOM. It is widely used in all WEB based UIs and has proven quite >>>>>>> useful >>>>>>> there. Here we can nicely combine it with the 3D scene model >>>>>>> where events >>>>>>> are not only delivered to the 3D graphics elements but can be >>>>>>> handled by >>>>>>> the elements or components even before that. >>>>>>> >>>>>>> But maybe I am missunderstanding you here? >>>>>>> >>>>>>> >>>>>>> Best, >>>>>>> >>>>>>> Philipp >>>>>>> >>>>>>> >>>>>>> Am 30.10.2013 14:31, schrieb Jonne Nauha: >>>>>>> >>>>>>>> var client = >>>>>>>> { >>>>>>>> network : Object, // Network sync, connect, disconnect etc. >>>>>>>> functionality. >>>>>>>> // Implemented by scene sync GE (Ludocraft). >>>>>>>> >>>>>>>> renderer : Object, // API for 3D rendering engine access, >>>>>>>> creating >>>>>>>> scene nodes, updating their transforms, raycasting etc. 
>>>>>>>> // Implemented by 3D UI (Playsign). >>>>>>>> >>>>>>>> scene : Object, // API for accessing the >>>>>>>> Entity-Component-Attribute model. >>>>>>>> // Implemented by ??? >>>>>>>> >>>>>>>> asset : Object, // Not strictly necessary for xml3d as >>>>>>>> it does >>>>>>>> asset requests for us, but for three.js this is pretty much needed. >>>>>>>> // Implemented by ??? >>>>>>>> >>>>>>>> ui : Object, // API to add/remove widgets correctly >>>>>>>> on top >>>>>>>> of the 3D rendering canvas element, window resize events etc. >>>>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>>>> >>>>>>>> input : Object // API to hook to input events occurring >>>>>>>> on top >>>>>>>> of the 3D scene. >>>>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>>>> }; >>>>>>>> >>>>>>>> >>>>>>>> Best regards, >>>>>>>> Jonne Nauha >>>>>>>> Meshmoon developer at Adminotech Ltd. >>>>>>>> www.meshmoon.com >>>>>>>> >>>>>>> > >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo >>>>>>> >>>>>>>> > wrote: >>>>>>>> >>>>>>>> Hi again, >>>>>>>> new angle here: calling devs *outside* the 3D UI GE: POIs, >>>>>>>> real-virtual interaction, interface designer, virtual >>>>>>>> characters, 3d >>>>>>>> capture, synchronization etc. >>>>>>>> I think we need to proceed rapidly with integration now and >>>>>>>> propose >>>>>>>> that one next step towards that is to analyze the interfaces >>>>>>>> between >>>>>>>> 3D UI and other GEs. This is because it seems to be a >>>>>>>> central part >>>>>>>> with which many others interface: that is evident in the old >>>>>>>> 'arch.png' where we analyzed GE/Epic interdependencies: is >>>>>>>> embedded >>>>>>>> in section 2 in the Winterthur arch discussion notes which >>>>>>>> hopefully >>>>>>>> works for everyone to see, >>>>>>>> https://docs.google.com/**document/d/**1Sr4rg44yGxK8jj6yBsayCwfitZTq5 >>>>>>>> **Cdyyb_xC25vhhE/edit >>>>>>>> I propose a process where we go through the usage patterns >>>>>>>> case by >>>>>>>> case. For example so that me & Erno visit the other devs to >>>>>>>> discuss >>>>>>>> it. I think a good goal for those sessions is to define and >>>>>>>> plan the >>>>>>>> implementation of first tests / minimal use cases where the >>>>>>>> other >>>>>>>> GEs are used together with 3D UI to show something. I'd like >>>>>>>> this >>>>>>>> first pass to happen quickly so that within 2 weeks from the >>>>>>>> planning the first case is implemented. So if we get to have the >>>>>>>> sessions within 2 weeks from now, in a month we'd have demos >>>>>>>> with >>>>>>>> all parts. >>>>>>>> Let's organize this so that those who think this applies to >>>>>>>> their >>>>>>>> work contact me with private email (to not spam the list), >>>>>>>> we meet >>>>>>>> and collect the notes to the wiki and inform this list about >>>>>>>> that. >>>>>>>> One question of particular interest to me here is: can the >>>>>>>> users of >>>>>>>> 3D UI do what they need well on the entity system level (for >>>>>>>> example >>>>>>>> just add and configure mesh components), or do they need deeper >>>>>>>> access to the 3d scene and rendering (spatial queries, somehow >>>>>>>> affect the rendering pipeline etc). With Tundra we have the >>>>>>>> Scene API and the (Ogre)World API(s) to support the latter, >>>>>>>> and also >>>>>>>> access to the renderer directly. OTOH the entity system level is >>>>>>>> renderer independent. >>>>>>>> Synchronization is a special case which requires good two-way >>>>>>>> integration with 3D UI. 
Luckily it's something that we and >>>>>>>> especially Lasse himself knows already from how it works in >>>>>>>> Tundra >>>>>>>> (and in WebTundras). Definitely to be discussed and planned >>>>>>>> now too >>>>>>>> of course. >>>>>>>> So please if you agree that this is a good process do raise >>>>>>>> hands >>>>>>>> and let's start working on it! We can discuss this in the >>>>>>>> weekly too >>>>>>>> if needed. >>>>>>>> Cheers, >>>>>>>> ~Toni >>>>>>>> >>>>>>>> ______________________________**_________________ >>>>>>>> Fiware-miwi mailing list >>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> > >>>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> ______________________________**_________________ >>>>>>>> Fiware-miwi mailing list >>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>>> >>>>>>>> >>>>>>> -- >>>>>>> >>>>>>> ------------------------------**------------------------------** >>>>>>> ------------- >>>>>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>>>> >>>>>>> Gesch?ftsf?hrung: >>>>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>>>> Dr. Walter Olthoff >>>>>>> Vorsitzender des Aufsichtsrats: >>>>>>> Prof. Dr. h.c. Hans A. Aukes >>>>>>> >>>>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>>>> ------------------------------**------------------------------** >>>>>>> --------------- >>>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Fiware-miwi mailing list >>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>> >>>>> _______________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> >>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>> >>> >>> -- >>> >>> ------------------------------------------------------------------------- >>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>> >>> Gesch?ftsf?hrung: >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>> Dr. Walter Olthoff >>> Vorsitzender des Aufsichtsrats: >>> Prof. Dr. h.c. Hans A. Aukes >>> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>> --------------------------------------------------------------------------- >>> >> > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. 
Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL:

From toni at playsign.net Sat Nov 9 10:41:58 2013
From: toni at playsign.net (Toni Alatalo)
Date: Sat, 9 Nov 2013 11:41:58 +0200
Subject: [Fiware-miwi] DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps)
In-Reply-To: <527DDB84.1050202@dfki.de>
References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> <5276153C.2020704@dfki.de> <527DDB84.1050202@dfki.de>
Message-ID:

Ok, thanks for the info. I am now focusing on how the scene is accessed, for example how the networking code should apply object movements there.

I was reading some of the xml3d.js scene code again and understand more of it now: there is a Scene object with a RenderGroup as the root; that is a RenderNode, so scene.traverse seems to eventually lead to plain traversal of the DOM children array:
https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/scene.js#L125
leads to ->
https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/rendernode.js#L45

So there seems to be no internal representation for the full scene (apart from how it proxies access to the DOM), only for the object data such as meshes (which again proxy how the data is in WebGL).

About setting object positions: upon root node creation the scene itself does this to initialise the position:
root.setLocalMatrix(XML3D.math.mat4.create());
in https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/scene.js#L100

That seems to access the matrix directly in the "page", which apparently is the memory management system you have referred to, and to set the transform dirty flag: https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/rendergroup.js#L35

So that would be one way for the network code to move objects within xml3d.js. I suppose the same setLocalMatrix is also called when the object is manipulated via the DOM, but as these direct JS calls are what the xml3d.js scene code itself uses, perhaps they would be the way for e.g. network code too? Or should it just go via the DOM?

I'm afraid that the xml3d.js JS API is not documented at all; at least I have been unable to find anything about it via https://github.com/xml3d/xml3d.js/wiki

~Toni

On 09 Nov 2013, at 08:51, Philipp Slusallek wrote:

> Hi,
>
> Since many of us are traveling, let me take a stab at your questions. Kristian, Torsten, please correct any technical errors.
>
> Xflow (which Kristian refers to) is used for realtime animation of characters and similar operations. We have shown real-time animation of more than 25 characters with skeletal animation and skinning running completely in JS (without HW acceleration yet). This should show that XML3D can well be used for real-time changes, even for things like geometry.
>
> Xflow works on the internal data representation of XML3D, which is tailored for fast rendering through WebGL (typed arrays and such). These internal data structures are similar to what three.js maintains; there is actually not much difference at this layer. When a frame needs to be rendered, both renderers simply go through this "display list" as efficiently as possible.
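To make the two routes mentioned above concrete, here is a minimal sketch of applying a movement update that arrives from the network, first via the DOM and then via the renderer-side setLocalMatrix path. Only setLocalMatrix and XML3D.math.mat4 come from the xml3d.js code linked above; the lookup helper, the element id convention and the use of a <transform> element are assumptions for the example.

// Path a) go via the DOM: update the <transform> element that the
// entity's group is assumed to reference; xml3d.js notices the
// attribute change and marks its render node dirty itself.
function moveViaDom(entityId, pos) {
    var t = document.getElementById("transform-" + entityId); // assumed id convention
    if (t) {
        t.setAttribute("translation", pos.x + " " + pos.y + " " + pos.z);
    }
}

// Path b) go directly to the renderer scene, the way scene.js itself
// initialises the root: build a matrix and hand it to setLocalMatrix,
// which writes into the "page" storage and sets the dirty flag.
// getRenderNodeFor() is hypothetical; some lookup from entity to
// RenderNode would be needed.
function moveViaRenderNode(entityId, pos) {
    var node = getRenderNodeFor(entityId);
    if (!node) { return; }
    var m = XML3D.math.mat4.create();             // identity, as in scene.js#L100
    m[12] = pos.x; m[13] = pos.y; m[14] = pos.z;  // column-major translation
    node.setLocalMatrix(m);
}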
> What Kristian refers to regarding memory management are the issues that we encountered with garbage collection in JS implementations. As a result we allocate large arrays once and manage the data within those arrays ourselves. This has turned out to avoid the quite frequent rendering stalls whenever the JS garbage collector kicked in.
>
> Each of the XML3D elements (e.g. mesh, data) offers JS APIs (which should be documented in the Wiki, I hope) to access these internal data structures directly, so other JS code can have equally good access to them. You can (but should not) also go through the text-based DOM attributes, but this will be slow for large data (e.g. vertices). I believe it is still fine to use these interfaces for small things like changing the time for an animation or such.
>
> One thing where you have to go through the DOM is creating the DOM objects themselves. There is little we can do about that.
>
> Of course, if your modifications are about things that can be well described by Xflow, you ideally should use Xflow and eventually benefit from the HW acceleration that we are implementing, where potentially all the computations would happen on the GPU and not be touched by JS any more. I am not sure what the status of this task is, though.
>
> Hope this helps. Kristian, Torsten: Feel free to add more detail and corrections.
>
> Best,
>
> Philipp
>
> Am 06.11.2013 12:06, schrieb Toni Alatalo:
>> Hi, returning to this as it's still unclear for me and we need to implement this for integrating the client side of the synchronisation (that Lasse is working on) with the rest of the client:
>>
>> On 03 Nov 2013, at 11:19, Philipp Slusallek wrote:
>>> No, we would still have to maintain the scene representation in the DOM. However, we can use the specialized access functions (not the string-valued attributes) to access the functionality of a (XML3D) DOM node much more efficiently than having to parse strings. Torsten's example with the rotation is a good example of such a specialized interface of an (XML3D) DOM node.
>>
>> Yes, I think everyone agrees that the representation is good to have there (also), but the question is indeed about efficient ways of modification.
>>
>> Question: are those specialised access functions the same thing that Kristian refers to in this quote from September 2nd (Re: [Fiware-miwi] massive DOM manipulation benchmark)? I think not, as you & Torsten talk about access to DOM elements whereas Kristian talks about something not in the DOM, or?
>>
>> "The DOM is meant as an interface to the user. That's how we designed XML3D. All medium-range modifications (a few hundreds per frame) are okay. One must not forget that users will use jQuery etc., which can -- used the wrong way -- slow down the whole application (not only for 3D content). For all other operations, we have concepts like Xflow, where the calculation is hidden behind the DOM interface. The rendering is also hidden behind the DOM structure. We even have our own memory management to get better JS performance."
>>
>> I'm referring to the parts about "hidden behind the DOM", used by Xflow and rendering.
>>
>> Do those use and modify some non-DOM data structures which xml3d.js has for the scene internally?
>>
>> We are fine with reading existing docs or even just source code for these things, you don't have to explain everything in the emails here, but yes/no answers & pointers to more information (e.g. to relevant code files on github) would be very helpful.
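As an illustration of the kind of memory management described above (the general pattern only, not the actual xml3d.js implementation): allocate one large typed array up front and hand out views into it, instead of creating new objects per frame.

// Minimal sketch of a preallocated matrix pool: one big Float32Array,
// with 4x4 matrices handed out as subarray views. Nothing is allocated
// per frame, so the JS garbage collector has nothing to collect.
function MatrixPool(capacity) {
    this.storage = new Float32Array(capacity * 16);
    this.next = 0;
}
MatrixPool.prototype.allocate = function () {
    if (this.next >= this.storage.length / 16) {
        throw new Error("MatrixPool exhausted");
    }
    var offset = this.next++ * 16;
    return this.storage.subarray(offset, offset + 16); // a view, not a copy
};

// Usage: each node gets its slot once, and per-frame updates write
// into the existing view in place.
var pool = new MatrixPool(1024);
var localMatrix = pool.allocate();
localMatrix[12] = 1.5; // update translation x without any new allocation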
>> So now in particular I'm figuring out whether that kind of "hidden" / non-DOM interface would be the suitable one for network synchronisation to use.
>>
>> I know that the changes coming over the net are usually not *that* many, typically within the max hundreds (and usually much less) for which Kristian states that going via DOM manipulation is fine, but there can be quite large bursts (e.g. at login to create the whole scene, or when some logic changes a big part of it). And there are many consumers for the CPU time available in the browser main thread, so it is perhaps good to avoid wasting even a little of it in continuous movement update handling etc. In any case it's good for us to know and understand how the DOM interfacing works, even if it turns out the efficient alternative is not necessary for networking.
>>
>> In the current WebTundras we have the same structure as in the native Tundra, i.e. there are normal software objects (non-DOM) for the entity-system level entities & components, and the 3d visual ones of those are proxies for the concrete implementations of scene nodes and meshes etc. in Three.js / Ogre respectively. And the experimental DOM integration that Jonne made in WebRocket then mirrors that JS EC structure to the DOM periodically.
>>
>> These two old client architecture sketch diagrams illustrate the options:
>>
>> a) net sync updates DOM, rendering gets updates via DOM:
>> https://rawgithub.com/realXtend/doc/master/dom/rexdom.svg
>>
>> b) net sync updates JS objects, optimised batched sync updates DOM:
>> https://rawgithub.com/realXtend/doc/master/dom/rexdom-dom_as_ui.svg
>>
>> Until now I thought that we settled on b) back then in early September, as I understood that it's what you also do & recommend (and that's what the WebTundras have been doing so far).
>>
>>> Philipp
>>
>> ~Toni
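Option b) above can be illustrated with a small sketch (the names and id convention are illustrative, not taken from any of the existing codebases): network updates are applied to plain JS entity objects immediately, and a once-per-frame flush mirrors only the entities that actually changed into the DOM.

// Entities live as plain JS objects; the DOM is updated in batches.
var entities = {};   // id -> { transform: {x, y, z}, dirty: bool }
var dirtyIds = [];

function onNetworkTransform(id, x, y, z) {
    var e = entities[id] || (entities[id] = { transform: {}, dirty: false });
    e.transform.x = x; e.transform.y = y; e.transform.z = z;
    if (!e.dirty) { e.dirty = true; dirtyIds.push(id); }
}

// Called once per frame (e.g. from requestAnimationFrame): only the
// changed entities touch the DOM, so a burst of net messages for the
// same object still costs a single DOM write.
function flushToDom() {
    for (var i = 0; i < dirtyIds.length; i++) {
        var id = dirtyIds[i], e = entities[id];
        var el = document.getElementById("transform-" + id); // assumed id convention
        if (el) {
            el.setAttribute("translation",
                e.transform.x + " " + e.transform.y + " " + e.transform.z);
        }
        e.dirty = false;
    }
    dirtyIds.length = 0;
}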
>>> Am 31.10.2013 10:42, schrieb Toni Alatalo:
>>>> On 31 Oct 2013, at 11:23, Torsten Spieldenner wrote:
>>>>> On top of the capabilities of the DOM API and the additional powers of sophisticated JavaScript libraries, XML3D introduces an API extension of its own to provide a convenient way to access the DOM elements as XML3D elements, for example retrieving a translation as XML3DVec3 or a rotation as XML3DRotation (for example, to retrieve the rotation part of an XML3D transformation, you can use jQuery to query the transformation node from the DOM and access the rotation there: var r = $("#my_transformation").rotation).
>>>>
>>>> What confuses me here is:
>>>>
>>>> earlier it was concluded that "the DOM is the UI", which I understood to mean how it works for people to
>>>>
>>>> a) author apps, e.g. declare that the oulu3d scene and the reX avatar & chat apps are used in my html, along with this nice christmas themed thing I just created (like txml is used in reX now)
>>>>
>>>> b) see and manipulate the state in the browser view-source & developer / debugger DOM views (like the Scene Structure editor in Tundra)
>>>>
>>>> c) (something else that escaped me now)
>>>>
>>>> Anyhow, the point being that intensive manipulations, such as creating and manipulating tens of thousands of entities, are not done via it. This was the response to our initial "massive dom manipulation" perf test. Manipulating transformations is a typical example where that happens; I know that declarative ways can often be a good way to deal with e.g. moving objects, like the PhysicsMotor in Tundra and I think what Xflow (targets to) cover(s) too, but not always nor for everything, so I think the point is still valid.
>>>>
>>>> So do you use a different API for heavy tasks and the DOM for other things, or how does it go?
>>>>
>>>> ~Toni
>>>>
>>>>>> If we think that XML3D (or the DOM, and XML3D acts on those manipulations) is already this perfect API, I'm not sure what we are even trying to accomplish here? If we are not building a nice to use 3D SDK, what's the target here?
>>>>> I totally agree that we still need to build this easily programmable 3D SDK. But XML3D makes it very simple to maintain the 3D scene in the DOM according to the scene state of the application.
>>>>> You may want to have a look at our example web client for our FiVES server (https://github.com/rryk/FiVES). Although I admit that the code needs some refactoring, the example of how entities are created shows this nicely: as soon as you create a new Entity object, the DOM representation of its scenegraph and its transformations are created automatically and maintained as a View of the entity model. As a developer, you only need to operate on the client application's API.
>>>>> This could be an example of how an SDK could operate on the XML3D representation of the scene.
>>>>>
>>>>> ~ Torsten
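The pattern Torsten describes, where creating an entity object automatically creates and maintains its DOM representation, could look roughly like this. It is a sketch of the idea only, not the actual FiVES client code; the element layout, attribute names and id convention are assumptions.

// An Entity wrapper that owns its DOM "view": constructing it appends
// a <group> and a <transform> under the <xml3d> element, and setting
// the position keeps that view in sync.
function Entity(id, xml3dElement) {
    this.id = id;
    this.transformEl = document.createElement("transform");
    this.transformEl.setAttribute("id", "transform-" + id);
    this.groupEl = document.createElement("group");
    this.groupEl.setAttribute("transform", "#transform-" + id);
    var defs = xml3dElement.querySelector("defs") || xml3dElement;
    defs.appendChild(this.transformEl);
    xml3dElement.appendChild(this.groupEl);
}

Entity.prototype.setPosition = function (x, y, z) {
    // the application operates on the entity; the DOM view follows
    this.transformEl.setAttribute("translation", x + " " + y + " " + z);
};

// Usage: application (or an SDK layer) works on Entity objects only.
var e = new Entity("door1", document.querySelector("xml3d"));
e.setPosition(0, 1.5, -4);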
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From toni at playsign.net Sat Nov 9 10:48:57 2013
From: toni at playsign.net (Toni Alatalo)
Date: Sat, 9 Nov 2013 11:48:57 +0200
Subject: [Fiware-miwi] DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps)
In-Reply-To:
References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> <5276153C.2020704@dfki.de> <527DDB84.1050202@dfki.de>
Message-ID:

Bleh, sorry, I apparently misunderstood a key part:

On 09 Nov 2013, at 11:41, Toni Alatalo wrote:
> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/scene.js#L125
> leads to ->
> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/rendernode.js#L45
> So there's no internal representation for the full scene (apart from how it proxies access to the DOM)

RenderNode does say:
this.children = [];
in https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/rendernode.js#L23

So actually there is a JS list internally for the full scene, where all the scene nodes are JS RenderNode objects etc., so the DOM is not used as the internal structure for the scene, and the whole system resembles what we have in the WebTundras as well. Or?

In Chiru-WebClient the corresponding collection of JS objects for the scene is https://github.com/Chiru/Chiru-WebClient/blob/master/src/ecmodel/ECManager.js#L24 (that's not the three.js scene; the SceneManager there owns both the ECManager instance and the three.js scene).
~Toni
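The structure being discussed here, a renderer-side node tree that mirrors the DOM as in rendernode.js above, can be sketched roughly like this. This is a simplified illustration only, not the actual xml3d.js classes.

// A minimal render-node tree kept separately from the DOM: each node
// remembers which DOM element it represents and keeps its own children
// array, so traversal for rendering never has to walk the DOM itself.
function RenderNodeSketch(domElement, parent) {
    this.element = domElement;   // the corresponding DOM element
    this.parent = parent || null;
    this.children = [];          // as in rendernode.js#L23
    if (parent) { parent.children.push(this); }
}

RenderNodeSketch.prototype.traverse = function (callback) {
    callback(this);
    for (var i = 0; i < this.children.length; i++) {
        this.children[i].traverse(callback);
    }
};

// Usage: build the tree once from the DOM, then render from the tree.
// var root = new RenderNodeSketch(document.querySelector("xml3d"));
// root.traverse(function (node) { /* issue draw calls, check dirty flags */ });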
From toni at playsign.net Sat Nov 9 13:06:42 2013
From: toni at playsign.net (Toni Alatalo)
Date: Sat, 9 Nov 2013 14:06:42 +0200
Subject: [Fiware-miwi] Entity System Usage from UI Designer (Re: DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps))
In-Reply-To: <527DDE09.1000501@dfki.de>
References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> <5276153C.2020704@dfki.de> <527DDE09.1000501@dfki.de>
Message-ID: <0B124FB4-67AF-40D8-8F64-04C159EA5F22@playsign.net>

On 09 Nov 2013, at 09:02, Philipp Slusallek wrote:
> I think we would need very strong reasons not to use the DOM as the basis for the Interface Designer as this project is based on the declarative 3D approach.

I think that is not the question here, actually. It depends on what you mean by "as the basis" there. I mean:

It is established that the scenes are declarative; that has been the practice in both background projects (realXtend & XML3D) for years. The Interface Designer / Scene editor works on the scene declarations. That's how the editor in native Tundra works as well (Scene Structure & EC editors).

It is also established that the DOM is used. It is the basis in any case in that sense; at least from a user POV the editor is a way to work with the DOM: it shows the scene data from your XML, changes made with the editor appear in the browser debugger DOM view (this works even in the current implementation on WebRocket if you enable Jonne's experimental DOM integration plugin there), and you can save the document as XML.

The question is: what is a good API, in memory, for the editor to work with the declarations? Would the DOM suffice for it, or do we need something else to support it to get the required attribute metadata?
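As a concrete framing of that question: the DOM gives the editor the values, either as strings or, with xml3d.js loaded, through typed accessors like the rotation example quoted earlier in the thread, but neither path tells it the value's type, range or default. A rough sketch, where the typed-accessor part assumes the XML3D element interfaces Torsten mentioned:

// What the DOM alone offers an editor today:
var t = document.getElementById("my_transformation");

// 1) the generic string attribute; the editor would have to parse it
//    and guess that it is a 3-component vector
var translationString = t.getAttribute("translation");   // e.g. "0 1.5 -4"

// 2) the typed accessor added by xml3d.js (as in Torsten's .rotation
//    example); this gives a structured value, but still no metadata
//    about valid ranges, defaults or which editing widget to show
var rotation = t.rotation;   // an XML3DRotation, if the accessor is available

// What is not available from either path is schema information such as
// "this attribute is a float in [0, 1]" or "this is an asset reference",
// which is what the attribute metadata question above is about.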
> It would also be good to be able to handle XML3D data directly without the WebComponent layer on top of them. We are building Generic Enablers and not all applications will need or want to use the WebComponents as we are defining them.

In my understanding the idea is that XML3D data == reX EC data, so that is implied. I was just thinking that *if* the DOM itself does not suffice, the WC definitions might be a possible supporting system from which to get the attribute type definitions, for example. But thanks for raising the point: it would be interesting to know also whether the xml3d.js internal structures would have similar supporting features. With those JS datatypes we can probably know nicely whether some attribute is a transform, for example. Perhaps somehow with the DOM access too? Another example is whether some attribute is an integer or a decimal (float) number, and ideally what the valid value range for it is.

> From my point of view, the mapping of WebComponents or XML3D elements to the ECA model (likely via KIARA) should happen independently of the editor and be defined either generically (if using XML3D) or by the WebComponents themselves. This mapping can be nicely hidden within them and only they know what data needs to be sent to keep the component in sync.

I'm not actually sure whether we need the mapping at all if we can just agree on what attribute names to use :) For example if reX decides it's fine to switch the name of the "meshRef" attribute of the Mesh component to "src" like it is in XML3D. The doc of the current API in WebRocket: http://doc.meshmoon.com/doxygen/webrocket/classes/tundra.EC_Mesh.html#property_meshRef%20(attribute) . (That does bring the question of how to have material & skeleton references etc., but that's another discussion; I'll actually need to check how those are done in XML3D.) The mapping would then be needed only for (legacy) TXML loading, which is fine, but not related to how the internals of especially the web client work. In my understanding the network synchronisation would not need it, as it would simply be implemented so that a Mesh component is added as a .. mesh component, i.e. (also a) DOM element, either via a Scene API or directly to the DOM. This understanding can very well be wrong, though :) .. to be tested more still.

> BTW, I will be arriving in Oulu at 17:05h on Sunday (I assume that Torsten has the same flight). Maybe we could have dinner together tomorrow? (If anything has been planned already, please send me a quick reminder as I am buried in 300 emails from the last few days at ICT 2013 in Vilnius and I am only slowly catching up :-( ).

We discussed this in the weekly and we agreed to meet Sunday evening already; Christof and IIRC Torsten at least were interested. We talked of just going for a beer, but dinner would probably be good. We were thinking of 7pm or so? We put phone numbers in the doc, but mine is +358 40 7198759 and I can come to your hotel(s?) etc. and we can go somewhere; the town is small and easy. I don't know who from Oulu thought of coming, but everyone is certainly welcome (just don't confuse this with the "official" dinner on Monday).

> Best and see you all in Oulu,

Welcome :) .. Very beautiful city and area much of the year, most ugly time of the year now, for a possibly extreme experience.. (can be dark, no snow yet).

> Philipp

~Toni

> Am 08.11.2013 07:47, schrieb Toni Alatalo:
>> A specific question for the UI Designer / scene builder team at Admino:
>>
>> Your current implementation is against the reX entity system as JS objects (the WebRocket scene implementation), diagram b) in the previous post.
>>
>> Would the editor be well implementable directly against the DOM, the option in diagram a)?
>>
>> I think a key point is attribute metadata, for example type and valid value range, and perhaps handling attribute types such as transform and color with special editing widgets (transform gizmo, color palette selector) etc. Do you think all that could work somehow nicely when using the DOM directly, perhaps by having the type information in WebComponent definitions? (You are the experts on this too thanks to the 2D UI work.)
>>
>> Or does the editor in practice require the kind of entity system with attribute objects which you are using currently?
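One way that question could go either way is to keep the metadata itself declarative and separate from where the values live: a small schema per component type that the editor reads to pick widgets and validate input, whether the values are then stored as DOM attributes or as JS attribute objects. A sketch only; the Mesh/src naming follows the discussion above, the other components, fields and helper are invented for illustration.

// Editor-facing attribute metadata, independent of DOM vs. JS storage.
var componentSchemas = {
    Mesh: {
        src:         { type: "assetRef", assetKind: "mesh" }      // "meshRef" in TXML
    },
    Placeable: {
        transform:   { type: "transform", widget: "gizmo" },
        visible:     { type: "bool", defaultValue: true }
    },
    Light: {
        intensity:   { type: "float", min: 0.0, max: 10.0, defaultValue: 1.0 }
    }
};

// The editor uses the schema to choose a widget and validate edits,
// and a small adapter then writes the value to wherever it is stored.
function validate(component, attribute, value) {
    var meta = componentSchemas[component] && componentSchemas[component][attribute];
    if (!meta) { return false; }
    if (meta.type === "float") {
        return typeof value === "number" && value >= meta.min && value <= meta.max;
    }
    return true; // other types elided in this sketch
}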
>> >> We can discuss this at the office or in the Monday meet but I figured to >> post the question here anyhow as the discussion has been here otherwise >> and there?s no comments yet. I think UIDesigner as a user of the entity >> system is good to analyse now as it?s already fairly complete ? other >> users such as synchronisation are not as far yet. >> >> Cheers, >> ~Toni >> >> On 06 Nov 2013, at 13:06, Toni Alatalo > > wrote: >> >>> Hi returning to this as it?s still unclear for me and we need to >>> implement this for integrating the client side of the synchronisation >>> (that Lasse is working on) & the rest of the client: >>> >>> On 03 Nov 2013, at 11:19, Philipp Slusallek >> > wrote: >>>> No, we would still have to maintain the scene representation in the >>>> DOM. However, we can use the specialized access functions (not the >>>> string-valued attributes) to access the functionality of a (XML3D) >>>> DOM node much more efficiently than having to parse strings. Torstens >>>> example with the rotation is a good example of such a specialized >>>> interface of an (XML3D) DOM node. >>> >>> Yes I think everyone agrees that the representation is good to have >>> there (also) but the question is indeed about efficient ways of >>> modification. >>> >>> Question: are those specialised access functions the same thing to >>> what Kristian refers to in this quote from September 2nd (Re: >>> [Fiware-miwi] massive DOM manipulation benchmark)? I think not as you >>> & Torsten talk about access to DOM elements whereas Kristian talks >>> about something not in the DOM, or? >>> >>> "The DOM is meant as in interface to the user. That's how we designed >>> XML3D. All medium-range modification (a few hundereds per frame) are >>> okay. One must not forget that users will use jQuery etc, which can -- >>> used the wrong way -- slow down the whole application (not only for 3D >>> content). For all other operations, we have concept like Xflow, where >>> the calculation is hidden behind the DOM interface. Also the rendering >>> is also hidden behind the DOM structure. We even have our own memory >>> management to get a better JS performance.? >>> >>> I?m referring to the parts about ?hidden behind the DOM?, used by >>> XFlow and rendering. >>> >>> Do those use and modify some non-DOM data structures which xml3d.js >>> has for the scene internally? >>> >>> We are fine with reading existing docs or even just source code for >>> these things, you don?t have to explain everything in the emails here, >>> but yes/no answers & pointers to more information (e.g. to relevant >>> code files on github) would be very helpful. >>> >>> So now in particular I?m figuring out whether that kind of ?hidden? / >>> non-DOM interface would be the suitable one for network >>> synchronisation to use. >>> >>> I know that changes coming over the net are not usually *that* much, >>> typically within the max hundreds (and usually much less) for which >>> Kristian states that going via DOM manipulation is fine, but there can >>> be quite large bursts (e.g. at login to create the whole scene, or >>> when some logic changes a big part of it). And there are many >>> consumers for the cpu time available in the browser main thread so is >>> perhaps good to avoid wasting even a little of it in continuous >>> movement update handling etc. And in any case it?s good for us to know >>> and understand how the DOM interfacing works ? even if it turns out >>> the efficient alternative is not necessary for networking. 
>>> >>> In the current WebTundras we have the same structure as in the native >>> Tundra, i.e. there are normal software objects (non-DOM) for the >>> entity-system level entities & components, and the 3d visual ones of >>> those are proxies for the concrete implementations of scene nodes and >>> meshes etc. in Three.js / Ogre respectively. And the experimental >>> DOM-integration that Jonne made in WebRocket then mirrors that JS EC >>> structure to the DOM periodically. >>> >>> These two old client architecture sketch diagrams illustrate the options: >>> >>> a) net sync updates DOM, rendering gets updates via DOM: >>> https://rawgithub.com/realXtend/doc/master/dom/rexdom.svg >>> >>> b) net sync updates JS objects, optimised batched sync updates DOM: >>> https://rawgithub.com/realXtend/doc/master/dom/rexdom-dom_as_ui.svg >>> >>> Until now I thought that we settled on b) back then in early September >>> as I understood that it?s what you also do & recommend (and that?s >>> what WebTundras have been doing so far). >>> >>>> Philipp >>> >>> ~Toni >>> >>>> Am 31.10.2013 10:42, schrieb Toni Alatalo: >>>>> On 31 Oct 2013, at 11:23, Torsten Spieldenner >>>>> >>>> > >>>>> wrote: >>>>>> On top the capabilities of the DOM API and additional powers of >>>>>> sophisticated JavaScript-libraries, XML3D introduces an API extension >>>>>> by its own to provide a convenient way to access the DOM elements as >>>>>> XML3D-Elements, for example retrieving translation as XML3DVec3 or >>>>>> Rotation as XML3DRotation (for example, to retrieve the rotation part >>>>>> of an XML3D transformation, you can do this by using jQuery to query >>>>>> the transformation node from the DOM, and access the rotation there >>>>>> then: var r = $("#my_transformation").rotation). >>>>> >>>>> What confuses me here is: >>>>> >>>>> earlier it was concluded that ?the DOM is the UI?, I understood meaning >>>>> how it works for people to >>>>> >>>>> a) author apps ? e.g. declare that oulu3d scene and reX avatar & chat >>>>> apps are used in my html, along this nice christmas themed thing I just >>>>> created (like txml is used in reX now) >>>>> >>>>> b) see and manipulate the state in the browser view-source & developer / >>>>> debugger DOM views (like the Scene Structure editor in Tundra) >>>>> >>>>> c) (something else that escaped me now) >>>>> >>>>> Anyhow the point being that intensive manipulations such as creating and >>>>> manipulating tens of thousands of entities are not done via it. This was >>>>> the response to our initial ?massive dom manipulation? perf test. >>>>> Manipulating transformation is a typical example where that happens ? I >>>>> know that declarative ways can often be a good way to deal with e.g. >>>>> moving objects, like the PhysicsMotor in Tundra and I think what XFlow >>>>> (targets to) cover(s) too, but not always nor for everything so I think >>>>> the point is still valid. >>>>> >>>>> So do you use a different API for heavy tasks and the DOM for other >>>>> things or how does it go? >>>>> >>>>> ~Toni >>>>> >>>>>>> If we think that XML3D (or the DOM and XML3D acts on those >>>>>>> manipulations) >>>>>>> is already this perfect API I'm not sure what we are even trying to >>>>>>> accomplish here? If we are not building a nice to use 3D SDK whats the >>>>>>> target here? >>>>>> I totally agree that we still need to build this easily programmable >>>>>> 3D SDK. But XML3D makes it very simple to maintain the 3D scene in the >>>>>> DOM according to the scene state of the application. 
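To make Torsten's rotation example quoted above concrete, a tiny sketch contrasting the string-valued DOM attribute with the specialized typed interface (plain getElementById used instead of jQuery; the element id follows his example, and the axis/angle fields are assumed from the XML3D interface description, so treat this as illustration rather than verified API usage):

var t = document.getElementById("my_transformation"); // same element as in the jQuery example

// string-valued DOM attribute: whoever reads it has to parse "x y z angle" back out
t.setAttribute("rotation", "0 1 0 1.57");

// specialized typed interface on the same element: no string parsing involved
var r = t.rotation;            // an XML3DRotation, as described above
console.log(r.axis, r.angle);  // assuming axis/angle fields on the rotation object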
>>>>>> You may want to have a look at our example web client for our FiVES >>>>>> server (https://github.com/rryk/FiVES). Although I admit that the code >>>>>> needs some refactoring, the example of how entities are created shows >>>>>> this nicely : As soon as you create a new Entity object, the DOM >>>>>> representation of its scenegraph and its transformations are created >>>>>> automatically and maintained as View of the entity model. As >>>>>> developer, you only need to operate on the client application's API. >>>>>> This could be an example, of how an SDK could operate on the XML3D >>>>>> representation of the scene. >>>>>> >>>>>> >>>>>> ~ Torsten >>>>>> >>>>>>> On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < >>>>>>> Philipp.Slusallek at dfki.de > wrote: >>>>>>> >>>>>>>> Hi Jonne, all, >>>>>>>> >>>>>>>> I am not sure that applying the Tudra API in the Web context is >>>>>>>> really the >>>>>>>> right approach. One of the key differences is that we already have a >>>>>>>> central "scene" data structure and it already handles rendering >>>>>>>> and input >>>>>>>> (DOM events), and other aspects. Also an API oriented approach >>>>>>>> may not be >>>>>>>> the best option in this declarative context either (even though I >>>>>>>> understands that it feels more natural when coming from C++, I >>>>>>>> had the same >>>>>>>> issues). >>>>>>>> >>>>>>>> So let me be a bit more specific: >>>>>>>> >>>>>>>> -- Network: So, yes we need a network module. It's not something that >>>>>>>> "lives" in the DOM but rather watches it and sends updates to the >>>>>>>> server to >>>>>>>> achieve sync. >>>>>>>> >>>>>>>> -- Renderer: Why do we need an object here. Its part of the DOM >>>>>>>> model. The >>>>>>>> only aspect is that we may want to set renderer-specific >>>>>>>> parameters. We >>>>>>>> currently do so through the DOM element, which seems like >>>>>>>> a good >>>>>>>> approach. The issues to be discussed here is what would be the >>>>>>>> advantages >>>>>>>> of a three.js based renderer and implement it of really needed. >>>>>>>> >>>>>>>> -- Scene: This can be done in the DOM nicely and with >>>>>>>> WebComponents its >>>>>>>> even more elegant. The scene objects are simple part of the same >>>>>>>> DOM but >>>>>>>> only some of them get rendered. I am not even sure that we need >>>>>>>> here in >>>>>>>> addition to the DOM and suitable mappings for the components. >>>>>>>> >>>>>>>> -- Asset: As you say this is already built-into the XML3D DOM. I >>>>>>>> see it a >>>>>>>> bit like the network system in that it watches missing resources >>>>>>>> in the DOM >>>>>>>> (plus attributes on priotity and such?) and implements a sort of >>>>>>>> scheduler >>>>>>>> excutes requests in some priority order. A version that only >>>>>>>> loads missing >>>>>>>> resources if is already available, one that goes even further and >>>>>>>> deletes >>>>>>>> unneeded resources could probably be ported from your resource >>>>>>>> manager. >>>>>>>> >>>>>>>> -- UI: That is why we are building on top of HTML, which is a >>>>>>>> pretty good >>>>>>>> UI layer in many requests. We have the 2D-UI GE to look into missing >>>>>>>> functionality >>>>>>>> >>>>>>>> -- Input: This also is already built in as the DOM as events >>>>>>>> traverse the >>>>>>>> DOM. It is widely used in all WEB based UIs and has proven quite >>>>>>>> useful >>>>>>>> there. 
Here we can nicely combine it with the 3D scene model >>>>>>>> where events >>>>>>>> are not only delivered to the 3D graphics elements but can be >>>>>>>> handled by >>>>>>>> the elements or components even before that. >>>>>>>> >>>>>>>> But maybe I am missunderstanding you here? >>>>>>>> >>>>>>>> >>>>>>>> Best, >>>>>>>> >>>>>>>> Philipp >>>>>>>> >>>>>>>> >>>>>>>> Am 30.10.2013 14:31, schrieb Jonne Nauha: >>>>>>>> >>>>>>>>> var client = >>>>>>>>> { >>>>>>>>> network : Object, // Network sync, connect, disconnect etc. >>>>>>>>> functionality. >>>>>>>>> // Implemented by scene sync GE (Ludocraft). >>>>>>>>> >>>>>>>>> renderer : Object, // API for 3D rendering engine access, >>>>>>>>> creating >>>>>>>>> scene nodes, updating their transforms, raycasting etc. >>>>>>>>> // Implemented by 3D UI (Playsign). >>>>>>>>> >>>>>>>>> scene : Object, // API for accessing the >>>>>>>>> Entity-Component-Attribute model. >>>>>>>>> // Implemented by ??? >>>>>>>>> >>>>>>>>> asset : Object, // Not strictly necessary for xml3d as >>>>>>>>> it does >>>>>>>>> asset requests for us, but for three.js this is pretty much needed. >>>>>>>>> // Implemented by ??? >>>>>>>>> >>>>>>>>> ui : Object, // API to add/remove widgets correctly >>>>>>>>> on top >>>>>>>>> of the 3D rendering canvas element, window resize events etc. >>>>>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>>>>> >>>>>>>>> input : Object // API to hook to input events occurring >>>>>>>>> on top >>>>>>>>> of the 3D scene. >>>>>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>>>>> }; >>>>>>>>> >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> Jonne Nauha >>>>>>>>> Meshmoon developer at Adminotech Ltd. >>>>>>>>> www.meshmoon.com >>>>>>>>> >>>>>>>> > >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo >>>>>>>> >>>>>>>>> > wrote: >>>>>>>>> >>>>>>>>> Hi again, >>>>>>>>> new angle here: calling devs *outside* the 3D UI GE: POIs, >>>>>>>>> real-virtual interaction, interface designer, virtual >>>>>>>>> characters, 3d >>>>>>>>> capture, synchronization etc. >>>>>>>>> I think we need to proceed rapidly with integration now and >>>>>>>>> propose >>>>>>>>> that one next step towards that is to analyze the interfaces >>>>>>>>> between >>>>>>>>> 3D UI and other GEs. This is because it seems to be a >>>>>>>>> central part >>>>>>>>> with which many others interface: that is evident in the old >>>>>>>>> 'arch.png' where we analyzed GE/Epic interdependencies: is >>>>>>>>> embedded >>>>>>>>> in section 2 in the Winterthur arch discussion notes which >>>>>>>>> hopefully >>>>>>>>> works for everyone to see, >>>>>>>>> https://docs.google.com/**document/d/**1Sr4rg44yGxK8jj6yBsayCwfitZTq5 >>>>>>>>> **Cdyyb_xC25vhhE/edit >>>>>>>>> I propose a process where we go through the usage patterns >>>>>>>>> case by >>>>>>>>> case. For example so that me & Erno visit the other devs to >>>>>>>>> discuss >>>>>>>>> it. I think a good goal for those sessions is to define and >>>>>>>>> plan the >>>>>>>>> implementation of first tests / minimal use cases where the >>>>>>>>> other >>>>>>>>> GEs are used together with 3D UI to show something. I'd like >>>>>>>>> this >>>>>>>>> first pass to happen quickly so that within 2 weeks from the >>>>>>>>> planning the first case is implemented. So if we get to have the >>>>>>>>> sessions within 2 weeks from now, in a month we'd have demos >>>>>>>>> with >>>>>>>>> all parts. 
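Picking up Philipp's input point above (DOM events traversing to the 3D elements), a minimal sketch of what handling such events could look like. The element ids are made up, and the assumption that a click on the rendered mesh is dispatched to its DOM element follows his description; the mechanism itself is just standard addEventListener:

// The door/scene ids are made up; the point is that the very same addEventListener
// mechanism used for 2D HTML applies to the 3D elements.
var door = document.getElementById("door-group");
door.addEventListener("click", function (ev) {
  console.log("door clicked", ev.target.id);     // handle the pick on the element itself
});

// ...or intercept it earlier, while the event travels through ancestor elements:
document.getElementById("scene-root").addEventListener("click", function (ev) {
  console.log("something in the scene was clicked:", ev.target.id);
}, true);                                        // capture phase: runs before the element's own handler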
>>>>>>>>> Let's organize this so that those who think this applies to >>>>>>>>> their >>>>>>>>> work contact me with private email (to not spam the list), >>>>>>>>> we meet >>>>>>>>> and collect the notes to the wiki and inform this list about >>>>>>>>> that. >>>>>>>>> One question of particular interest to me here is: can the >>>>>>>>> users of >>>>>>>>> 3D UI do what they need well on the entity system level (for >>>>>>>>> example >>>>>>>>> just add and configure mesh components), or do they need deeper >>>>>>>>> access to the 3d scene and rendering (spatial queries, somehow >>>>>>>>> affect the rendering pipeline etc). With Tundra we have the >>>>>>>>> Scene API and the (Ogre)World API(s) to support the latter, >>>>>>>>> and also >>>>>>>>> access to the renderer directly. OTOH the entity system level is >>>>>>>>> renderer independent. >>>>>>>>> Synchronization is a special case which requires good two-way >>>>>>>>> integration with 3D UI. Luckily it's something that we and >>>>>>>>> especially Lasse himself knows already from how it works in >>>>>>>>> Tundra >>>>>>>>> (and in WebTundras). Definitely to be discussed and planned >>>>>>>>> now too >>>>>>>>> of course. >>>>>>>>> So please if you agree that this is a good process do raise >>>>>>>>> hands >>>>>>>>> and let's start working on it! We can discuss this in the >>>>>>>>> weekly too >>>>>>>>> if needed. >>>>>>>>> Cheers, >>>>>>>>> ~Toni >>>>>>>>> >>>>>>>>> ______________________________**_________________ >>>>>>>>> Fiware-miwi mailing list >>>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> > >>>>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> ______________________________**_________________ >>>>>>>>> Fiware-miwi mailing list >>>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>>>> >>>>>>>>> >>>>>>>> -- >>>>>>>> >>>>>>>> ------------------------------**------------------------------** >>>>>>>> ------------- >>>>>>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>>>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>>>>> >>>>>>>> Gesch?ftsf?hrung: >>>>>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>>>>> Dr. Walter Olthoff >>>>>>>> Vorsitzender des Aufsichtsrats: >>>>>>>> Prof. Dr. h.c. Hans A. Aukes >>>>>>>> >>>>>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>>>>> ------------------------------**------------------------------** >>>>>>>> --------------- >>>>>>>> >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Fiware-miwi mailing list >>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>> >>>>>> _______________________________________________ >>>>>> Fiware-miwi mailing list >>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>> >>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>> >>>> >>>> >>>> -- >>>> >>>> ------------------------------------------------------------------------- >>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>> >>>> Gesch?ftsf?hrung: >>>> Prof. Dr. Dr. h.c. mult. 
Wolfgang Wahlster (Vorsitzender) >>>> Dr. Walter Olthoff >>>> Vorsitzender des Aufsichtsrats: >>>> Prof. Dr. h.c. Hans A. Aukes >>>> >>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>> --------------------------------------------------------------------------- >>>> >>> >> >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristian.sons at dfki.de Sun Nov 10 11:35:11 2013 From: kristian.sons at dfki.de (Kristian Sons) Date: Sun, 10 Nov 2013 11:35:11 +0100 Subject: [Fiware-miwi] Xml3drepo access prob In-Reply-To: <527DCEE0.5000202@dfki.de> References: <20131108111714.GO47616@ee.oulu.fi> <527DCEE0.5000202@dfki.de> Message-ID: <527F615F.2060004@dfki.de> Hi, yes, Jozef restricted the access because we put some sensitive data to the repository. I will talk to him and come back to you. Best, Kristian Am 09.11.2013 06:57, schrieb Philipp Slusallek: > Hi, > > I do not. Kristian? > > Philipp > > > Am 08.11.2013 12:17, schrieb Erno Kuusela: >> Hello, >> >> These URLs from have started asking for passwords, >> any ideas if we need a user account now, or is the site down/disabled? >> >> http://verser2.cs.ucl.ac.uk/xml3drepo/ >> http://verser2.cs.ucl.ac.uk/xml3drepo/oulu3dlive/?meshformat=json >> >> Erno >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> > > -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realit?t Campus, Geb. D 3 2, Raum 0.77 66123 Saarbr?cken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775?2235 kristian.sons at dfki.de http://www.xml3d.org Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________ From mach at zhaw.ch Sun Nov 10 17:12:54 2013 From: mach at zhaw.ch (Christof Marti) Date: Sun, 10 Nov 2013 18:12:54 +0200 Subject: [Fiware-miwi] Oulu Beer/Dinner 19:00 Radisson Blu Lobby Message-ID: <181F2316-D7DF-4C43-8333-01690E592488@zhaw.ch> Hi everybody We (Philipp, Torsten & I) have arrived well at Oulu. We meet at 19:00 at the Radisson Blu Lobby to go for Dinner/Beer. Who want?s to join is welcome. @Toni: Could not reach you at the phone. 
Cheers, Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office: TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mobile: +41 79 416 69 50 Mail: mach at zhaw.ch Skype: christof-marti From jarkko at cyberlightning.com Sun Nov 10 17:15:16 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Sun, 10 Nov 2013 18:15:16 +0200 Subject: [Fiware-miwi] Oulu Beer/Dinner 19:00 Radisson Blu Lobby In-Reply-To: <181F2316-D7DF-4C43-8333-01690E592488@zhaw.ch> References: <181F2316-D7DF-4C43-8333-01690E592488@zhaw.ch> Message-ID: Christof and all, Just to let you know that I, at least, am in the middle of a Father's Day celebration here and hence cannot join today. On my behalf, welcome to Oulu and see you tomorrow morning! :) On Sun, Nov 10, 2013 at 6:12 PM, Christof Marti wrote: > Hi everybody > > We (Philipp, Torsten & I) have arrived well at Oulu. > We meet at 19:00 at the Radisson Blu Lobby to go for Dinner/Beer. > Who wants to join is welcome. > > @Toni: Could not reach you at the phone.
> > Cheers, Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > P.O.Box, CH-8401 Winterthur > Office:TD O3.18, Obere Kirchgasse 2 > Phone: +41 58 934 70 63 > Mobile: +41 79 416 69 50 > Mail: mach at zhaw.ch > Skype: christof-marti > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > > -- > Jarkko Vatjus-Anttila > VP, Technology > Cyberlightning Ltd. > > mobile. +358 405245142 > email. jarkko at cyberlightning.com > > Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. > Get your free evaluation version and buy it now! www.cybersli.de > > > www.cyberlightning.com > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From mach at zhaw.ch Mon Nov 11 08:15:26 2013 From: mach at zhaw.ch (Christof Marti) Date: Mon, 11 Nov 2013 09:15:26 +0200 Subject: [Fiware-miwi] Link to F2F minutes document Message-ID: https://docs.google.com/document/d/1UnrOgC5Btyn6AOEGdM6xu4M8itEvssJ4lXHNkrcLzG4/edit Cheers, Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office:TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti From kristian.sons at dfki.de Mon Nov 11 09:00:11 2013 From: kristian.sons at dfki.de (Kristian Sons) Date: Mon, 11 Nov 2013 09:00:11 +0100 Subject: [Fiware-miwi] DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps) In-Reply-To: References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> <5276153C.2020704@dfki.de> <527DDB84.1050202@dfki.de> Message-ID: <52808E8B.10307@dfki.de> Hi, yes, we have a very lean scene representation that we use to store accumulated matrices, lights, resolved shader information and such. This structure is synchronized with the DOM via adapters. 
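As an illustration of the adapter idea Kristian describes (a lean internal scene node kept in sync with its DOM element), a minimal sketch follows; the class, its fields and the observed attribute name are made up for the sketch and are not xml3d.js internals, only MutationObserver is a standard DOM API:

// All names are made up for the sketch; only MutationObserver is a standard DOM API.
function RenderNodeAdapter(domElement) {
  this.el = domElement;
  this.node = { localMatrix: null, dirty: true };  // lean internal representation for the renderer

  var self = this;
  this.observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (m) {
      if (m.type === "attributes" && m.attributeName === "transform") {
        self.node.dirty = true;                    // renderer recomputes the matrix lazily on the next frame
      }
    });
  });
  this.observer.observe(domElement, { attributes: true });
}

// usage sketch:
// var adapter = new RenderNodeAdapter(document.getElementById("group1"));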
Setting data values efficiently is documented here: https://github.com/xml3d/xml3d.js/wiki/How-to-efficiently-set-Xflow-input-with-TypedArrays Hope that helps, Kristian > bleh sorry I apparently misunderstood a key part: > > On 09 Nov 2013, at 11:41, Toni Alatalo > wrote: >> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/scene.js#L125 >> leads to -> >> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/rendernode.js#L45 >> So there's no internal representation for the full scene (apart from >> how it proxies access to the DOM) > > RenderNode does say: > this.children = []; > in > https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/rendernode.js#L23 > > So actually there is a JS list internally for the full scene where all > the scene nodes are as JS RenderNode objects etc --- so the DOM is not > used as the internal structure for the scene, and the whole system > resembles what we have in WebTundra's as well --- or? > > In Chiru-Webclient the corresponding collection of JS objects for the > scene is > https://github.com/Chiru/Chiru-WebClient/blob/master/src/ecmodel/ECManager.js#L24 > (that's not the three.js scene, the SceneManager there own both the > ECManager instance and the three scene). > > ~Toni > >> About setting object positions, upon the root node creation scene >> itself does this to set to initialise the position: >> root.setLocalMatrix(XML3D.math.mat4.create()); >> in >> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/scene.js#L100 >> >> That seems to access the matrix directly in the 'page' which >> apparently is the memory management system you've referred to, and >> set the transform dirty flag: >> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/rendergroup.js#L35 >> >> So that would be one way for the network code to move objects within >> xml3d.js. I suppose the same setLocalMatrix is called also when it is >> manipulated via the DOM, but as these direct JS calls are what >> xml3d.js scene code itself uses for it, perhaps it would be the way >> for e.g. network code too? Or should it just go via DOM? >> >> I'm afraid that the xml3d.js JS API is not documented at all --- at >> least I've been unable to find anything about it via >> https://github.com/xml3d/xml3d.js/wiki >> >> ~Toni >> >> >> On 09 Nov 2013, at 08:51, Philipp Slusallek >> > wrote: >> >>> Hi, >>> >>> Since many of us a traveling, let me take a stab at your questions. >>> Kristian, Torsten, please correct any technical errors. >>> >>> Xflow (which Kristian refers to) is used for realtime animation of >>> characters and such operations. We have shown real-time animation of >>> more then 25 characters with skeletal animation and skinning >>> completely running in JS (without HW acceleration yet). This should >>> show that XML3D can well be used for real-time changes even for >>> things like geometry. >>> >>> Xflow works on the internal data representation of XML3D which is >>> tailored for fast rendering through WebGL (typed arrays and such). >>> This internal data structures are similar to what three.js >>> maintains. There is actually not much difference at this layer. When >>> a frame needs to rendered, both renderers simply go through this >>> "display list" as efficiently as possible. >>> >>> What Kristian refers to regarding memory management is the issues >>> that we encountered with garbage collection in JS implementations. 
>>> As a result we allocate large arrays once and manage the data within >>> those arrays ourselves. This has turned out to avoid quite frequent >>> rendering stalls whenever the JS garbage collector kicked in. >>> >>> Each of the XML3D elements (e.g. mesh, data) offers JS APIs (should >>> be documented in the Wiki, I hope) to access these internal data >>> structures directly and so other JS code can have equally good >>> access to these data structures. You can (but should not) also go >>> through the text based DOM attributes but this will be slow for >>> large data (e.g. vertices). I believe its is till fine to use these >>> interfaces for small things like changing the time for an animation >>> or such. >>> >>> One thing where you have to go through the DOM is creating the DOM >>> objects themselves. There is little we can do about that. >>> >>> Of course, if your modifications are about things that can be well >>> described by Xflow, you ideally should use xflow and eventually >>> benefit from the HW acceleration that we are implementing, where >>> potentially all the computations would happen on the GPU and not be >>> touched by JS any more. I am not sure what the status of this task >>> is, though. >>> >>> Hope this helps. Kristian, Torsten: Feel free to add more detail and >>> corrections. >>> >>> >>> Best, >>> >>> Philipp >>> >>> Am 06.11.2013 12:06, schrieb Toni Alatalo: >>>> Hi returning to this as it's still unclear for me and we need to >>>> implement this for integrating the client side of the synchronisation >>>> (that Lasse is working on) & the rest of the client: >>>> >>>> On 03 Nov 2013, at 11:19, Philipp Slusallek >>>> >>>> > wrote: >>>>> No, we would still have to maintain the scene representation in the >>>>> DOM. However, we can use the specialized access functions (not the >>>>> string-valued attributes) to access the functionality of a (XML3D) DOM >>>>> node much more efficiently than having to parse strings. Torstens >>>>> example with the rotation is a good example of such a specialized >>>>> interface of an (XML3D) DOM node. >>>> >>>> Yes I think everyone agrees that the representation is good to have >>>> there (also) but the question is indeed about efficient ways of >>>> modification. >>>> >>>> Question: are those specialised access functions the same thing to what >>>> Kristian refers to in this quote from September 2nd (Re: [Fiware-miwi] >>>> massive DOM manipulation benchmark)? I think not as you & Torsten talk >>>> about access to DOM elements whereas Kristian talks about something not >>>> in the DOM, or? >>>> >>>> "The DOM is meant as in interface to the user. That's how we designed >>>> XML3D. All medium-range modification (a few hundereds per frame) are >>>> okay. One must not forget that users will use jQuery etc, which can -- >>>> used the wrong way -- slow down the whole application (not only for 3D >>>> content). For all other operations, we have concept like Xflow, where >>>> the calculation is hidden behind the DOM interface. Also the rendering >>>> is also hidden behind the DOM structure. We even have our own memory >>>> management to get a better JS performance." >>>> >>>> I'm referring to the parts about 'hidden behind the DOM', used by XFlow >>>> and rendering. >>>> >>>> Do those use and modify some non-DOM data structures which xml3d.js has >>>> for the scene internally? 
>>>> >>>> We are fine with reading existing docs or even just source code for >>>> these things, you don't have to explain everything in the emails here, >>>> but yes/no answers & pointers to more information (e.g. to relevant >>>> code >>>> files on github) would be very helpful. >>>> >>>> So now in particular I'm figuring out whether that kind of 'hidden' / >>>> non-DOM interface would be the suitable one for network synchronisation >>>> to use. >>>> >>>> I know that changes coming over the net are not usually *that* much, >>>> typically within the max hundreds (and usually much less) for which >>>> Kristian states that going via DOM manipulation is fine, but there can >>>> be quite large bursts (e.g. at login to create the whole scene, or when >>>> some logic changes a big part of it). And there are many consumers for >>>> the cpu time available in the browser main thread so is perhaps good to >>>> avoid wasting even a little of it in continuous movement update >>>> handling >>>> etc. And in any case it's good for us to know and understand how >>>> the DOM >>>> interfacing works --- even if it turns out the efficient alternative is >>>> not necessary for networking. >>>> >>>> In the current WebTundras we have the same structure as in the native >>>> Tundra, i.e. there are normal software objects (non-DOM) for the >>>> entity-system level entities & components, and the 3d visual ones of >>>> those are proxies for the concrete implementations of scene nodes and >>>> meshes etc. in Three.js / Ogre respectively. And the experimental >>>> DOM-integration that Jonne made in WebRocket then mirrors that JS EC >>>> structure to the DOM periodically. >>>> >>>> These two old client architecture sketch diagrams illustrate the >>>> options: >>>> >>>> a) net sync updates DOM, rendering gets updates via DOM: >>>> https://rawgithub.com/realXtend/doc/master/dom/rexdom.svg >>>> >>>> b) net sync updates JS objects, optimised batched sync updates DOM: >>>> https://rawgithub.com/realXtend/doc/master/dom/rexdom-dom_as_ui.svg >>>> >>>> Until now I thought that we settled on b) back then in early September >>>> as I understood that it's what you also do & recommend (and that's what >>>> WebTundras have been doing so far). >>>> >>>>> Philipp >>>> >>>> ~Toni >>>> >>>>> Am 31.10.2013 10:42, schrieb Toni Alatalo: >>>>>> On 31 Oct 2013, at 11:23, Torsten Spieldenner >>>>>> >>>>>> > >>>>>> wrote: >>>>>>> On top the capabilities of the DOM API and additional powers of >>>>>>> sophisticated JavaScript-libraries, XML3D introduces an API >>>>>>> extension >>>>>>> by its own to provide a convenient way to access the DOM elements as >>>>>>> XML3D-Elements, for example retrieving translation as XML3DVec3 or >>>>>>> Rotation as XML3DRotation (for example, to retrieve the rotation >>>>>>> part >>>>>>> of an XML3D transformation, you can do this by using jQuery to query >>>>>>> the transformation node from the DOM, and access the rotation there >>>>>>> then: var r = $("#my_transformation").rotation). >>>>>> >>>>>> What confuses me here is: >>>>>> >>>>>> earlier it was concluded that 'the DOM is the UI', I understood >>>>>> meaning >>>>>> how it works for people to >>>>>> >>>>>> a) author apps --- e.g. 
declare that oulu3d scene and reX avatar >>>>>> & chat >>>>>> apps are used in my html, along this nice christmas themed thing >>>>>> I just >>>>>> created (like txml is used in reX now) >>>>>> >>>>>> b) see and manipulate the state in the browser view-source & >>>>>> developer / >>>>>> debugger DOM views (like the Scene Structure editor in Tundra) >>>>>> >>>>>> c) (something else that escaped me now) >>>>>> >>>>>> Anyhow the point being that intensive manipulations such as >>>>>> creating and >>>>>> manipulating tens of thousands of entities are not done via it. >>>>>> This was >>>>>> the response to our initial 'massive dom manipulation' perf test. >>>>>> Manipulating transformation is a typical example where that >>>>>> happens --- I >>>>>> know that declarative ways can often be a good way to deal with e.g. >>>>>> moving objects, like the PhysicsMotor in Tundra and I think what >>>>>> XFlow >>>>>> (targets to) cover(s) too, but not always nor for everything so I >>>>>> think >>>>>> the point is still valid. >>>>>> >>>>>> So do you use a different API for heavy tasks and the DOM for other >>>>>> things or how does it go? >>>>>> >>>>>> ~Toni >>>>>> >>>>>>>> If we think that XML3D (or the DOM and XML3D acts on those >>>>>>>> manipulations) >>>>>>>> is already this perfect API I'm not sure what we are even trying to >>>>>>>> accomplish here? If we are not building a nice to use 3D SDK >>>>>>>> whats the >>>>>>>> target here? >>>>>>> I totally agree that we still need to build this easily programmable >>>>>>> 3D SDK. But XML3D makes it very simple to maintain the 3D scene >>>>>>> in the >>>>>>> DOM according to the scene state of the application. >>>>>>> You may want to have a look at our example web client for our FiVES >>>>>>> server (https://github.com/rryk/FiVES). Although I admit that >>>>>>> the code >>>>>>> needs some refactoring, the example of how entities are created >>>>>>> shows >>>>>>> this nicely : As soon as you create a new Entity object, the DOM >>>>>>> representation of its scenegraph and its transformations are created >>>>>>> automatically and maintained as View of the entity model. As >>>>>>> developer, you only need to operate on the client application's API. >>>>>>> This could be an example, of how an SDK could operate on the XML3D >>>>>>> representation of the scene. >>>>>>> >>>>>>> >>>>>>> ~ Torsten >>>>>>> >>>>>>>> On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < >>>>>>>> Philipp.Slusallek at dfki.de >>>>>>>> > >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hi Jonne, all, >>>>>>>>> >>>>>>>>> I am not sure that applying the Tudra API in the Web context is >>>>>>>>> really the >>>>>>>>> right approach. One of the key differences is that we already >>>>>>>>> have a >>>>>>>>> central "scene" data structure and it already handles rendering >>>>>>>>> and input >>>>>>>>> (DOM events), and other aspects. Also an API oriented approach may >>>>>>>>> not be >>>>>>>>> the best option in this declarative context either (even though I >>>>>>>>> understands that it feels more natural when coming from C++, I had >>>>>>>>> the same >>>>>>>>> issues). >>>>>>>>> >>>>>>>>> So let me be a bit more specific: >>>>>>>>> >>>>>>>>> -- Network: So, yes we need a network module. It's not >>>>>>>>> something that >>>>>>>>> "lives" in the DOM but rather watches it and sends updates to the >>>>>>>>> server to >>>>>>>>> achieve sync. >>>>>>>>> >>>>>>>>> -- Renderer: Why do we need an object here. Its part of the DOM >>>>>>>>> model. 
The >>>>>>>>> only aspect is that we may want to set renderer-specific >>>>>>>>> parameters. We >>>>>>>>> currently do so through the DOM element, which seems like >>>>>>>>> a good >>>>>>>>> approach. The issues to be discussed here is what would be the >>>>>>>>> advantages >>>>>>>>> of a three.js based renderer and implement it of really needed. >>>>>>>>> >>>>>>>>> -- Scene: This can be done in the DOM nicely and with >>>>>>>>> WebComponents its >>>>>>>>> even more elegant. The scene objects are simple part of the same >>>>>>>>> DOM but >>>>>>>>> only some of them get rendered. I am not even sure that we need >>>>>>>>> here in >>>>>>>>> addition to the DOM and suitable mappings for the components. >>>>>>>>> >>>>>>>>> -- Asset: As you say this is already built-into the XML3D DOM. I >>>>>>>>> see it a >>>>>>>>> bit like the network system in that it watches missing resources >>>>>>>>> in the DOM >>>>>>>>> (plus attributes on priotity and such?) and implements a sort of >>>>>>>>> scheduler >>>>>>>>> excutes requests in some priority order. A version that only loads >>>>>>>>> missing >>>>>>>>> resources if is already available, one that goes even further and >>>>>>>>> deletes >>>>>>>>> unneeded resources could probably be ported from your resource >>>>>>>>> manager. >>>>>>>>> >>>>>>>>> -- UI: That is why we are building on top of HTML, which is a >>>>>>>>> pretty good >>>>>>>>> UI layer in many requests. We have the 2D-UI GE to look into >>>>>>>>> missing >>>>>>>>> functionality >>>>>>>>> >>>>>>>>> -- Input: This also is already built in as the DOM as events >>>>>>>>> traverse the >>>>>>>>> DOM. It is widely used in all WEB based UIs and has proven quite >>>>>>>>> useful >>>>>>>>> there. Here we can nicely combine it with the 3D scene model where >>>>>>>>> events >>>>>>>>> are not only delivered to the 3D graphics elements but can be >>>>>>>>> handled by >>>>>>>>> the elements or components even before that. >>>>>>>>> >>>>>>>>> But maybe I am missunderstanding you here? >>>>>>>>> >>>>>>>>> >>>>>>>>> Best, >>>>>>>>> >>>>>>>>> Philipp >>>>>>>>> >>>>>>>>> >>>>>>>>> Am 30.10.2013 14:31, schrieb Jonne Nauha: >>>>>>>>> >>>>>>>>>> var client = >>>>>>>>>> { >>>>>>>>>> network : Object, // Network sync, connect, disconnect etc. >>>>>>>>>> functionality. >>>>>>>>>> // Implemented by scene sync GE (Ludocraft). >>>>>>>>>> >>>>>>>>>> renderer : Object, // API for 3D rendering engine access, >>>>>>>>>> creating >>>>>>>>>> scene nodes, updating their transforms, raycasting etc. >>>>>>>>>> // Implemented by 3D UI (Playsign). >>>>>>>>>> >>>>>>>>>> scene : Object, // API for accessing the >>>>>>>>>> Entity-Component-Attribute model. >>>>>>>>>> // Implemented by ??? >>>>>>>>>> >>>>>>>>>> asset : Object, // Not strictly necessary for xml3d as >>>>>>>>>> it does >>>>>>>>>> asset requests for us, but for three.js this is pretty much >>>>>>>>>> needed. >>>>>>>>>> // Implemented by ??? >>>>>>>>>> >>>>>>>>>> ui : Object, // API to add/remove widgets correctly >>>>>>>>>> on top >>>>>>>>>> of the 3D rendering canvas element, window resize events etc. >>>>>>>>>> // Implemented by 2D/Input GE >>>>>>>>>> (Adminotech). >>>>>>>>>> >>>>>>>>>> input : Object // API to hook to input events occurring >>>>>>>>>> on top >>>>>>>>>> of the 3D scene. >>>>>>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>>>>>> }; >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Best regards, >>>>>>>>>> Jonne Nauha >>>>>>>>>> Meshmoon developer at Adminotech Ltd. 
>>>>>>>>>> www.meshmoon.com >>>>>>>>>> >>>>>>>>> >>>>>>>>>> > >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> > wrote: >>>>>>>>>> >>>>>>>>>> Hi again, >>>>>>>>>> new angle here: calling devs *outside* the 3D UI GE: POIs, >>>>>>>>>> real-virtual interaction, interface designer, virtual >>>>>>>>>> characters, 3d >>>>>>>>>> capture, synchronization etc. >>>>>>>>>> I think we need to proceed rapidly with integration now and >>>>>>>>>> propose >>>>>>>>>> that one next step towards that is to analyze the interfaces >>>>>>>>>> between >>>>>>>>>> 3D UI and other GEs. This is because it seems to be a central >>>>>>>>>> part >>>>>>>>>> with which many others interface: that is evident in the old >>>>>>>>>> 'arch.png' where we analyzed GE/Epic interdependencies: is >>>>>>>>>> embedded >>>>>>>>>> in section 2 in the Winterthur arch discussion notes which >>>>>>>>>> hopefully >>>>>>>>>> works for everyone to see, >>>>>>>>>> https://docs.google.com/**document/d/**1Sr4rg44yGxK8jj6yBsayCwfitZTq5 >>>>>>>>>> **Cdyyb_xC25vhhE/edit >>>>>>>>>> I propose a process where we go through the usage patterns >>>>>>>>>> case by >>>>>>>>>> case. For example so that me & Erno visit the other devs to >>>>>>>>>> discuss >>>>>>>>>> it. I think a good goal for those sessions is to define and >>>>>>>>>> plan the >>>>>>>>>> implementation of first tests / minimal use cases where >>>>>>>>>> the other >>>>>>>>>> GEs are used together with 3D UI to show something. I'd >>>>>>>>>> like this >>>>>>>>>> first pass to happen quickly so that within 2 weeks from the >>>>>>>>>> planning the first case is implemented. So if we get to >>>>>>>>>> have the >>>>>>>>>> sessions within 2 weeks from now, in a month we'd have >>>>>>>>>> demos with >>>>>>>>>> all parts. >>>>>>>>>> Let's organize this so that those who think this applies >>>>>>>>>> to their >>>>>>>>>> work contact me with private email (to not spam the list), we >>>>>>>>>> meet >>>>>>>>>> and collect the notes to the wiki and inform this list about >>>>>>>>>> that. >>>>>>>>>> One question of particular interest to me here is: can the >>>>>>>>>> users of >>>>>>>>>> 3D UI do what they need well on the entity system level (for >>>>>>>>>> example >>>>>>>>>> just add and configure mesh components), or do they need >>>>>>>>>> deeper >>>>>>>>>> access to the 3d scene and rendering (spatial queries, somehow >>>>>>>>>> affect the rendering pipeline etc). With Tundra we have the >>>>>>>>>> Scene API and the (Ogre)World API(s) to support the latter, >>>>>>>>>> and also >>>>>>>>>> access to the renderer directly. OTOH the entity system >>>>>>>>>> level is >>>>>>>>>> renderer independent. >>>>>>>>>> Synchronization is a special case which requires good two-way >>>>>>>>>> integration with 3D UI. Luckily it's something that we and >>>>>>>>>> especially Lasse himself knows already from how it works in >>>>>>>>>> Tundra >>>>>>>>>> (and in WebTundras). Definitely to be discussed and planned >>>>>>>>>> now too >>>>>>>>>> of course. >>>>>>>>>> So please if you agree that this is a good process do >>>>>>>>>> raise hands >>>>>>>>>> and let's start working on it! We can discuss this in the >>>>>>>>>> weekly too >>>>>>>>>> if needed. 
>>>>>>>>>> Cheers, >>>>>>>>>> ~Toni >>>>>>>>>> >>>>>>>>>> ______________________________**_________________ >>>>>>>>>> Fiware-miwi mailing list >>>>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> > >>>>>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> ______________________________**_________________ >>>>>>>>>> Fiware-miwi mailing list >>>>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>>>> >>>>>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> -- >>>>>>>>> >>>>>>>>> ------------------------------**------------------------------** >>>>>>>>> ------------- >>>>>>>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>>>>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>>>>>> >>>>>>>>> Gesch?ftsf?hrung: >>>>>>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>>>>>> Dr. Walter Olthoff >>>>>>>>> Vorsitzender des Aufsichtsrats: >>>>>>>>> Prof. Dr. h.c. Hans A. Aukes >>>>>>>>> >>>>>>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>>>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>>>>>> ------------------------------**------------------------------** >>>>>>>>> --------------- >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Fiware-miwi mailing list >>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>> >>>>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Fiware-miwi mailing list >>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>> >>>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Fiware-miwi mailing list >>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>> >>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>> >>>>> >>>>> >>>>> -- >>>>> >>>>> ------------------------------------------------------------------------- >>>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>> >>>>> Gesch?ftsf?hrung: >>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>> Dr. Walter Olthoff >>>>> Vorsitzender des Aufsichtsrats: >>>>> Prof. Dr. h.c. Hans A. Aukes >>>>> >>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>> --------------------------------------------------------------------------- >>>>> >>>> >>> >>> >>> -- >>> >>> ------------------------------------------------------------------------- >>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>> >>> Gesch?ftsf?hrung: >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>> Dr. Walter Olthoff >>> Vorsitzender des Aufsichtsrats: >>> Prof. Dr. h.c. Hans A. 
Aukes >>> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>> --------------------------------------------------------------------------- >>> >> > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realit?t Campus, Geb. D 3 2, Raum 0.77 66123 Saarbr?cken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775--2235 kristian.sons at dfki.de http://www.xml3d.org Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristian.sons at dfki.de Mon Nov 11 11:26:30 2013 From: kristian.sons at dfki.de (Kristian Sons) Date: Mon, 11 Nov 2013 11:26:30 +0100 Subject: [Fiware-miwi] DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps) In-Reply-To: <52808E8B.10307@dfki.de> References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> <5276153C.2020704@dfki.de> <527DDB84.1050202@dfki.de> <52808E8B.10307@dfki.de> Message-ID: <5280B0D6.4070908@dfki.de> BTW, as soon as xml3d.org is up again, we will release a version of xml3d.js that will contain the new format handling API that allows for asychronous loading of data blocks. As an example, we'll add a OpenCTM loader that exploits WebWorkers for asychnronous decoding. Best, Kristian Am 11.11.2013 09:00, schrieb Kristian Sons: > Hi, > > yes, we have a very lean scene representation that we use to store > accumulated matrices, lights, resolved shader information and such. > This structure is synchronized with the DOM via adapters. > > Setting data values efficiently is documented here: > https://github.com/xml3d/xml3d.js/wiki/How-to-efficiently-set-Xflow-input-with-TypedArrays > > Hope that helps, > Kristian > > >> bleh sorry I apparently misunderstood a key part: >> >> On 09 Nov 2013, at 11:41, Toni Alatalo > > wrote: >>> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/scene.js#L125 >>> leads to -> >>> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/rendernode.js#L45 >>> So there's no internal representation for the full scene (apart from >>> how it proxies access to the DOM) >> >> RenderNode does say: >> this.children = []; >> in >> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/rendernode.js#L23 >> >> So actually there is a JS list internally for the full scene where >> all the scene nodes are as JS RenderNode objects etc --- so the DOM >> is not used as the internal structure for the scene, and the whole >> system resembles what we have in WebTundra's as well --- or? >> >> In Chiru-Webclient the corresponding collection of JS objects for the >> scene is >> https://github.com/Chiru/Chiru-WebClient/blob/master/src/ecmodel/ECManager.js#L24 >> (that's not the three.js scene, the SceneManager there own both the >> ECManager instance and the three scene). 
>> >> ~Toni >> >>> About setting object positions, upon the root node creation scene >>> itself does this to set to initialise the position: >>> root.setLocalMatrix(XML3D.math.mat4.create()); >>> in >>> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/scene.js#L100 >>> >>> That seems to access the matrix directly in the 'page' which >>> apparently is the memory management system you've referred to, and >>> set the transform dirty flag: >>> https://github.com/xml3d/xml3d.js/blob/develop/src/renderer/scene/rendergroup.js#L35 >>> >>> So that would be one way for the network code to move objects within >>> xml3d.js. I suppose the same setLocalMatrix is called also when it >>> is manipulated via the DOM, but as these direct JS calls are what >>> xml3d.js scene code itself uses for it, perhaps it would be the way >>> for e.g. network code too? Or should it just go via DOM? >>> >>> I'm afraid that the xml3d.js JS API is not documented at all --- at >>> least I've been unable to find anything about it via >>> https://github.com/xml3d/xml3d.js/wiki >>> >>> ~Toni >>> >>> >>> On 09 Nov 2013, at 08:51, Philipp Slusallek >>> > wrote: >>> >>>> Hi, >>>> >>>> Since many of us a traveling, let me take a stab at your questions. >>>> Kristian, Torsten, please correct any technical errors. >>>> >>>> Xflow (which Kristian refers to) is used for realtime animation of >>>> characters and such operations. We have shown real-time animation >>>> of more then 25 characters with skeletal animation and skinning >>>> completely running in JS (without HW acceleration yet). This should >>>> show that XML3D can well be used for real-time changes even for >>>> things like geometry. >>>> >>>> Xflow works on the internal data representation of XML3D which is >>>> tailored for fast rendering through WebGL (typed arrays and such). >>>> This internal data structures are similar to what three.js >>>> maintains. There is actually not much difference at this layer. >>>> When a frame needs to rendered, both renderers simply go through >>>> this "display list" as efficiently as possible. >>>> >>>> What Kristian refers to regarding memory management is the issues >>>> that we encountered with garbage collection in JS implementations. >>>> As a result we allocate large arrays once and manage the data >>>> within those arrays ourselves. This has turned out to avoid quite >>>> frequent rendering stalls whenever the JS garbage collector kicked in. >>>> >>>> Each of the XML3D elements (e.g. mesh, data) offers JS APIs (should >>>> be documented in the Wiki, I hope) to access these internal data >>>> structures directly and so other JS code can have equally good >>>> access to these data structures. You can (but should not) also go >>>> through the text based DOM attributes but this will be slow for >>>> large data (e.g. vertices). I believe its is till fine to use these >>>> interfaces for small things like changing the time for an animation >>>> or such. >>>> >>>> One thing where you have to go through the DOM is creating the DOM >>>> objects themselves. There is little we can do about that. >>>> >>>> Of course, if your modifications are about things that can be well >>>> described by Xflow, you ideally should use xflow and eventually >>>> benefit from the HW acceleration that we are implementing, where >>>> potentially all the computations would happen on the GPU and not be >>>> touched by JS any more. I am not sure what the status of this task >>>> is, though. >>>> >>>> Hope this helps. 
Kristian, Torsten: Feel free to add more detail >>>> and corrections. >>>> >>>> >>>> Best, >>>> >>>> Philipp >>>> >>>> Am 06.11.2013 12:06, schrieb Toni Alatalo: >>>>> Hi returning to this as it's still unclear for me and we need to >>>>> implement this for integrating the client side of the synchronisation >>>>> (that Lasse is working on) & the rest of the client: >>>>> >>>>> On 03 Nov 2013, at 11:19, Philipp Slusallek >>>>> >>>>> > wrote: >>>>>> No, we would still have to maintain the scene representation in the >>>>>> DOM. However, we can use the specialized access functions (not the >>>>>> string-valued attributes) to access the functionality of a >>>>>> (XML3D) DOM >>>>>> node much more efficiently than having to parse strings. Torstens >>>>>> example with the rotation is a good example of such a specialized >>>>>> interface of an (XML3D) DOM node. >>>>> >>>>> Yes I think everyone agrees that the representation is good to have >>>>> there (also) but the question is indeed about efficient ways of >>>>> modification. >>>>> >>>>> Question: are those specialised access functions the same thing to >>>>> what >>>>> Kristian refers to in this quote from September 2nd (Re: [Fiware-miwi] >>>>> massive DOM manipulation benchmark)? I think not as you & Torsten talk >>>>> about access to DOM elements whereas Kristian talks about >>>>> something not >>>>> in the DOM, or? >>>>> >>>>> "The DOM is meant as in interface to the user. That's how we designed >>>>> XML3D. All medium-range modification (a few hundereds per frame) are >>>>> okay. One must not forget that users will use jQuery etc, which can -- >>>>> used the wrong way -- slow down the whole application (not only for 3D >>>>> content). For all other operations, we have concept like Xflow, where >>>>> the calculation is hidden behind the DOM interface. Also the rendering >>>>> is also hidden behind the DOM structure. We even have our own memory >>>>> management to get a better JS performance." >>>>> >>>>> I'm referring to the parts about 'hidden behind the DOM', used by >>>>> XFlow >>>>> and rendering. >>>>> >>>>> Do those use and modify some non-DOM data structures which >>>>> xml3d.js has >>>>> for the scene internally? >>>>> >>>>> We are fine with reading existing docs or even just source code for >>>>> these things, you don't have to explain everything in the emails here, >>>>> but yes/no answers & pointers to more information (e.g. to >>>>> relevant code >>>>> files on github) would be very helpful. >>>>> >>>>> So now in particular I'm figuring out whether that kind of 'hidden' / >>>>> non-DOM interface would be the suitable one for network >>>>> synchronisation >>>>> to use. >>>>> >>>>> I know that changes coming over the net are not usually *that* much, >>>>> typically within the max hundreds (and usually much less) for which >>>>> Kristian states that going via DOM manipulation is fine, but there can >>>>> be quite large bursts (e.g. at login to create the whole scene, or >>>>> when >>>>> some logic changes a big part of it). And there are many consumers for >>>>> the cpu time available in the browser main thread so is perhaps >>>>> good to >>>>> avoid wasting even a little of it in continuous movement update >>>>> handling >>>>> etc. And in any case it's good for us to know and understand how >>>>> the DOM >>>>> interfacing works --- even if it turns out the efficient >>>>> alternative is >>>>> not necessary for networking. >>>>> >>>>> In the current WebTundras we have the same structure as in the native >>>>> Tundra, i.e. 
there are normal software objects (non-DOM) for the >>>>> entity-system level entities & components, and the 3d visual ones of >>>>> those are proxies for the concrete implementations of scene nodes and >>>>> meshes etc. in Three.js / Ogre respectively. And the experimental >>>>> DOM-integration that Jonne made in WebRocket then mirrors that JS EC >>>>> structure to the DOM periodically. >>>>> >>>>> These two old client architecture sketch diagrams illustrate the >>>>> options: >>>>> >>>>> a) net sync updates DOM, rendering gets updates via DOM: >>>>> https://rawgithub.com/realXtend/doc/master/dom/rexdom.svg >>>>> >>>>> b) net sync updates JS objects, optimised batched sync updates DOM: >>>>> https://rawgithub.com/realXtend/doc/master/dom/rexdom-dom_as_ui.svg >>>>> >>>>> Until now I thought that we settled on b) back then in early September >>>>> as I understood that it's what you also do & recommend (and that's >>>>> what >>>>> WebTundras have been doing so far). >>>>> >>>>>> Philipp >>>>> >>>>> ~Toni >>>>> >>>>>> Am 31.10.2013 10:42, schrieb Toni Alatalo: >>>>>>> On 31 Oct 2013, at 11:23, Torsten Spieldenner >>>>>>> >>>>>>> > >>>>>>> wrote: >>>>>>>> On top the capabilities of the DOM API and additional powers of >>>>>>>> sophisticated JavaScript-libraries, XML3D introduces an API >>>>>>>> extension >>>>>>>> by its own to provide a convenient way to access the DOM >>>>>>>> elements as >>>>>>>> XML3D-Elements, for example retrieving translation as XML3DVec3 or >>>>>>>> Rotation as XML3DRotation (for example, to retrieve the >>>>>>>> rotation part >>>>>>>> of an XML3D transformation, you can do this by using jQuery to >>>>>>>> query >>>>>>>> the transformation node from the DOM, and access the rotation there >>>>>>>> then: var r = $("#my_transformation").rotation). >>>>>>> >>>>>>> What confuses me here is: >>>>>>> >>>>>>> earlier it was concluded that 'the DOM is the UI', I understood >>>>>>> meaning >>>>>>> how it works for people to >>>>>>> >>>>>>> a) author apps --- e.g. declare that oulu3d scene and reX avatar >>>>>>> & chat >>>>>>> apps are used in my html, along this nice christmas themed thing >>>>>>> I just >>>>>>> created (like txml is used in reX now) >>>>>>> >>>>>>> b) see and manipulate the state in the browser view-source & >>>>>>> developer / >>>>>>> debugger DOM views (like the Scene Structure editor in Tundra) >>>>>>> >>>>>>> c) (something else that escaped me now) >>>>>>> >>>>>>> Anyhow the point being that intensive manipulations such as >>>>>>> creating and >>>>>>> manipulating tens of thousands of entities are not done via it. >>>>>>> This was >>>>>>> the response to our initial 'massive dom manipulation' perf test. >>>>>>> Manipulating transformation is a typical example where that >>>>>>> happens --- I >>>>>>> know that declarative ways can often be a good way to deal with e.g. >>>>>>> moving objects, like the PhysicsMotor in Tundra and I think what >>>>>>> XFlow >>>>>>> (targets to) cover(s) too, but not always nor for everything so >>>>>>> I think >>>>>>> the point is still valid. >>>>>>> >>>>>>> So do you use a different API for heavy tasks and the DOM for other >>>>>>> things or how does it go? >>>>>>> >>>>>>> ~Toni >>>>>>> >>>>>>>>> If we think that XML3D (or the DOM and XML3D acts on those >>>>>>>>> manipulations) >>>>>>>>> is already this perfect API I'm not sure what we are even >>>>>>>>> trying to >>>>>>>>> accomplish here? If we are not building a nice to use 3D SDK >>>>>>>>> whats the >>>>>>>>> target here? 
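As a rough illustration of option b) above (net sync updates JS objects, an optimised batched pass updates the DOM), the idea could be sketched like this; all names are hypothetical and not taken from WebTundra or xml3d.js:

    // Network sync writes only to plain JS entity objects...
    var entities = {};   // id -> { x, y, z, dirty, domTransform (<transform> element) }

    function onNetTransformUpdate(id, x, y, z) {   // called by the sync code
      var e = entities[id];
      if (!e) { return; }
      e.x = x; e.y = y; e.z = z;
      e.dirty = true;                              // note: no DOM access here
    }

    // ...and a batched pass mirrors only the dirty ones to the DOM, once per frame.
    function flushToDom() {
      for (var id in entities) {
        var e = entities[id];
        if (e.dirty) {
          e.domTransform.setAttribute("translation", e.x + " " + e.y + " " + e.z);
          e.dirty = false;
        }
      }
      window.requestAnimationFrame(flushToDom);
    }
    window.requestAnimationFrame(flushToDom);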
>>>>>>>> I totally agree that we still need to build this easily
>>>>>>>> programmable 3D SDK. But XML3D makes it very simple to maintain
>>>>>>>> the 3D scene in the DOM according to the scene state of the
>>>>>>>> application.
>>>>>>>> You may want to have a look at our example web client for our
>>>>>>>> FiVES server (https://github.com/rryk/FiVES). Although I admit
>>>>>>>> that the code needs some refactoring, the example of how entities
>>>>>>>> are created shows this nicely: as soon as you create a new Entity
>>>>>>>> object, the DOM representation of its scenegraph and its
>>>>>>>> transformations are created automatically and maintained as a
>>>>>>>> view of the entity model. As a developer, you only need to
>>>>>>>> operate on the client application's API.
>>>>>>>> This could be an example of how an SDK could operate on the XML3D
>>>>>>>> representation of the scene.
>>>>>>>>
>>>>>>>> ~ Torsten
>>>>>>>>
>>>>>>>>> On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek <
>>>>>>>>> Philipp.Slusallek at dfki.de
>>>>>>>>> > wrote:
>>>>>>>>>
>>>>>>>>>> Hi Jonne, all,
>>>>>>>>>>
>>>>>>>>>> I am not sure that applying the Tundra API in the Web context
>>>>>>>>>> is really the right approach. One of the key differences is
>>>>>>>>>> that we already have a central "scene" data structure and it
>>>>>>>>>> already handles rendering and input (DOM events), and other
>>>>>>>>>> aspects. Also, an API-oriented approach may not be the best
>>>>>>>>>> option in this declarative context either (even though I
>>>>>>>>>> understand that it feels more natural when coming from C++; I
>>>>>>>>>> had the same issues).
>>>>>>>>>>
>>>>>>>>>> So let me be a bit more specific:
>>>>>>>>>>
>>>>>>>>>> -- Network: So, yes, we need a network module. It's not
>>>>>>>>>> something that "lives" in the DOM but rather watches it and
>>>>>>>>>> sends updates to the server to achieve sync.
>>>>>>>>>>
>>>>>>>>>> -- Renderer: Why do we need an object here? It's part of the
>>>>>>>>>> DOM model. The only aspect is that we may want to set
>>>>>>>>>> renderer-specific parameters. We currently do so through the
>>>>>>>>>> DOM element, which seems like a good approach. The issue to be
>>>>>>>>>> discussed here is what the advantages of a three.js based
>>>>>>>>>> renderer would be, and to implement it only if really needed.
>>>>>>>>>>
>>>>>>>>>> -- Scene: This can be done in the DOM nicely, and with
>>>>>>>>>> WebComponents it's even more elegant. The scene objects are
>>>>>>>>>> simply part of the same DOM, but only some of them get
>>>>>>>>>> rendered. I am not even sure what we need here in addition to
>>>>>>>>>> the DOM and suitable mappings for the components.
>>>>>>>>>>
>>>>>>>>>> -- Asset: As you say, this is already built into the XML3D DOM.
>>>>>>>>>> I see it a bit like the network system in that it watches
>>>>>>>>>> missing resources in the DOM (plus attributes on priority and
>>>>>>>>>> such?) and implements a sort of scheduler that executes
>>>>>>>>>> requests in some priority order. A version that only loads
>>>>>>>>>> missing resources is already available; one that goes even
>>>>>>>>>> further and deletes unneeded resources could probably be ported
>>>>>>>>>> from your resource manager.
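Purely as an illustration of the "network module watches the DOM and sends updates to the server" idea above, a minimal sketch with a standard MutationObserver could look like this; the WebSocket endpoint and the message format are invented for the example:

    // Watch the <xml3d> subtree and forward changes to a (placeholder) sync server.
    var sceneRoot = document.querySelector("xml3d");          // assumes one XML3D scene in the page
    var socket = new WebSocket("ws://sync.example.invalid/scene");  // placeholder endpoint

    var observer = new MutationObserver(function (mutations) {
      var changes = mutations.map(function (m) {
        return {
          kind: m.type,                        // "attributes" or "childList"
          target: m.target.id || null,
          attribute: m.attributeName || null
        };
      });
      if (socket.readyState === WebSocket.OPEN) {
        socket.send(JSON.stringify(changes));  // message format is made up here
      }
    });

    observer.observe(sceneRoot, { attributes: true, childList: true, subtree: true });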
>>>>>>>>>> >>>>>>>>>> -- UI: That is why we are building on top of HTML, which is a >>>>>>>>>> pretty good >>>>>>>>>> UI layer in many requests. We have the 2D-UI GE to look into >>>>>>>>>> missing >>>>>>>>>> functionality >>>>>>>>>> >>>>>>>>>> -- Input: This also is already built in as the DOM as events >>>>>>>>>> traverse the >>>>>>>>>> DOM. It is widely used in all WEB based UIs and has proven quite >>>>>>>>>> useful >>>>>>>>>> there. Here we can nicely combine it with the 3D scene model >>>>>>>>>> where >>>>>>>>>> events >>>>>>>>>> are not only delivered to the 3D graphics elements but can be >>>>>>>>>> handled by >>>>>>>>>> the elements or components even before that. >>>>>>>>>> >>>>>>>>>> But maybe I am missunderstanding you here? >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Best, >>>>>>>>>> >>>>>>>>>> Philipp >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Am 30.10.2013 14:31, schrieb Jonne Nauha: >>>>>>>>>> >>>>>>>>>>> var client = >>>>>>>>>>> { >>>>>>>>>>> network : Object, // Network sync, connect, disconnect >>>>>>>>>>> etc. >>>>>>>>>>> functionality. >>>>>>>>>>> // Implemented by scene sync GE (Ludocraft). >>>>>>>>>>> >>>>>>>>>>> renderer : Object, // API for 3D rendering engine access, >>>>>>>>>>> creating >>>>>>>>>>> scene nodes, updating their transforms, raycasting etc. >>>>>>>>>>> // Implemented by 3D UI (Playsign). >>>>>>>>>>> >>>>>>>>>>> scene : Object, // API for accessing the >>>>>>>>>>> Entity-Component-Attribute model. >>>>>>>>>>> // Implemented by ??? >>>>>>>>>>> >>>>>>>>>>> asset : Object, // Not strictly necessary for xml3d as >>>>>>>>>>> it does >>>>>>>>>>> asset requests for us, but for three.js this is pretty much >>>>>>>>>>> needed. >>>>>>>>>>> // Implemented by ??? >>>>>>>>>>> >>>>>>>>>>> ui : Object, // API to add/remove widgets correctly >>>>>>>>>>> on top >>>>>>>>>>> of the 3D rendering canvas element, window resize events etc. >>>>>>>>>>> // Implemented by 2D/Input GE >>>>>>>>>>> (Adminotech). >>>>>>>>>>> >>>>>>>>>>> input : Object // API to hook to input events occurring >>>>>>>>>>> on top >>>>>>>>>>> of the 3D scene. >>>>>>>>>>> // Implemented by 2D/Input GE (Adminotech). >>>>>>>>>>> }; >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Best regards, >>>>>>>>>>> Jonne Nauha >>>>>>>>>>> Meshmoon developer at Adminotech Ltd. >>>>>>>>>>> www.meshmoon.com >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> > >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> > wrote: >>>>>>>>>>> >>>>>>>>>>> Hi again, >>>>>>>>>>> new angle here: calling devs *outside* the 3D UI GE: POIs, >>>>>>>>>>> real-virtual interaction, interface designer, virtual >>>>>>>>>>> characters, 3d >>>>>>>>>>> capture, synchronization etc. >>>>>>>>>>> I think we need to proceed rapidly with integration now and >>>>>>>>>>> propose >>>>>>>>>>> that one next step towards that is to analyze the interfaces >>>>>>>>>>> between >>>>>>>>>>> 3D UI and other GEs. This is because it seems to be a central >>>>>>>>>>> part >>>>>>>>>>> with which many others interface: that is evident in the old >>>>>>>>>>> 'arch.png' where we analyzed GE/Epic interdependencies: is >>>>>>>>>>> embedded >>>>>>>>>>> in section 2 in the Winterthur arch discussion notes which >>>>>>>>>>> hopefully >>>>>>>>>>> works for everyone to see, >>>>>>>>>>> https://docs.google.com/**document/d/**1Sr4rg44yGxK8jj6yBsayCwfitZTq5 >>>>>>>>>>> **Cdyyb_xC25vhhE/edit >>>>>>>>>>> I propose a process where we go through the usage patterns >>>>>>>>>>> case by >>>>>>>>>>> case. 
For example so that me & Erno visit the other devs to >>>>>>>>>>> discuss >>>>>>>>>>> it. I think a good goal for those sessions is to define and >>>>>>>>>>> plan the >>>>>>>>>>> implementation of first tests / minimal use cases where >>>>>>>>>>> the other >>>>>>>>>>> GEs are used together with 3D UI to show something. I'd >>>>>>>>>>> like this >>>>>>>>>>> first pass to happen quickly so that within 2 weeks from the >>>>>>>>>>> planning the first case is implemented. So if we get to >>>>>>>>>>> have the >>>>>>>>>>> sessions within 2 weeks from now, in a month we'd have >>>>>>>>>>> demos with >>>>>>>>>>> all parts. >>>>>>>>>>> Let's organize this so that those who think this applies >>>>>>>>>>> to their >>>>>>>>>>> work contact me with private email (to not spam the list), we >>>>>>>>>>> meet >>>>>>>>>>> and collect the notes to the wiki and inform this list about >>>>>>>>>>> that. >>>>>>>>>>> One question of particular interest to me here is: can the >>>>>>>>>>> users of >>>>>>>>>>> 3D UI do what they need well on the entity system level (for >>>>>>>>>>> example >>>>>>>>>>> just add and configure mesh components), or do they need >>>>>>>>>>> deeper >>>>>>>>>>> access to the 3d scene and rendering (spatial queries, >>>>>>>>>>> somehow >>>>>>>>>>> affect the rendering pipeline etc). With Tundra we have the >>>>>>>>>>> Scene API and the (Ogre)World API(s) to support the latter, >>>>>>>>>>> and also >>>>>>>>>>> access to the renderer directly. OTOH the entity system >>>>>>>>>>> level is >>>>>>>>>>> renderer independent. >>>>>>>>>>> Synchronization is a special case which requires good two-way >>>>>>>>>>> integration with 3D UI. Luckily it's something that we and >>>>>>>>>>> especially Lasse himself knows already from how it works in >>>>>>>>>>> Tundra >>>>>>>>>>> (and in WebTundras). Definitely to be discussed and planned >>>>>>>>>>> now too >>>>>>>>>>> of course. >>>>>>>>>>> So please if you agree that this is a good process do >>>>>>>>>>> raise hands >>>>>>>>>>> and let's start working on it! We can discuss this in the >>>>>>>>>>> weekly too >>>>>>>>>>> if needed. >>>>>>>>>>> Cheers, >>>>>>>>>>> ~Toni >>>>>>>>>>> >>>>>>>>>>> ______________________________**_________________ >>>>>>>>>>> Fiware-miwi mailing list >>>>>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> > >>>>>>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> ______________________________**_________________ >>>>>>>>>>> Fiware-miwi mailing list >>>>>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>>>>> >>>>>>>>>>> https://lists.fi-ware.eu/**listinfo/fiware-miwi >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> >>>>>>>>>> ------------------------------**------------------------------** >>>>>>>>>> ------------- >>>>>>>>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) >>>>>>>>>> GmbH >>>>>>>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>>>>>>> >>>>>>>>>> Gesch?ftsf?hrung: >>>>>>>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>>>>>>> Dr. Walter Olthoff >>>>>>>>>> Vorsitzender des Aufsichtsrats: >>>>>>>>>> Prof. Dr. h.c. Hans A. 
Aukes >>>>>>>>>> >>>>>>>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>>>>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>>>>>>> ------------------------------**------------------------------** >>>>>>>>>> --------------- >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Fiware-miwi mailing list >>>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>>> >>>>>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Fiware-miwi mailing list >>>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>>> >>>>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>>> >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Fiware-miwi mailing list >>>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>>> >>>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> >>>>>> ------------------------------------------------------------------------- >>>>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>>> >>>>>> Gesch?ftsf?hrung: >>>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>>> Dr. Walter Olthoff >>>>>> Vorsitzender des Aufsichtsrats: >>>>>> Prof. Dr. h.c. Hans A. Aukes >>>>>> >>>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>>> --------------------------------------------------------------------------- >>>>>> >>>>> >>>> >>>> >>>> -- >>>> >>>> ------------------------------------------------------------------------- >>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>> >>>> Gesch?ftsf?hrung: >>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>> Dr. Walter Olthoff >>>> Vorsitzender des Aufsichtsrats: >>>> Prof. Dr. h.c. Hans A. Aukes >>>> >>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>> --------------------------------------------------------------------------- >>>> >>> >> >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi > > > -- > _______________________________________________________________________________ > > Kristian Sons > Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH, DFKI > Agenten und Simulierte Realit?t > Campus, Geb. D 3 2, Raum 0.77 > 66123 Saarbr?cken, Germany > > Phone: +49 681 85775-3833 > Phone: +49 681 302-3833 > Fax: +49 681 85775--2235 > kristian.sons at dfki.de > http://www.xml3d.org > > Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > > Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes > Amtsgericht Kaiserslautern, HRB 2313 > _______________________________________________________________________________ > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realit?t Campus, Geb. 
D 3 2, Raum 0.77
66123 Saarbrücken, Germany

Phone: +49 681 85775-3833
Phone: +49 681 302-3833
Fax: +49 681 85775-2235
kristian.sons at dfki.de
http://www.xml3d.org

Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
Amtsgericht Kaiserslautern, HRB 2313
_______________________________________________________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ari.okkonen at cie.fi Mon Nov 11 13:41:00 2013
From: ari.okkonen at cie.fi (Ari Okkonen CIE)
Date: Mon, 11 Nov 2013 14:41:00 +0200
Subject: [Fiware-miwi] Evening program for FI-WARE Oulu meeting Monday 11.11.
Message-ID: <5280D05C.70806@cie.fi>

Hi,

FI-WARE at Oulu 11.11. evening starts at 20:00 at Viking Restaurant Harald,
Kirkkokatu 16, Oulu.
https://maps.google.fi/maps?q=Viikinkiravintola+Harald&oe=utf-8&client=firefox-a&fb=1&gl=fi&hq=ravintola+harald&hnear=0x468032a8c02185c1:0x8bb02d322b12e97d,Oulu&cid=0,0,7890193311483842325&t=m&z=15&iwloc=A

BR
Ari
--
Ari Okkonen
CIE, University of Oulu

From Philipp.Slusallek at dfki.de Mon Nov 11 16:59:31 2013
From: Philipp.Slusallek at dfki.de (Philipp Slusallek)
Date: Mon, 11 Nov 2013 16:59:31 +0100
Subject: [Fiware-miwi] TF-Draft WP proposal
Message-ID: <5280FEE3.6000704@dfki.de>

Hi,

As just discussed, here is the first draft of the TF proposal for WP13.

Best,

Philipp

--
-------------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern

Geschäftsführung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes

Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
---------------------------------------------------------------------------
-------------- next part --------------
A non-text attachment was scrubbed...
Name: TF proposal - B 1 1 WP13-phs.docx
Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
Size: 243131 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: slusallek.vcf
Type: text/x-vcard
Size: 441 bytes
Desc: not available
URL: 

From mach at zhaw.ch Wed Nov 13 07:14:33 2013
From: mach at zhaw.ch (Christof Marti)
Date: Wed, 13 Nov 2013 07:14:33 +0100
Subject: [Fiware-miwi] Todays telco canceled
Message-ID: 

Hi everybody

Thanks to all for the good meeting on Monday. It was a pleasure meeting you all in person and I enjoyed the time in Oulu, even though it was way too short :). Special thanks also to CIE for hosting the meeting and organizing the evening event.

Because we could go through all topics at this meeting, we cancel today's telco. I will send out some instructions later this morning about the setup of the roadmap and tracker entries and the other open action points. Please use the time saved for setting up roadmap and tracker entries.
Best regards,
Christof
----
InIT Cloud Computing Lab - ICCLab   http://cloudcomp.ch
Institut of Applied Information Technology - InIT
Zurich University of Applied Sciences - ZHAW
Phone: +41 58 934 70 63
Skype: christof-marti

From mach at zhaw.ch Thu Nov 14 18:38:35 2013
From: mach at zhaw.ch (Christof Marti)
Date: Thu, 14 Nov 2013 18:38:35 +0100
Subject: [Fiware-miwi] Creation of the WP13 roadmap
Message-ID: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch>

Hi everybody

I prepared the WP13 roadmap wiki-page [1]. It is already in the public wiki. It is not yet linked from the FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki "WP13 Integration" page [4]. As soon as it has an acceptable state I will link it from the public roadmap page.

The next step is now that you (the GE owners) create the Feature pages (and, if required, UserStories) for your GE.
Deadline: Monday 18.11. EOB

To create these pages please follow the instructions in the "How to upload the full description of backlog entries to the Wiki" tutorial on the main page [2]. WITH ONE EXCEPTION: use the roadmap page [1] as a starting point to create your Features (instead of the Materializing page) and place the link in the respective minor version section of your GE, i.e. where you plan to have the feature implemented (see extract below):
- Release 3.2 (until January 2014)
- Release 3.3 (until April 2014)
(We will copy the links to the "Materializing pages" as soon as the Architecture, Backlog, etc. are ready and publicly available.)

To name your entries follow the naming conventions from the "How to assign identifiers to FI-WARE Backlog entries" tutorial [3]. For WP13 use the FIWARE.Feature.MiWi.<GE name>.[<sub-category>.]<Feature name> form:
e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport
or FIWARE.Feature.MiWi.Synchronization.SceneAPI
etc.
You can choose the <Feature name> yourself. When defining a Feature keep in mind that it has to be implemented within one minor release.

For the content of your Feature (UserStory) page, copy the Feature (UserStory) template from the tutorial page [2].

[1] WP13 roadmap: http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI
[2] Tutorial to create backlog entries: https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki
[3] Naming conventions: https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow)
[4] WP13 Integration page: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration

Cheers,
Christof
----
InIT Cloud Computing Lab - ICCLab   http://cloudcomp.ch
Institut of Applied Information Technology - InIT
Zurich University of Applied Sciences - ZHAW
School of Engineering
Phone: +41 58 934 70 63
Skype: christof-marti

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Bildschirmfoto 2013-11-14 um 18.00.05.png
Type: image/png
Size: 15188 bytes
Desc: not available
URL: 

From tomi.sarni at cyberlightning.com Fri Nov 15 07:12:09 2013
From: tomi.sarni at cyberlightning.com (Tomi Sarni)
Date: Fri, 15 Nov 2013 08:12:09 +0200
Subject: [Fiware-miwi] Creation of the WP13 roadmap
In-Reply-To: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch>
References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch>
Message-ID: 

Christof, are you in charge of maintaining the main page? Some kind of introduction and template could be there to ease up the filling process. Well, I'm gonna start with this page I guess anyhow?
https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Advanced_User_Interfaces_in_FI-WARE versus https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Internet_of_Things_%28IoT%29_Services_Enablement_in_FI-WARE On Thu, Nov 14, 2013 at 7:38 PM, Christof Marti wrote: > Hi everybody > > I prepared the WP13 roadmap wiki-page [1] > It is *already in the public wiki*. Not yet linked from the > FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki > ?WP13 Integration? page [4]. > As soon it has an acceptable state I will link it from the public roadmap > page. > > Next step is now that you (the GE owners) *create the Feature-pages *(and > if required UserStories)* for your GE*. > *Deadline: Monday 18.11. EOB* > > To create this pages please follow the instructions on the ?How to upload > the full description of backlog entries to the Wiki? tutorial on the main > page [2] > . > WITH ONE EXCEPTION: Use the *roadmap-page *[1] > as a starting point to create your Features (instead of the > Materializing page) and place the link in the respective *minor version > section *of your GE, when you plan to have the feature implemented (see > extract below) > - Release 3.2 (until January 2014) > - Release 3.3 (until April 2014) > (We will copy the links to the ?Materializing pages?, as soon the > Architecture, Backlog, etc. is ready and publicly available.) > > > To name your entries follow the naming conventions from the ?How to assign > identifiers to FI-WARE Backlog entries? tutorial [3]. > For WP13 use the FIWARE.Feature.MiWi..[.] form: > e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport > or FIWARE.Feature.MiWi.Synchronization.SceneAPI > etc. > You can choose the yourself. When defining a Feature keep in > mind, that it has to be implemented within one minor release. > > For content of your Feature (UserStory) page, copy the Feature (UserStory) > template from the tutorial page [2] > . > > [1] WP13 roadmap: > http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI > [2] Tutorial to create backlog entries: > https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki > [3] Naming conventions: > https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) > [4] WP13 Integration page: > http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration > > > Cheers, > Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > Phone: +41 58 934 70 63 > Skype: christof-marti > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomi.sarni at cyberlightning.com Fri Nov 15 07:20:13 2013 From: tomi.sarni at cyberlightning.com (Tomi Sarni) Date: Fri, 15 Nov 2013 08:20:13 +0200 Subject: [Fiware-miwi] Creation of the WP13 roadmap In-Reply-To: References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> Message-ID: ah nevermind i sort of understood now. On Fri, Nov 15, 2013 at 8:12 AM, Tomi Sarni wrote: > Cristof, you in charge of maintaining the main page? some kind of > introduction and template could be there to ease up the filling process. > Well im gona start with this page i guess anyhow? 
> > > https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Advanced_User_Interfaces_in_FI-WARE > > versus > > > https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Internet_of_Things_%28IoT%29_Services_Enablement_in_FI-WARE > > > On Thu, Nov 14, 2013 at 7:38 PM, Christof Marti wrote: > >> Hi everybody >> >> I prepared the WP13 roadmap wiki-page [1] >> It is *already in the public wiki*. Not yet linked from the >> FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki >> ?WP13 Integration? page [4]. >> As soon it has an acceptable state I will link it from the public roadmap >> page. >> >> Next step is now that you (the GE owners) *create the Feature-pages *(and >> if required UserStories)* for your GE*. >> *Deadline: Monday 18.11. EOB* >> >> To create this pages please follow the instructions on the ?How to upload >> the full description of backlog entries to the Wiki? tutorial on the main >> page [2] >> . >> WITH ONE EXCEPTION: Use the *roadmap-page *[1] >> as a starting point to create your Features (instead of the >> Materializing page) and place the link in the respective *minor version >> section *of your GE, when you plan to have the feature implemented (see >> extract below) >> - Release 3.2 (until January 2014) >> - Release 3.3 (until April 2014) >> (We will copy the links to the ?Materializing pages?, as soon the >> Architecture, Backlog, etc. is ready and publicly available.) >> >> >> To name your entries follow the naming conventions from the ?How to >> assign identifiers to FI-WARE Backlog entries? tutorial [3]. >> For WP13 use the FIWARE.Feature.MiWi..[.] >> form: >> e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport >> or FIWARE.Feature.MiWi.Synchronization.SceneAPI >> etc. >> You can choose the yourself. When defining a Feature keep in >> mind, that it has to be implemented within one minor release. >> >> For content of your Feature (UserStory) page, copy the Feature >> (UserStory) template from the tutorial page [2] >> . >> >> [1] WP13 roadmap: >> http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI >> [2] Tutorial to create backlog entries: >> https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki >> [3] Naming conventions: >> https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) >> [4] WP13 Integration page: >> http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration >> >> >> Cheers, >> Christof >> ---- >> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch >> Institut of Applied Information Technology - InIT >> Zurich University of Applied Sciences - ZHAW >> School of Engineering >> Phone: +41 58 934 70 63 >> Skype: christof-marti >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tharanga.wijethilake at cyberlightning.com Fri Nov 15 07:49:23 2013 From: tharanga.wijethilake at cyberlightning.com (Tharanga Wijethilake) Date: Fri, 15 Nov 2013 08:49:23 +0200 Subject: [Fiware-miwi] Creation of the WP13 roadmap In-Reply-To: References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> Message-ID: Hello Christof, You have suggested that we should "Use the *roadmap-page *[1] as a starting point to create your Features". 
But we , at least we at the cyberlightning, are not able to edit the page. Could you please instruct us on that? Further I am trying to follow the instructions on "Tutorial to create backlog entries". I can not see (Which implies that we do not have access rights) the edit buttons on "Materializing Advanced User Interfaces in FI-WARE" page. BR ~Tharanga On Fri, Nov 15, 2013 at 8:20 AM, Tomi Sarni wrote: > ah nevermind i sort of understood now. > > > On Fri, Nov 15, 2013 at 8:12 AM, Tomi Sarni > wrote: > >> Cristof, you in charge of maintaining the main page? some kind of >> introduction and template could be there to ease up the filling process. >> Well im gona start with this page i guess anyhow? >> >> >> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Advanced_User_Interfaces_in_FI-WARE >> >> versus >> >> >> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Internet_of_Things_%28IoT%29_Services_Enablement_in_FI-WARE >> >> >> On Thu, Nov 14, 2013 at 7:38 PM, Christof Marti wrote: >> >>> Hi everybody >>> >>> I prepared the WP13 roadmap wiki-page [1] >>> It is *already in the public wiki*. Not yet linked from the >>> FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki >>> ?WP13 Integration? page [4]. >>> As soon it has an acceptable state I will link it from the public roadmap >>> page. >>> >>> Next step is now that you (the GE owners) *create the Feature-pages *(and >>> if required UserStories)* for your GE*. >>> *Deadline: Monday 18.11. EOB* >>> >>> To create this pages please follow the instructions on the ?How to >>> upload the full description of backlog entries to the Wiki? tutorial on the >>> main page [2] >>> . >>> WITH ONE EXCEPTION: Use the *roadmap-page *[1] >>> as a starting point to create your Features (instead of the >>> Materializing page) and place the link in the respective *minor version >>> section *of your GE, when you plan to have the feature implemented (see >>> extract below) >>> - Release 3.2 (until January 2014) >>> - Release 3.3 (until April 2014) >>> (We will copy the links to the ?Materializing pages?, as soon the >>> Architecture, Backlog, etc. is ready and publicly available.) >>> >>> >>> To name your entries follow the naming conventions from the ?How to >>> assign identifiers to FI-WARE Backlog entries? tutorial [3]. >>> For WP13 use the FIWARE.Feature.MiWi..[.] >>> form: >>> e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport >>> or FIWARE.Feature.MiWi.Synchronization.SceneAPI >>> etc. >>> You can choose the yourself. When defining a Feature keep in >>> mind, that it has to be implemented within one minor release. >>> >>> For content of your Feature (UserStory) page, copy the Feature >>> (UserStory) template from the tutorial page [2] >>> . 
>>> >>> [1] WP13 roadmap: >>> http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI >>> [2] Tutorial to create backlog entries: >>> https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki >>> [3] Naming conventions: >>> https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) >>> [4] WP13 Integration page: >>> http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration >>> >>> >>> Cheers, >>> Christof >>> ---- >>> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch >>> Institut of Applied Information Technology - InIT >>> Zurich University of Applied Sciences - ZHAW >>> School of Engineering >>> Phone: +41 58 934 70 63 >>> Skype: christof-marti >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> >> > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tharanga.wijethilake at cyberlightning.com Fri Nov 15 07:59:54 2013 From: tharanga.wijethilake at cyberlightning.com (Tharanga Wijethilake) Date: Fri, 15 Nov 2013 08:59:54 +0200 Subject: [Fiware-miwi] Creation of the WP13 roadmap In-Reply-To: References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> Message-ID: Sorry for multiple mails.. One more thing. Link 3 is not working. I think We could get in to writing once we have these. BR ~Tharanga On Fri, Nov 15, 2013 at 8:49 AM, Tharanga Wijethilake < tharanga.wijethilake at cyberlightning.com> wrote: > Hello Christof, > > You have suggested that we should "Use the *roadmap-page *[1] > as a starting point to create your Features". But we , at least we at > the cyberlightning, are not able to edit the page. Could you please > instruct us on that? Further I am trying to follow the instructions on > "Tutorial to create backlog entries". I can not see (Which implies that we > do not have access rights) the edit buttons on "Materializing Advanced > User Interfaces in FI-WARE" page. > > BR > > ~Tharanga > > > > On Fri, Nov 15, 2013 at 8:20 AM, Tomi Sarni > wrote: > >> ah nevermind i sort of understood now. >> >> >> On Fri, Nov 15, 2013 at 8:12 AM, Tomi Sarni < >> tomi.sarni at cyberlightning.com> wrote: >> >>> Cristof, you in charge of maintaining the main page? some kind of >>> introduction and template could be there to ease up the filling process. >>> Well im gona start with this page i guess anyhow? >>> >>> >>> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Advanced_User_Interfaces_in_FI-WARE >>> >>> versus >>> >>> >>> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Internet_of_Things_%28IoT%29_Services_Enablement_in_FI-WARE >>> >>> >>> On Thu, Nov 14, 2013 at 7:38 PM, Christof Marti wrote: >>> >>>> Hi everybody >>>> >>>> I prepared the WP13 roadmap wiki-page [1] >>>> It is *already in the public wiki*. Not yet linked from the >>>> FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki >>>> ?WP13 Integration? page [4]. >>>> As soon it has an acceptable state I will link it from the public roadmap >>>> page. >>>> >>>> Next step is now that you (the GE owners) *create the Feature-pages *(and >>>> if required UserStories)* for your GE*. >>>> *Deadline: Monday 18.11. 
EOB* >>>> >>>> To create this pages please follow the instructions on the ?How to >>>> upload the full description of backlog entries to the Wiki? tutorial on the >>>> main page [2] >>>> . >>>> WITH ONE EXCEPTION: Use the *roadmap-page *[1] >>>> as a starting point to create your Features (instead of the >>>> Materializing page) and place the link in the respective *minor >>>> version section *of your GE, when you plan to have the feature >>>> implemented (see extract below) >>>> - Release 3.2 (until January 2014) >>>> - Release 3.3 (until April 2014) >>>> (We will copy the links to the ?Materializing pages?, as soon the >>>> Architecture, Backlog, etc. is ready and publicly available.) >>>> >>>> >>>> To name your entries follow the naming conventions from the ?How to >>>> assign identifiers to FI-WARE Backlog entries? tutorial [3]. >>>> For WP13 use the FIWARE.Feature.MiWi..[.] >>>> form: >>>> e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport >>>> or FIWARE.Feature.MiWi.Synchronization.SceneAPI >>>> etc. >>>> You can choose the yourself. When defining a Feature keep >>>> in mind, that it has to be implemented within one minor release. >>>> >>>> For content of your Feature (UserStory) page, copy the Feature >>>> (UserStory) template from the tutorial page [2] >>>> . >>>> >>>> [1] WP13 roadmap: >>>> http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI >>>> [2] Tutorial to create backlog entries: >>>> https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki >>>> [3] Naming conventions: >>>> https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) >>>> [4] WP13 Integration page: >>>> http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration >>>> >>>> >>>> Cheers, >>>> Christof >>>> ---- >>>> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch >>>> Institut of Applied Information Technology - InIT >>>> Zurich University of Applied Sciences - ZHAW >>>> School of Engineering >>>> Phone: +41 58 934 70 63 >>>> Skype: christof-marti >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jarkko at cyberlightning.com Fri Nov 15 08:02:55 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Fri, 15 Nov 2013 09:02:55 +0200 Subject: [Fiware-miwi] Creation of the WP13 roadmap In-Reply-To: References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> Message-ID: I dug a littlebit, and maybe this is the correct link..? http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_%28convention_to_follow%29 On Fri, Nov 15, 2013 at 8:59 AM, Tharanga Wijethilake < tharanga.wijethilake at cyberlightning.com> wrote: > Sorry for multiple mails.. > One more thing. Link 3 is not working. I think We could get in to writing > once we have these. > > BR > > ~Tharanga > > > On Fri, Nov 15, 2013 at 8:49 AM, Tharanga Wijethilake < > tharanga.wijethilake at cyberlightning.com> wrote: > >> Hello Christof, >> >> You have suggested that we should "Use the *roadmap-page *[1] >> as a starting point to create your Features". 
But we , at least we at >> the cyberlightning, are not able to edit the page. Could you please >> instruct us on that? Further I am trying to follow the instructions on >> "Tutorial to create backlog entries". I can not see (Which implies that we >> do not have access rights) the edit buttons on "Materializing Advanced >> User Interfaces in FI-WARE" page. >> >> BR >> >> ~Tharanga >> >> >> >> On Fri, Nov 15, 2013 at 8:20 AM, Tomi Sarni < >> tomi.sarni at cyberlightning.com> wrote: >> >>> ah nevermind i sort of understood now. >>> >>> >>> On Fri, Nov 15, 2013 at 8:12 AM, Tomi Sarni < >>> tomi.sarni at cyberlightning.com> wrote: >>> >>>> Cristof, you in charge of maintaining the main page? some kind of >>>> introduction and template could be there to ease up the filling process. >>>> Well im gona start with this page i guess anyhow? >>>> >>>> >>>> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Advanced_User_Interfaces_in_FI-WARE >>>> >>>> versus >>>> >>>> >>>> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Internet_of_Things_%28IoT%29_Services_Enablement_in_FI-WARE >>>> >>>> >>>> On Thu, Nov 14, 2013 at 7:38 PM, Christof Marti wrote: >>>> >>>>> Hi everybody >>>>> >>>>> I prepared the WP13 roadmap wiki-page [1] >>>>> It is *already in the public wiki*. Not yet linked from the >>>>> FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki >>>>> ?WP13 Integration? page [4]. >>>>> As soon it has an acceptable state I will link it from the public roadmap >>>>> page. >>>>> >>>>> Next step is now that you (the GE owners) *create the Feature-pages *(and >>>>> if required UserStories)* for your GE*. >>>>> *Deadline: Monday 18.11. EOB* >>>>> >>>>> To create this pages please follow the instructions on the ?How to >>>>> upload the full description of backlog entries to the Wiki? tutorial on the >>>>> main page [2] >>>>> . >>>>> WITH ONE EXCEPTION: Use the *roadmap-page *[1] >>>>> as a starting point to create your Features (instead of the >>>>> Materializing page) and place the link in the respective *minor >>>>> version section *of your GE, when you plan to have the feature >>>>> implemented (see extract below) >>>>> - Release 3.2 (until January 2014) >>>>> - Release 3.3 (until April 2014) >>>>> (We will copy the links to the ?Materializing pages?, as soon the >>>>> Architecture, Backlog, etc. is ready and publicly available.) >>>>> >>>>> >>>>> To name your entries follow the naming conventions from the ?How to >>>>> assign identifiers to FI-WARE Backlog entries? tutorial [3]. >>>>> For WP13 use the FIWARE.Feature.MiWi..[.] >>>>> form: >>>>> e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport >>>>> or FIWARE.Feature.MiWi.Synchronization.SceneAPI >>>>> etc. >>>>> You can choose the yourself. When defining a Feature keep >>>>> in mind, that it has to be implemented within one minor release. >>>>> >>>>> For content of your Feature (UserStory) page, copy the Feature >>>>> (UserStory) template from the tutorial page [2] >>>>> . 
>>>>> >>>>> [1] WP13 roadmap: >>>>> http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI >>>>> [2] Tutorial to create backlog entries: >>>>> https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki >>>>> [3] Naming conventions: >>>>> https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) >>>>> [4] WP13 Integration page: >>>>> http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration >>>>> >>>>> >>>>> Cheers, >>>>> Christof >>>>> ---- >>>>> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch >>>>> Institut of Applied Information Technology - InIT >>>>> Zurich University of Applied Sciences - ZHAW >>>>> School of Engineering >>>>> Phone: +41 58 934 70 63 >>>>> Skype: christof-marti >>>>> >>>>> _______________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>> >>>>> >>>> >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> >> > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tharanga.wijethilake at cyberlightning.com Fri Nov 15 08:04:18 2013 From: tharanga.wijethilake at cyberlightning.com (Tharanga Wijethilake) Date: Fri, 15 Nov 2013 09:04:18 +0200 Subject: [Fiware-miwi] Creation of the WP13 roadmap In-Reply-To: References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> Message-ID: Thanks....I would not have found that in million years..:) ~Tharanga On Fri, Nov 15, 2013 at 9:02 AM, Jarkko Vatjus-Anttila < jarkko at cyberlightning.com> wrote: > I dug a littlebit, and maybe this is the correct link..? > > > http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_%28convention_to_follow%29 > > > On Fri, Nov 15, 2013 at 8:59 AM, Tharanga Wijethilake < > tharanga.wijethilake at cyberlightning.com> wrote: > >> Sorry for multiple mails.. >> One more thing. Link 3 is not working. I think We could get in to writing >> once we have these. >> >> BR >> >> ~Tharanga >> >> >> On Fri, Nov 15, 2013 at 8:49 AM, Tharanga Wijethilake < >> tharanga.wijethilake at cyberlightning.com> wrote: >> >>> Hello Christof, >>> >>> You have suggested that we should "Use the *roadmap-page *[1] >>> as a starting point to create your Features". But we , at least we at >>> the cyberlightning, are not able to edit the page. Could you please >>> instruct us on that? Further I am trying to follow the instructions on >>> "Tutorial to create backlog entries". I can not see (Which implies that we >>> do not have access rights) the edit buttons on "Materializing Advanced >>> User Interfaces in FI-WARE" page. >>> >>> BR >>> >>> ~Tharanga >>> >>> >>> >>> On Fri, Nov 15, 2013 at 8:20 AM, Tomi Sarni < >>> tomi.sarni at cyberlightning.com> wrote: >>> >>>> ah nevermind i sort of understood now. 
>>>> >>>> >>>> On Fri, Nov 15, 2013 at 8:12 AM, Tomi Sarni < >>>> tomi.sarni at cyberlightning.com> wrote: >>>> >>>>> Cristof, you in charge of maintaining the main page? some kind of >>>>> introduction and template could be there to ease up the filling process. >>>>> Well im gona start with this page i guess anyhow? >>>>> >>>>> >>>>> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Advanced_User_Interfaces_in_FI-WARE >>>>> >>>>> versus >>>>> >>>>> >>>>> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Internet_of_Things_%28IoT%29_Services_Enablement_in_FI-WARE >>>>> >>>>> >>>>> On Thu, Nov 14, 2013 at 7:38 PM, Christof Marti wrote: >>>>> >>>>>> Hi everybody >>>>>> >>>>>> I prepared the WP13 roadmap wiki-page [1] >>>>>> It is *already in the public wiki*. Not yet linked from the >>>>>> FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki >>>>>> ?WP13 Integration? page [4]. >>>>>> As soon it has an acceptable state I will link it from the public roadmap >>>>>> page. >>>>>> >>>>>> Next step is now that you (the GE owners) *create the Feature-pages *(and >>>>>> if required UserStories)* for your GE*. >>>>>> *Deadline: Monday 18.11. EOB* >>>>>> >>>>>> To create this pages please follow the instructions on the ?How to >>>>>> upload the full description of backlog entries to the Wiki? tutorial on the >>>>>> main page [2] >>>>>> . >>>>>> WITH ONE EXCEPTION: Use the *roadmap-page *[1] >>>>>> as a starting point to create your Features (instead of the >>>>>> Materializing page) and place the link in the respective *minor >>>>>> version section *of your GE, when you plan to have the feature >>>>>> implemented (see extract below) >>>>>> - Release 3.2 (until January 2014) >>>>>> - Release 3.3 (until April 2014) >>>>>> (We will copy the links to the ?Materializing pages?, as soon the >>>>>> Architecture, Backlog, etc. is ready and publicly available.) >>>>>> >>>>>> >>>>>> To name your entries follow the naming conventions from the ?How to >>>>>> assign identifiers to FI-WARE Backlog entries? tutorial [3]. >>>>>> For WP13 use the FIWARE.Feature.MiWi..[.] >>>>>> form: >>>>>> e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport >>>>>> or FIWARE.Feature.MiWi.Synchronization.SceneAPI >>>>>> etc. >>>>>> You can choose the yourself. When defining a Feature keep >>>>>> in mind, that it has to be implemented within one minor release. >>>>>> >>>>>> For content of your Feature (UserStory) page, copy the Feature >>>>>> (UserStory) template from the tutorial page [2] >>>>>> . 
>>>>>> >>>>>> [1] WP13 roadmap: >>>>>> http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI >>>>>> [2] Tutorial to create backlog entries: >>>>>> https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki >>>>>> [3] Naming conventions: >>>>>> https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) >>>>>> [4] WP13 Integration page: >>>>>> http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration >>>>>> >>>>>> >>>>>> Cheers, >>>>>> Christof >>>>>> ---- >>>>>> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch >>>>>> Institut of Applied Information Technology - InIT >>>>>> Zurich University of Applied Sciences - ZHAW >>>>>> School of Engineering >>>>>> Phone: +41 58 934 70 63 >>>>>> Skype: christof-marti >>>>>> >>>>>> _______________________________________________ >>>>>> Fiware-miwi mailing list >>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>> >>>>>> >>>>> >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> > > > -- > Jarkko Vatjus-Anttila > VP, Technology > Cyberlightning Ltd. > > mobile. +358 405245142 > email. jarkko at cyberlightning.com > > Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. > Get your free evaluation version and buy it now! www.cybersli.de > > www.cyberlightning.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mach at zhaw.ch Fri Nov 15 08:05:15 2013 From: mach at zhaw.ch (Christof Marti) Date: Fri, 15 Nov 2013 08:05:15 +0100 Subject: [Fiware-miwi] Creation of the WP13 roadmap In-Reply-To: <5a10a18285bf48e98d062d230b61e941@SRV-MAIL-001.zhaw.ch> References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> <5a10a18285bf48e98d062d230b61e941@SRV-MAIL-001.zhaw.ch> Message-ID: Hi everybody Link [3] is wrong should be: [3] Naming conventions: https://wiki.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) To edit the page you have to login on forge first: https://forge.fi-ware.eu (login on wiki itself does not work) Afterwards reload the wiki page and you should see the edit button on top. Christof Am 15.11.2013 um 07:59 schrieb Tharanga Wijethilake : > Sorry for multiple mails.. > One more thing. Link 3 is not working. I think We could get in to writing once we have these. > > BR > > ~Tharanga > > > On Fri, Nov 15, 2013 at 8:49 AM, Tharanga Wijethilake wrote: > Hello Christof, > > You have suggested that we should "Use the roadmap-page [1] as a starting point to create your Features". But we , at least we at the cyberlightning, are not able to edit the page. Could you please instruct us on that? Further I am trying to follow the instructions on "Tutorial to create backlog entries". I can not see (Which implies that we do not have access rights) the edit buttons on "Materializing Advanced User Interfaces in FI-WARE" page. > > BR > > ~Tharanga > > > > On Fri, Nov 15, 2013 at 8:20 AM, Tomi Sarni wrote: > ah nevermind i sort of understood now. > > > On Fri, Nov 15, 2013 at 8:12 AM, Tomi Sarni wrote: > Cristof, you in charge of maintaining the main page? 
some kind of introduction and template could be there to ease up the filling process. Well im gona start with this page i guess anyhow? > > https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Advanced_User_Interfaces_in_FI-WARE > > versus > > https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Internet_of_Things_%28IoT%29_Services_Enablement_in_FI-WARE > > > On Thu, Nov 14, 2013 at 7:38 PM, Christof Marti wrote: > Hi everybody > > I prepared the WP13 roadmap wiki-page [1] It is already in the public wiki. Not yet linked from the FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki ?WP13 Integration? page [4]. As soon it has an acceptable state I will link it from the public roadmap page. > > Next step is now that you (the GE owners) create the Feature-pages (and if required UserStories) for your GE. > Deadline: Monday 18.11. EOB > > To create this pages please follow the instructions on the ?How to upload the full description of backlog entries to the Wiki? tutorial on the main page [2]. > WITH ONE EXCEPTION: Use the roadmap-page [1] as a starting point to create your Features (instead of the Materializing page) and place the link in the respective minor version section of your GE, when you plan to have the feature implemented (see extract below) > - Release 3.2 (until January 2014) > - Release 3.3 (until April 2014) > (We will copy the links to the ?Materializing pages?, as soon the Architecture, Backlog, etc. is ready and publicly available.) > > > > To name your entries follow the naming conventions from the ?How to assign identifiers to FI-WARE Backlog entries? tutorial [3]. > For WP13 use the FIWARE.Feature.MiWi..[.] form: > e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport > or FIWARE.Feature.MiWi.Synchronization.SceneAPI > etc. > You can choose the yourself. When defining a Feature keep in mind, that it has to be implemented within one minor release. > > For content of your Feature (UserStory) page, copy the Feature (UserStory) template from the tutorial page [2]. > > [1] WP13 roadmap: http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI > [2] Tutorial to create backlog entries: https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki > [3] Naming conventions: https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) > [4] WP13 Integration page: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration > > > Cheers, > Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > Phone: +41 58 934 70 63 > Skype: christof-marti > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From antti.kokko at adminotech.com Fri Nov 15 08:29:37 2013 From: antti.kokko at adminotech.com (Antti Kokko) Date: Fri, 15 Nov 2013 09:29:37 +0200 Subject: [Fiware-miwi] Creation of the WP13 roadmap In-Reply-To: References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> <5a10a18285bf48e98d062d230b61e941@SRV-MAIL-001.zhaw.ch> Message-ID: Hi, At least I don?t have permission to edit page. I have logged in via forge. Thanks, - Antti On Fri, Nov 15, 2013 at 9:05 AM, Christof Marti wrote: > Hi everybody > > Link [3] is wrong should be: > [3] Naming conventions: > https://wiki.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) > > To edit the page you have to login on forge first: > https://forge.fi-ware.eu (login on wiki itself does not work) > Afterwards reload the wiki page and you should see the edit button on top. > > Christof > > Am 15.11.2013 um 07:59 schrieb Tharanga Wijethilake < > tharanga.wijethilake at cyberlightning.com>: > > Sorry for multiple mails.. > One more thing. Link 3 is not working. I think We could get in to writing > once we have these. > > BR > > ~Tharanga > > > On Fri, Nov 15, 2013 at 8:49 AM, Tharanga Wijethilake < > tharanga.wijethilake at cyberlightning.com> wrote: > >> Hello Christof, >> >> You have suggested that we should "Use the *roadmap-page *[1] >> as a starting point to create your Features". But we , at least we at >> the cyberlightning, are not able to edit the page. Could you please >> instruct us on that? Further I am trying to follow the instructions on >> "Tutorial to create backlog entries". I can not see (Which implies that we >> do not have access rights) the edit buttons on "Materializing Advanced >> User Interfaces in FI-WARE" page. >> >> BR >> >> ~Tharanga >> >> >> >> On Fri, Nov 15, 2013 at 8:20 AM, Tomi Sarni < >> tomi.sarni at cyberlightning.com> wrote: >> >>> ah nevermind i sort of understood now. >>> >>> >>> On Fri, Nov 15, 2013 at 8:12 AM, Tomi Sarni < >>> tomi.sarni at cyberlightning.com> wrote: >>> >>>> Cristof, you in charge of maintaining the main page? some kind of >>>> introduction and template could be there to ease up the filling process. >>>> Well im gona start with this page i guess anyhow? >>>> >>>> >>>> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Advanced_User_Interfaces_in_FI-WARE >>>> >>>> versus >>>> >>>> >>>> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Internet_of_Things_%28IoT%29_Services_Enablement_in_FI-WARE >>>> >>>> >>>> On Thu, Nov 14, 2013 at 7:38 PM, Christof Marti wrote: >>>> >>>>> Hi everybody >>>>> >>>>> I prepared the WP13 roadmap wiki-page [1] >>>>> It is *already in the public wiki*. Not yet linked from the >>>>> FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki >>>>> ?WP13 Integration? page [4]. >>>>> As soon it has an acceptable state I will link it from the public roadmap >>>>> page. >>>>> >>>>> Next step is now that you (the GE owners) *create the Feature-pages *(and >>>>> if required UserStories)* for your GE*. >>>>> *Deadline: Monday 18.11. EOB* >>>>> >>>>> To create this pages please follow the instructions on the ?How to >>>>> upload the full description of backlog entries to the Wiki? tutorial on the >>>>> main page [2] >>>>> . 
>>>>> WITH ONE EXCEPTION: Use the *roadmap-page *[1] >>>>> as a starting point to create your Features (instead of the >>>>> Materializing page) and place the link in the respective *minor >>>>> version section *of your GE, when you plan to have the feature >>>>> implemented (see extract below) >>>>> - Release 3.2 (until January 2014) >>>>> - Release 3.3 (until April 2014) >>>>> (We will copy the links to the ?Materializing pages?, as soon the >>>>> Architecture, Backlog, etc. is ready and publicly available.) >>>>> >>>>> >>>>> To name your entries follow the naming conventions from the ?How to >>>>> assign identifiers to FI-WARE Backlog entries? tutorial [3]. >>>>> For WP13 use the FIWARE.Feature.MiWi..[.] >>>>> form: >>>>> e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport >>>>> or FIWARE.Feature.MiWi.Synchronization.SceneAPI >>>>> etc. >>>>> You can choose the yourself. When defining a Feature keep >>>>> in mind, that it has to be implemented within one minor release. >>>>> >>>>> For content of your Feature (UserStory) page, copy the Feature >>>>> (UserStory) template from the tutorial page [2] >>>>> . >>>>> >>>>> [1] WP13 roadmap: >>>>> http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI >>>>> [2] Tutorial to create backlog entries: >>>>> https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki >>>>> [3] Naming conventions: >>>>> https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) >>>>> [4] WP13 Integration page: >>>>> http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration >>>>> >>>>> >>>>> Cheers, >>>>> Christof >>>>> ---- >>>>> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch >>>>> Institut of Applied Information Technology - InIT >>>>> Zurich University of Applied Sciences - ZHAW >>>>> School of Engineering >>>>> Phone: +41 58 934 70 63 >>>>> Skype: christof-marti >>>>> >>>>> _______________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>> >>>>> >>>> >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> >> > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mach at zhaw.ch Fri Nov 15 08:32:12 2013 From: mach at zhaw.ch (Christof Marti) Date: Fri, 15 Nov 2013 08:32:12 +0100 Subject: [Fiware-miwi] Creation of the WP13 roadmap In-Reply-To: <57d19d5fab464159ada87dd88b79df0a@SRV-MAIL-001.zhaw.ch> References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> <5a10a18285bf48e98d062d230b61e941@SRV-MAIL-001.zhaw.ch> <57d19d5fab464159ada87dd88b79df0a@SRV-MAIL-001.zhaw.ch> Message-ID: Hi Antti To get edit right on the wiki of a project you have to be member of the matching forge project. It looks like you are not a member of the "FI-WARE" forge. You should request to join the FI-WARE project (as you did for FI-WARE MiWi) See here: https://wiki.fi-ware.eu/How_to_join_a_FI-WARE_project_in_FusionForge Christof Am 15.11.2013 um 08:29 schrieb Antti Kokko : > Hi, > > At least I don?t have permission to edit page. I have logged in via forge. 
> > Thanks, > > - Antti > > > On Fri, Nov 15, 2013 at 9:05 AM, Christof Marti wrote: > Hi everybody > > Link [3] is wrong should be: > [3] Naming conventions: https://wiki.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) > > To edit the page you have to login on forge first: https://forge.fi-ware.eu (login on wiki itself does not work) > Afterwards reload the wiki page and you should see the edit button on top. > > Christof > > Am 15.11.2013 um 07:59 schrieb Tharanga Wijethilake : > >> Sorry for multiple mails.. >> One more thing. Link 3 is not working. I think We could get in to writing once we have these. >> >> BR >> >> ~Tharanga >> >> >> On Fri, Nov 15, 2013 at 8:49 AM, Tharanga Wijethilake wrote: >> Hello Christof, >> >> You have suggested that we should "Use the roadmap-page [1] as a starting point to create your Features". But we , at least we at the cyberlightning, are not able to edit the page. Could you please instruct us on that? Further I am trying to follow the instructions on "Tutorial to create backlog entries". I can not see (Which implies that we do not have access rights) the edit buttons on "Materializing Advanced User Interfaces in FI-WARE" page. >> >> BR >> >> ~Tharanga >> >> >> >> On Fri, Nov 15, 2013 at 8:20 AM, Tomi Sarni wrote: >> ah nevermind i sort of understood now. >> >> >> On Fri, Nov 15, 2013 at 8:12 AM, Tomi Sarni wrote: >> Cristof, you in charge of maintaining the main page? some kind of introduction and template could be there to ease up the filling process. Well im gona start with this page i guess anyhow? >> >> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Advanced_User_Interfaces_in_FI-WARE >> >> versus >> >> https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Internet_of_Things_%28IoT%29_Services_Enablement_in_FI-WARE >> >> >> On Thu, Nov 14, 2013 at 7:38 PM, Christof Marti wrote: >> Hi everybody >> >> I prepared the WP13 roadmap wiki-page [1] It is already in the public wiki. Not yet linked from the FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki ?WP13 Integration? page [4]. As soon it has an acceptable state I will link it from the public roadmap page. >> >> Next step is now that you (the GE owners) create the Feature-pages (and if required UserStories) for your GE. >> Deadline: Monday 18.11. EOB >> >> To create this pages please follow the instructions on the ?How to upload the full description of backlog entries to the Wiki? tutorial on the main page [2]. >> WITH ONE EXCEPTION: Use the roadmap-page [1] as a starting point to create your Features (instead of the Materializing page) and place the link in the respective minor version section of your GE, when you plan to have the feature implemented (see extract below) >> - Release 3.2 (until January 2014) >> - Release 3.3 (until April 2014) >> (We will copy the links to the ?Materializing pages?, as soon the Architecture, Backlog, etc. is ready and publicly available.) >> >> >> >> To name your entries follow the naming conventions from the ?How to assign identifiers to FI-WARE Backlog entries? tutorial [3]. >> For WP13 use the FIWARE.Feature.MiWi..[.] form: >> e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport >> or FIWARE.Feature.MiWi.Synchronization.SceneAPI >> etc. >> You can choose the yourself. When defining a Feature keep in mind, that it has to be implemented within one minor release. 
>> >> For content of your Feature (UserStory) page, copy the Feature (UserStory) template from the tutorial page [2]. >> >> [1] WP13 roadmap: http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI >> [2] Tutorial to create backlog entries: https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki >> [3] Naming conventions: https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) >> [4] WP13 Integration page: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration >> >> >> Cheers, >> Christof >> ---- >> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch >> Institut of Applied Information Technology - InIT >> Zurich University of Applied Sciences - ZHAW >> School of Engineering >> Phone: +41 58 934 70 63 >> Skype: christof-marti >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> >> > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni at playsign.net Fri Nov 15 08:45:12 2013 From: toni at playsign.net (Toni Alatalo) Date: Fri, 15 Nov 2013 09:45:12 +0200 Subject: [Fiware-miwi] EntSys: domPerf revisited & net sync work Message-ID: News from the Entity System work, on the 1. DOM integration investigation and 2. Sync GE work fronts: 1. DOM performance revisited: In the xml3d.js code reading that I did last weekend and in the discussions in the Monday meeting, one observation was that the xml3d.js model of DOM integration has an optimization mechanism for attribute access, but the creation of new elements goes directly to the document object. Philipp suggested that we'd use the DOM for the scenes, so we returned to investigate it again this week. In the old naive DOM perf test we did early on there are both element creation and attribute modification tests. Earlier the focus was mostly on attribute modification. In large scenes we can however get a lot of element creation too, at startup or when large changes happen (e.g. almost the whole scene contents is switched to something else on the fly). Earlier we saw that with a fairly large number of 10k elements this was not too horrible (a few seconds). But on Tuesday I tested with 100k(*) elements and the slowdown seemed far worse than linear: it took several minutes. The good news is that the usual DOM usage optimization worked: adding the new elements to an initially unattached parent element, which is added to document.body only as the last step, reduced the time from ~4 min -> 0.86 secs. I took that trick from https://developers.google.com/speed/articles/javascript-dom .
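For concreteness, the batching trick described above looks roughly like the following sketch. It is only an illustration: the 'entity' element name and its 'id' attribute are placeholders rather than the actual scene vocabulary of any GE, and the loop size simply mirrors the 100k-element test.

// Rough sketch of batched element creation (plain browser DOM, no framework assumed).
var root = document.createElement('div');        // stays detached while building
for (var i = 0; i < 100000; i++) {
  var el = document.createElement('entity');     // unknown-to-browser element, as in the test
  el.setAttribute('id', 'e' + i);                // placeholder attribute
  root.appendChild(el);                          // the live document is never touched here, so no reflow work
}
document.body.appendChild(root);                 // single attach at the very end

A DocumentFragment would serve the same purpose as the detached parent element; the essential point is that the live document is touched exactly once, which matches the ~4 min -> 0.86 s difference reported above.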
This optimization is now on in the microbenchmark code at https://github.com/playsign/wex-experiments/tree/master/domBenchmark (the readme doc there is not updated for this change yet though, only the code). This rises the question: when should the 3d scene added to the document? As we can get huge changes to it via net sync or from application logic at any later point as well: for example a city planning app can say: ?switch city area X from future plan A to alternative B? Proposal: Keep the 3d scene tree of DOM elements outside the document by default (in the typical realXtend usage). Add it to the document.body on demand: when the user/dev wants to see it using view-source or a browser debugger, or for some other reason. It is quick to do so not a problem, needs some way to trigger it though. Otherwise it?s off the doc so the manipulation stays efficient. It does not affect the API (much), they?d still be normal DOM elements etc. One thing we have to check is whether DOM mutation observers etc. all work correctly outside the document.body too. Loading the scene from a HTML doc, typical xml3d style, can still work too. If we for example have a getter func for the root node it doesn?t make a diff for application code whether the tree is in the document.body or not. 2. Sync GE: Lasse is now progressing with the Sync GE work and needs some place where to put&get the data from&for the network messages. He figured (on Tue/Wed) that the raw DOM does not suffice as there is no type information for the attributes. He?s now using Chiru-Webclient?s code for that, there is a typed Attribute object (similar to native Tundra). Changing that to whatever becomes the final system is not a big deal, he just needs something now to develop & verify the networking. But the type information is required and it?s not yet clear exactly how we should have that. So this is where we are now with the entity system work ? next week hopefully takes us much further again. Comments & insights & suggestions etc. are welcome. One thing I?ll do soon is to check again xml3d.js?s JS xml3dobject / attribute code to see whether that could work for Sync (& Interface Designer). ~Toni (*) 100k elements may seem like much but there the point is that for a single 3d scene object (== entity) we get multiple xml elements, even 5-10, as there is one for each reX component in the xml3d style (== normal xml style). As TXML it would be even much more as there?s an xml element for every *attribute* too. So the rationale is a big complex scene with 10k entities, avg 10 components each -> 100k xml elements as xml3d (might be a million as txml :) From mcp at tid.es Fri Nov 15 09:38:09 2013 From: mcp at tid.es (Miguel Carrillo) Date: Fri, 15 Nov 2013 09:38:09 +0100 Subject: [Fiware-miwi] New users on FI-WARE project In-Reply-To: References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> <5a10a18285bf48e98d062d230b61e941@SRV-MAIL-001.zhaw.ch> Message-ID: <5285DD71.4010200@tid.es> Dear all, We have received multiple requests to join the project FI-WARE from WP13. As they seemed urgent, I just contact you to let you know that they are accepted. 
Best regards, Miguel El 15/11/2013 8:05, Christof Marti escribi?: Hi everybody Link [3] is wrong should be: [3] Naming conventions: https://wiki.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) To edit the page you have to login on forge first: https://forge.fi-ware.eu (login on wiki itself does not work) Afterwards reload the wiki page and you should see the edit button on top. Christof Am 15.11.2013 um 07:59 schrieb Tharanga Wijethilake >: Sorry for multiple mails.. One more thing. Link 3 is not working. I think We could get in to writing once we have these. BR ~Tharanga On Fri, Nov 15, 2013 at 8:49 AM, Tharanga Wijethilake > wrote: Hello Christof, You have suggested that we should "Use the roadmap-page [1] as a starting point to create your Features". But we , at least we at the cyberlightning, are not able to edit the page. Could you please instruct us on that? Further I am trying to follow the instructions on "Tutorial to create backlog entries". I can not see (Which implies that we do not have access rights) the edit buttons on "Materializing Advanced User Interfaces in FI-WARE" page. BR ~Tharanga On Fri, Nov 15, 2013 at 8:20 AM, Tomi Sarni > wrote: ah nevermind i sort of understood now. On Fri, Nov 15, 2013 at 8:12 AM, Tomi Sarni > wrote: Cristof, you in charge of maintaining the main page? some kind of introduction and template could be there to ease up the filling process. Well im gona start with this page i guess anyhow? https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Advanced_User_Interfaces_in_FI-WARE versus https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/Materializing_Internet_of_Things_%28IoT%29_Services_Enablement_in_FI-WARE On Thu, Nov 14, 2013 at 7:38 PM, Christof Marti > wrote: Hi everybody I prepared the WP13 roadmap wiki-page [1] It is already in the public wiki. Not yet linked from the FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki "WP13 Integration" page [4]. As soon it has an acceptable state I will link it from the public roadmap page. Next step is now that you (the GE owners) create the Feature-pages (and if required UserStories) for your GE. Deadline: Monday 18.11. EOB To create this pages please follow the instructions on the "How to upload the full description of backlog entries to the Wiki" tutorial on the main page [2]. WITH ONE EXCEPTION: Use the roadmap-page [1] as a starting point to create your Features (instead of the Materializing page) and place the link in the respective minor version section of your GE, when you plan to have the feature implemented (see extract below) - Release 3.2 (until January 2014) - Release 3.3 (until April 2014) (We will copy the links to the "Materializing pages", as soon the Architecture, Backlog, etc. is ready and publicly available.) [X] To name your entries follow the naming conventions from the "How to assign identifiers to FI-WARE Backlog entries" tutorial [3]. For WP13 use the FIWARE.Feature.MiWi..[.] form: e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport or FIWARE.Feature.MiWi.Synchronization.SceneAPI etc. You can choose the yourself. When defining a Feature keep in mind, that it has to be implemented within one minor release. For content of your Feature (UserStory) page, copy the Feature (UserStory) template from the tutorial page [2]. 
[1] WP13 roadmap: http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI [2] Tutorial to create backlog entries: https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki [3] Naming conventions: https://www.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) [4] WP13 Integration page: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration Cheers, Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering Phone: +41 58 934 70 63 Skype: christof-marti _______________________________________________ Fiware-miwi mailing list Fiware-miwi at lists.fi-ware.eu https://lists.fi-ware.eu/listinfo/fiware-miwi _______________________________________________ Fiware-miwi mailing list Fiware-miwi at lists.fi-ware.eu https://lists.fi-ware.eu/listinfo/fiware-miwi _______________________________________________ Fiware-miwi mailing list Fiware-miwi at lists.fi-ware.eu https://lists.fi-ware.eu/listinfo/fiware-miwi -- ---------------------------------------------------------------------- _/ _/_/ Miguel Carrillo Pacheco _/ _/ _/ _/ Telefónica Distrito Telefónica _/ _/_/_/ _/ _/ Investigación y Edificio Oeste 1, Planta 9 _/ _/ _/ _/ Desarrollo Ronda de la Comunicación S/N _/ _/_/ 28050 Madrid (Spain) Tel: (+34) 91 483 26 77 e-mail: mcp at tid.es Follow FI-WARE on the net Website: http://www.fi-ware.eu Facebook: http://www.facebook.com/pages/FI-WARE/251366491587242 Twitter: http://twitter.com/Fiware LinkedIn: http://www.linkedin.com/groups/FIWARE-4239932 ---------------------------------------------------------------------- ________________________________ Este mensaje se dirige exclusivamente a su destinatario. Puede consultar nuestra política de envío y recepción de correo electrónico en el enlace situado más abajo. This message is intended exclusively for its addressee. We only send and receive email on the basis of the terms set out at: http://www.tid.es/ES/PAGINAS/disclaimer.aspx -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristian.sons at dfki.de Fri Nov 15 17:28:41 2013 From: kristian.sons at dfki.de (Kristian Sons) Date: Fri, 15 Nov 2013 17:28:41 +0100 Subject: [Fiware-miwi] Announcement: xml3d.js 4.5 released Message-ID: <52864BB9.2050909@dfki.de> Hi, we are happy to announce that we just released version 4.5 of xml3d.js. The biggest news: XML3D is not XML anymore! We have added full support for HTML encoding, so XML3D will work with the many libraries that have no XHTML support. Otherwise, most changes were under the hood, including improvements such as paging to reduce garbage collection, frustum culling, smarter resource caching, mesh format handlers that are able to exploit Web Workers, efficient modification of data values, etc. Also, prototypes were replaced by dataflows, a much easier way to reuse Xflow data flow graphs. Here is the change log: * Full support for HTML encoding, all demos in HTML now * Set data values efficiently using TypedArray (demo, doc) * Override shader attributes on a per-object basis (demo) * Improved error messages * Reuse of Xflow graphs using the new dataflow element (demo, doc) * Many performance improvements, e.g.
Frustum Culling and Paging * Support for multi-touch events * Dynamic near/far clip planes that adapt to scene size * #24 : WebWorker support for MeshLoader plug-ins -demo * #25 : Smarter handling of cached resources Have a nice weekend, The XML3D Team -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realit?t Campus, Geb. D 3 2, Raum 0.77 66123 Saarbr?cken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775--2235 kristian.sons at dfki.de http://www.xml3d.org Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Philipp.Slusallek at dfki.de Sun Nov 17 16:24:31 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sun, 17 Nov 2013 16:24:31 +0100 Subject: [Fiware-miwi] EntSys: domPerf revisited & net sync work In-Reply-To: References: Message-ID: <5288DFAF.7000806@dfki.de> Hi, Great that you have done the experiments and that there does not seem to be any issue with the DOM. A time for <1 sec for 100k elements should allow even for very fast networks :-). And yes, it is an important issue to not add the elements one-by-one but create them separately and then add the entire tree. Another option should be setting "display" to false on the new root element while adding children and finally back to true. Should do the same trick. For switching between model parts the "display" attribute can be used as well to switch in and off parts of the scene. So one could keep these parts in the document without issues. I am not sure about the attribute types. All types should be known, except for the dynamic ECA attributes. Are you talking about those? Best, Philipp Am 15.11.2013 08:45, schrieb Toni Alatalo: > News from Entity System work, on the 1. DOM integration investigation & 2. sync GE work fronts: > > 1. DOM performance revisited: > > In the xml3d.js code reading that I did last weekend & in the discussions in the Monday meet one observation was that in the xml3d.js model of DOM integration there is an optimization mechanism for attribute access. But the creation of new elements goes directly to the document object. Philipp suggested that we?d use the DOM for the scenes so we returned to investigate it again this week. > > In the old naive DOM perf test we did early on there is both element creation & attribute modification tests. Earlier the focus was mostly on the attribute modification. > > In large scenes we can however get a lot of element creation too, at startup or when large changes happen (e.g. almost the whole scene contents is switched to something else on the fly). Earlier we saw that with a quite large number of 10k elements that was not too horrible(some seconds). But on Tue I tested with 100k(*) elements and it seemed to get slower exponentially ? it took several minutes. > > The good news is that the usual DOM usage optimization worked: adding the new elements to a first unattached parent element which is only added to document.body as the last step reduced the time from ~4min -> 0.86secs. I took that trick from https://developers.google.com/speed/articles/javascript-dom . 
We were not sure whether these html-reflow avoiding optimizations affect with the invisible, unknown-to-browser elements but apparently they do. The last step to add the whole subtree of 100k elements to document.body doesn?t cost much: skipping that reduced the time from 0.86s -> 0.80s only. This optimization is now on in the microbenchmark code at https://github.com/playsign/wex-experiments/tree/master/domBenchmark (the readme doc there is not updated for this change yet though, only the code). > > This rises the question: when should the 3d scene added to the document? As we can get huge changes to it via net sync or from application logic at any later point as well: for example a city planning app can say: ?switch city area X from future plan A to alternative B? > > Proposal: Keep the 3d scene tree of DOM elements outside the document by default (in the typical realXtend usage). Add it to the document.body on demand: when the user/dev wants to see it using view-source or a browser debugger, or for some other reason. It is quick to do so not a problem, needs some way to trigger it though. Otherwise it?s off the doc so the manipulation stays efficient. It does not affect the API (much), they?d still be normal DOM elements etc. One thing we have to check is whether DOM mutation observers etc. all work correctly outside the document.body too. Loading the scene from a HTML doc, typical xml3d style, can still work too. If we for example have a getter func for the root node it doesn?t make a diff for application code whether the tree is in the document.body or not. > > 2. Sync GE: > > Lasse is now progressing with the Sync GE work and needs some place where to put&get the data from&for the network messages. He figured (on Tue/Wed) that the raw DOM does not suffice as there is no type information for the attributes. He?s now using Chiru-Webclient?s code for that, there is a typed Attribute object (similar to native Tundra). Changing that to whatever becomes the final system is not a big deal, he just needs something now to develop & verify the networking. But the type information is required and it?s not yet clear exactly how we should have that. > > So this is where we are now with the entity system work ? next week hopefully takes us much further again. Comments & insights & suggestions etc. are welcome. One thing I?ll do soon is to check again xml3d.js?s JS xml3dobject / attribute code to see whether that could work for Sync (& Interface Designer). > > ~Toni > > (*) 100k elements may seem like much but there the point is that for a single 3d scene object (== entity) we get multiple xml elements, even 5-10, as there is one for each reX component in the xml3d style (== normal xml style). As TXML it would be even much more as there?s an xml element for every *attribute* too. So the rationale is a big complex scene with 10k entities, avg 10 components each -> 100k xml elements as xml3d (might be a million as txml :) > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. 
Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From toni at playsign.net Mon Nov 18 11:40:05 2013 From: toni at playsign.net (Toni Alatalo) Date: Mon, 18 Nov 2013 12:40:05 +0200 Subject: [Fiware-miwi] Video: N-Player Pong Message-ID: <85084FC9-D74C-4482-889F-96CB1CBD3C85@playsign.net> Hi, Philipp asked for videos about the demos showed last week, I don?t remember for what purpose but sounded important, I added it as an action point to the minutes then. Here is one, about the n-player networked pong use / test case, pretty much what we showed then too: Multiplayer Pong with WebGL & WebRTC https://www.youtube.com/watch?v=XR05_6kh5Dw We?ll make another (probably by doing screen capture) of the city rendering soon. Here we used a camera to really make it clear how it is networked cross multiple computers. Plan was to use an Android tablet as one of the clients but there was a bug in Chrome which prevented interop of Chrome v. 30 & 31 WebRTC connections (31 is still in beta for Android) which we didn?t figure out in time so this is just PC & laptops now. ~Toni From stefan.lemme at dfki.de Mon Nov 18 17:44:42 2013 From: stefan.lemme at dfki.de (Stefan Lemme) Date: Mon, 18 Nov 2013 17:44:42 +0100 Subject: [Fiware-miwi] Xflow HW Support, AR in the Browser Message-ID: <528A43FA.4080308@dfki.de> Dear all, we just started to elaborate the possible use of WeX contributions for the FIcontent project. In particular we are interested in the Xflow HW support using GLSL and Augmented Reality in the web browser. I know from Philipp and Torsten that you showed some demos regarding this at the last F2F meeting in Oulu. Moreover, I screened the activity report sent around some time ago. But in fact some links in this report were invalid and I was not able to find the results/demos. For instance, the alvar_mobile.js file is missing in the repository. I would like to ask if somebody can direct me to the right places for the two points: * Demo showing Xflow HW support using GLSL and * Augmented Reality in the web browser using the ALVAR compiled with emscripten Thanks in advance! Best, Stefan -- ******************************************************** Stefan Lemme DFKI GmbH Agenten und Simulierte Realit?t Campus, Geb. D 3 4, Raum 0.75 66123 Saarbr?cken Tel.: +49 (0) 681 / 85775 -- 5391 Fax: +49 (0) 681 / 85775 -- 2235 http://www.dfki.de/web ******************************************************** Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH Trippstadter Stra?e 122 D-67663 Kaiserslautern, Germany Geschaeftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973 Steuernummer: 19/673/0060/ ******************************************************** ******************************************************** Stefan Lemme DFKI GmbH Agents and Simulated Reality Campus, Build. 
D 3 4, room 0.75 D-66123 Saarbruecken Germany Phone: +49 (0) 681 / 85775 -- 5391 Fax: +49 (0) 681 / 85775 -- 2235 http://www.dfki.de/web ******************************************************** German Research Center for Artificial Intelligence Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany Management Board: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman) Dr. Walter Olthoff Chairman of the Supervisory Board: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 ******************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From jarkko at cyberlightning.com Mon Nov 18 17:59:39 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Mon, 18 Nov 2013 18:59:39 +0200 Subject: [Fiware-miwi] Xflow HW Support, AR in the Browser In-Reply-To: <528A43FA.4080308@dfki.de> References: <528A43FA.4080308@dfki.de> Message-ID: Stefan, Regarding to the Xflow and GLSL (and WebCL) acceleration, you may want to peek into Cyberlightning code repository here: < https://github.com/Cyberlightning/Cyber-WeX/tree/master/DataflowProcessing/demos > Regarding to Alvar AR library, i understood that missing Alvar dependency is due to not finished contract between CIE and VTT which relates to the Alvar development work. Hence we do not yet have a permission to push the code into repo. - jarkko On Mon, Nov 18, 2013 at 6:44 PM, Stefan Lemme wrote: > > Dear all, > > we just started to elaborate the possible use of WeX contributions for the > FIcontent project. > In particular we are interested in the Xflow HW support using GLSL and > Augmented Reality in the web browser. > > I know from Philipp and Torsten that you showed some demos regarding this > at the last F2F meeting in Oulu. > Moreover, I screened the activity report sent around some time ago. > > But in fact some links in this report were invalid and I was not able to > find the results/demos. > For instance, the alvar_mobile.js file is missing in the repository. > > I would like to ask if somebody can direct me to the right places for the > two points: > > - Demo showing Xflow HW support using GLSL and > - Augmented Reality in the web browser using the ALVAR compiled with > emscripten > > Thanks in advance! > > Best, > Stefan > > > -- > ******************************************************** > Stefan Lemme > > DFKI GmbH > Agenten und Simulierte Realit?t > Campus, Geb. D 3 4, Raum 0.75 > 66123 Saarbr?cken > > Tel.: +49 (0) 681 / 85775 ? 5391 > Fax: +49 (0) 681 / 85775 ? 2235http://www.dfki.de/web > ******************************************************** > Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH > Trippstadter Stra?e 122 > D-67663 Kaiserslautern, Germany > > Geschaeftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973 > Steuernummer: 19/673/0060/ > ******************************************************** > > > > > > ******************************************************** > Stefan Lemme > > DFKI GmbH > Agents and Simulated Reality > Campus, Build. D 3 4, room 0.75 > D-66123 Saarbruecken > Germany > > Phone: +49 (0) 681 / 85775 ? 5391 > Fax: +49 (0) 681 / 85775 ? 
2235 http://www.dfki.de/web > ******************************************************** > German Research Center for Artificial Intelligence > Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany > > Management Board: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman) > Dr. Walter Olthoff > > Chairman of the Supervisory Board: > Prof. Dr. h.c. Hans A. Aukes > > Amtsgericht Kaiserslautern, HRB 2313 > ******************************************************** > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.lemme at dfki.de Mon Nov 18 18:06:20 2013 From: stefan.lemme at dfki.de (Stefan Lemme) Date: Mon, 18 Nov 2013 18:06:20 +0100 Subject: [Fiware-miwi] Xflow HW Support, AR in the Browser In-Reply-To: References: <528A43FA.4080308@dfki.de> Message-ID: <528A490C.2090006@dfki.de> Hi Jarkko, thanks so far! Is there a chance to get the Alvar dependency for internal use only - until the contract is signed? Best, Stefan On 18.11.2013 17:59, Jarkko Vatjus-Anttila wrote: > Stefan, > > Regarding to the Xflow and GLSL (and WebCL) acceleration, you may want > to peek into Cyberlightning code repository here: > > > Regarding to Alvar AR library, i understood that missing Alvar > dependency is due to not finished contract between CIE and VTT which > relates to the Alvar development work. Hence we do not yet have a > permission to push the code into repo. > > - jarkko > > > On Mon, Nov 18, 2013 at 6:44 PM, Stefan Lemme > wrote: > > > Dear all, > > we just started to elaborate the possible use of WeX contributions > for the FIcontent project. > In particular we are interested in the Xflow HW support using GLSL > and Augmented Reality in the web browser. > > I know from Philipp and Torsten that you showed some demos > regarding this at the last F2F meeting in Oulu. > Moreover, I screened the activity report sent around some time ago. > > But in fact some links in this report were invalid and I was not > able to find the results/demos. > For instance, the alvar_mobile.js file is missing in the repository. > > I would like to ask if somebody can direct me to the right places > for the two points: > > * Demo showing Xflow HW support using GLSL and > * Augmented Reality in the web browser using the ALVAR compiled > with emscripten > > Thanks in advance! > > Best, > Stefan > > > -- > ******************************************************** > Stefan Lemme > > DFKI GmbH > Agenten und Simulierte Realit?t > Campus, Geb. D 3 4, Raum 0.75 > 66123 Saarbr?cken > > Tel.:+49 (0) 681 / 85775 ? 5391 > Fax:+49 (0) 681 / 85775 ? 2235 > http://www.dfki.de/web > ******************************************************** > Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH > Trippstadter Stra?e 122 > D-67663 Kaiserslautern, Germany > > Geschaeftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. 
Aukes > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973 > Steuernummer: 19/673/0060/ > ******************************************************** > > > > > > ******************************************************** > Stefan Lemme > > DFKI GmbH > Agents and Simulated Reality > Campus, Build. D 3 4, room 0.75 > D-66123 Saarbruecken > Germany > > Phone: +49 (0) 681 / 85775 ? 5391 > Fax: +49 (0) 681 / 85775 ? 2235 > http://www.dfki.de/web > ******************************************************** > German Research Center for Artificial Intelligence > Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany > > Management Board: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman) > Dr. Walter Olthoff > > Chairman of the Supervisory Board: > Prof. Dr. h.c. Hans A. Aukes > > Amtsgericht Kaiserslautern, HRB 2313 > ******************************************************** > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > > -- > Jarkko Vatjus-Anttila > VP, Technology > Cyberlightning Ltd. > > mobile. +358 405245142 > email. jarkko at cyberlightning.com > > Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. > Get your free evaluation version and buy it now! www.cybersli.de > > > www.cyberlightning.com -- ******************************************************** Stefan Lemme DFKI GmbH Agenten und Simulierte Realit?t Campus, Geb. D 3 4, Raum 0.75 66123 Saarbr?cken Tel.: +49 (0) 681 / 85775 ? 5391 Fax: +49 (0) 681 / 85775 ? 2235 http://www.dfki.de/web ******************************************************** Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH Trippstadter Stra?e 122 D-67663 Kaiserslautern, Germany Geschaeftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973 Steuernummer: 19/673/0060/ ******************************************************** ******************************************************** Stefan Lemme DFKI GmbH Agents and Simulated Reality Campus, Build. D 3 4, room 0.75 D-66123 Saarbruecken Germany Phone: +49 (0) 681 / 85775 ? 5391 Fax: +49 (0) 681 / 85775 ? 2235 http://www.dfki.de/web ******************************************************** German Research Center for Artificial Intelligence Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany Management Board: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman) Dr. Walter Olthoff Chairman of the Supervisory Board: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 ******************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From jarkko at cyberlightning.com Mon Nov 18 18:12:24 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Mon, 18 Nov 2013 19:12:24 +0200 Subject: [Fiware-miwi] Xflow HW Support, AR in the Browser In-Reply-To: <528A490C.2090006@dfki.de> References: <528A43FA.4080308@dfki.de> <528A490C.2090006@dfki.de> Message-ID: Stefan, I would not know the answer for the question. Mr. Kari Autio is the correct one to comment on this one. - jarkko On Mon, Nov 18, 2013 at 7:06 PM, Stefan Lemme wrote: > > Hi Jarkko, > > thanks so far! 
> Is there a chance to get the Alvar dependency for internal use only - > until the contract is signed? > > Best, > Stefan > > > > On 18.11.2013 17:59, Jarkko Vatjus-Anttila wrote: > > Stefan, > > Regarding to the Xflow and GLSL (and WebCL) acceleration, you may want to > peek into Cyberlightning code repository here: < > https://github.com/Cyberlightning/Cyber-WeX/tree/master/DataflowProcessing/demos > > > > Regarding to Alvar AR library, i understood that missing Alvar dependency > is due to not finished contract between CIE and VTT which relates to the > Alvar development work. Hence we do not yet have a permission to push the > code into repo. > > - jarkko > > > On Mon, Nov 18, 2013 at 6:44 PM, Stefan Lemme wrote: > >> >> Dear all, >> >> we just started to elaborate the possible use of WeX contributions for >> the FIcontent project. >> In particular we are interested in the Xflow HW support using GLSL and >> Augmented Reality in the web browser. >> >> I know from Philipp and Torsten that you showed some demos regarding this >> at the last F2F meeting in Oulu. >> Moreover, I screened the activity report sent around some time ago. >> >> But in fact some links in this report were invalid and I was not able to >> find the results/demos. >> For instance, the alvar_mobile.js file is missing in the repository. >> >> I would like to ask if somebody can direct me to the right places for the >> two points: >> >> - Demo showing Xflow HW support using GLSL and >> - Augmented Reality in the web browser using the ALVAR compiled with >> emscripten >> >> Thanks in advance! >> >> Best, >> Stefan >> >> >> -- >> ******************************************************** >> Stefan Lemme >> >> DFKI GmbH >> Agenten und Simulierte Realit?t >> Campus, Geb. D 3 4, Raum 0.75 >> 66123 Saarbr?cken >> >> Tel.: +49 (0) 681 / 85775 ? 5391 >> Fax: +49 (0) 681 / 85775 ? 2235http://www.dfki.de/web >> ******************************************************** >> Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH >> Trippstadter Stra?e 122 >> D-67663 Kaiserslautern, Germany >> >> Geschaeftsf?hrung: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> Dr. Walter Olthoff >> Vorsitzender des Aufsichtsrats: >> Prof. Dr. h.c. Hans A. Aukes >> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> USt-Id.Nr.: DE 148646973 >> Steuernummer: 19/673/0060/ >> ******************************************************** >> >> >> >> >> >> ******************************************************** >> Stefan Lemme >> >> DFKI GmbH >> Agents and Simulated Reality >> Campus, Build. D 3 4, room 0.75 >> D-66123 Saarbruecken >> Germany >> >> Phone: +49 (0) 681 / 85775 ? 5391 >> Fax: +49 (0) 681 / 85775 ? 2235 http://www.dfki.de/web >> ******************************************************** >> German Research Center for Artificial Intelligence >> Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH >> Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany >> >> Management Board: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman) >> Dr. Walter Olthoff >> >> Chairman of the Supervisory Board: >> Prof. Dr. h.c. Hans A. Aukes >> >> Amtsgericht Kaiserslautern, HRB 2313 >> ******************************************************** >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> > > > -- > Jarkko Vatjus-Anttila > VP, Technology > Cyberlightning Ltd. > > mobile. +358 405245142 > email. 
jarkko at cyberlightning.com > > Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. > Get your free evaluation version and buy it now! www.cybersli.de > > www.cyberlightning.com > > > -- > ******************************************************** > Stefan Lemme > > DFKI GmbH > Agenten und Simulierte Realit?t > Campus, Geb. D 3 4, Raum 0.75 > 66123 Saarbr?cken > > Tel.: +49 (0) 681 / 85775 ? 5391 > Fax: +49 (0) 681 / 85775 ? 2235http://www.dfki.de/web > ******************************************************** > Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH > Trippstadter Stra?e 122 > D-67663 Kaiserslautern, Germany > > Geschaeftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973 > Steuernummer: 19/673/0060/ > ******************************************************** > > > > > > ******************************************************** > Stefan Lemme > > DFKI GmbH > Agents and Simulated Reality > Campus, Build. D 3 4, room 0.75 > D-66123 Saarbruecken > Germany > > Phone: +49 (0) 681 / 85775 ? 5391 > Fax: +49 (0) 681 / 85775 ? 2235 http://www.dfki.de/web > ******************************************************** > German Research Center for Artificial Intelligence > Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany > > Management Board: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman) > Dr. Walter Olthoff > > Chairman of the Supervisory Board: > Prof. Dr. h.c. Hans A. Aukes > > Amtsgericht Kaiserslautern, HRB 2313 > ******************************************************** > > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From erno at playsign.net Tue Nov 19 10:10:10 2013 From: erno at playsign.net (Erno Kuusela) Date: Tue, 19 Nov 2013 11:10:10 +0200 Subject: [Fiware-miwi] Oulu bi-weekly f2f meeting Message-ID: <20131119091010.GU47616@ee.oulu.fi> Hello, It's that tuesday again - any volunteers for hosting the 13:00 meeting? Erno From jarkko at cyberlightning.com Tue Nov 19 10:13:44 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Tue, 19 Nov 2013 11:13:44 +0200 Subject: [Fiware-miwi] Oulu bi-weekly f2f meeting In-Reply-To: <20131119091010.GU47616@ee.oulu.fi> References: <20131119091010.GU47616@ee.oulu.fi> Message-ID: Yesterday we talked that due to having the joint meeting last week and now that we have wiki updates and continuation project discussions ongoing, we would skip this meeting this week. - j On Tue, Nov 19, 2013 at 11:10 AM, Erno Kuusela wrote: > Hello, > > It's that tuesday again - any volunteers for hosting the 13:00 meeting? > > Erno > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! 
www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.lemme at dfki.de Tue Nov 19 12:43:11 2013 From: stefan.lemme at dfki.de (Stefan Lemme) Date: Tue, 19 Nov 2013 12:43:11 +0100 Subject: [Fiware-miwi] Xflow HW Support, AR in the Browser In-Reply-To: References: <528A43FA.4080308@dfki.de> <528A490C.2090006@dfki.de> Message-ID: <528B4ECF.3010809@dfki.de> Thanks! Please notify me once the code is available. Best, Stefan On 19.11.2013 08:01, Kari Autio wrote: > Contract should be signed by now. I will check the status of the code. > -kari > > Kari > 040-1676545 > > > 2013/11/18 Jarkko Vatjus-Anttila > > > Stefan, > > I would not know the answer for the question. Mr. Kari Autio is > the correct one to comment on this one. > > - jarkko > > > On Mon, Nov 18, 2013 at 7:06 PM, Stefan Lemme > > wrote: > > > Hi Jarkko, > > thanks so far! > Is there a chance to get the Alvar dependency for internal use > only - until the contract is signed? > > Best, > Stefan > > > > On 18.11.2013 17:59, Jarkko Vatjus-Anttila wrote: >> Stefan, >> >> Regarding to the Xflow and GLSL (and WebCL) acceleration, you >> may want to peek into Cyberlightning code repository here: >> >> >> Regarding to Alvar AR library, i understood that missing >> Alvar dependency is due to not finished contract between CIE >> and VTT which relates to the Alvar development work. Hence we >> do not yet have a permission to push the code into repo. >> >> - jarkko >> >> >> On Mon, Nov 18, 2013 at 6:44 PM, Stefan Lemme >> > wrote: >> >> >> Dear all, >> >> we just started to elaborate the possible use of WeX >> contributions for the FIcontent project. >> In particular we are interested in the Xflow HW support >> using GLSL and Augmented Reality in the web browser. >> >> I know from Philipp and Torsten that you showed some >> demos regarding this at the last F2F meeting in Oulu. >> Moreover, I screened the activity report sent around some >> time ago. >> >> But in fact some links in this report were invalid and I >> was not able to find the results/demos. >> For instance, the alvar_mobile.js file is missing in the >> repository. >> >> I would like to ask if somebody can direct me to the >> right places for the two points: >> >> * Demo showing Xflow HW support using GLSL and >> * Augmented Reality in the web browser using the ALVAR >> compiled with emscripten >> >> Thanks in advance! >> >> Best, >> Stefan >> >> >> -- >> ******************************************************** >> Stefan Lemme >> >> DFKI GmbH >> Agenten und Simulierte Realit?t >> Campus, Geb. D 3 4, Raum 0.75 >> 66123 Saarbr?cken >> >> Tel.:+49 (0) 681 / 85775 ? 5391 >> Fax:+49 (0) 681 / 85775 ? 2235 >> http://www.dfki.de/web >> ******************************************************** >> Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH >> Trippstadter Stra?e 122 >> D-67663 Kaiserslautern, Germany >> >> Geschaeftsf?hrung: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> Dr. Walter Olthoff >> Vorsitzender des Aufsichtsrats: >> Prof. Dr. h.c. Hans A. Aukes >> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> USt-Id.Nr.: DE 148646973 >> Steuernummer: 19/673/0060/ >> ******************************************************** >> >> >> >> >> >> ******************************************************** >> Stefan Lemme >> >> DFKI GmbH >> Agents and Simulated Reality >> Campus, Build. D 3 4, room 0.75 >> D-66123 Saarbruecken >> Germany >> >> Phone: +49 (0) 681 / 85775 ? 
5391 >> Fax: +49 (0) 681 / 85775 ? 2235 >> http://www.dfki.de/web >> ******************************************************** >> German Research Center for Artificial Intelligence >> Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH >> Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany >> >> Management Board: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman) >> Dr. Walter Olthoff >> >> Chairman of the Supervisory Board: >> Prof. Dr. h.c. Hans A. Aukes >> >> Amtsgericht Kaiserslautern, HRB 2313 >> ******************************************************** >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> >> >> >> -- >> Jarkko Vatjus-Anttila >> VP, Technology >> Cyberlightning Ltd. >> >> mobile. +358 405245142 >> email. jarkko at cyberlightning.com >> >> >> Enrich Your Presentations! New CyberSlide 2.0 released on >> February 27th. >> Get your free evaluation version and buy it now! >> www.cybersli.de >> >> www.cyberlightning.com > > -- > ******************************************************** > Stefan Lemme > > DFKI GmbH > Agenten und Simulierte Realit?t > Campus, Geb. D 3 4, Raum 0.75 > 66123 Saarbr?cken > > Tel.:+49 (0) 681 / 85775 ? 5391 > Fax:+49 (0) 681 / 85775 ? 2235 > http://www.dfki.de/web > ******************************************************** > Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH > Trippstadter Stra?e 122 > D-67663 Kaiserslautern, Germany > > Geschaeftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973 > Steuernummer: 19/673/0060/ > ******************************************************** > > > > > > ******************************************************** > Stefan Lemme > > DFKI GmbH > Agents and Simulated Reality > Campus, Build. D 3 4, room 0.75 > D-66123 Saarbruecken > Germany > > Phone: +49 (0) 681 / 85775 ? 5391 > Fax: +49 (0) 681 / 85775 ? 2235 > http://www.dfki.de/web > ******************************************************** > German Research Center for Artificial Intelligence > Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany > > Management Board: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman) > Dr. Walter Olthoff > > Chairman of the Supervisory Board: > Prof. Dr. h.c. Hans A. Aukes > > Amtsgericht Kaiserslautern, HRB 2313 > ******************************************************** > > > > > -- > Jarkko Vatjus-Anttila > VP, Technology > Cyberlightning Ltd. > > mobile. +358 405245142 > email. jarkko at cyberlightning.com > > Enrich Your Presentations! New CyberSlide 2.0 released on February > 27th. > Get your free evaluation version and buy it now! www.cybersli.de > > > www.cyberlightning.com > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -- ******************************************************** Stefan Lemme DFKI GmbH Agenten und Simulierte Realit?t Campus, Geb. D 3 4, Raum 0.75 66123 Saarbr?cken Tel.: +49 (0) 681 / 85775 ? 5391 Fax: +49 (0) 681 / 85775 ? 
2235 http://www.dfki.de/web ******************************************************** Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH Trippstadter Stra?e 122 D-67663 Kaiserslautern, Germany Geschaeftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973 Steuernummer: 19/673/0060/ ******************************************************** ******************************************************** Stefan Lemme DFKI GmbH Agents and Simulated Reality Campus, Build. D 3 4, room 0.75 D-66123 Saarbruecken Germany Phone: +49 (0) 681 / 85775 ? 5391 Fax: +49 (0) 681 / 85775 ? 2235 http://www.dfki.de/web ******************************************************** German Research Center for Artificial Intelligence Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany Management Board: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman) Dr. Walter Olthoff Chairman of the Supervisory Board: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 ******************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From jarkko at cyberlightning.com Tue Nov 19 13:14:26 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Tue, 19 Nov 2013 14:14:26 +0200 Subject: [Fiware-miwi] Updates to the Open specifications Message-ID: A general question: We are having incoming updates to the Open Specification documents, so can we do that freely whenever we see fit? I just want to clarify that we are not accidentally manipulating documents when someone is generating those PDF's or similar. I take we can freely edit at this point? -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mach at zhaw.ch Tue Nov 19 14:36:57 2013 From: mach at zhaw.ch (Christof Marti) Date: Tue, 19 Nov 2013 14:36:57 +0100 Subject: [Fiware-miwi] Updates to the Open specifications In-Reply-To: References: Message-ID: Hi Jarkko At the moment you can still do updates on the documents. I will send you an email, before we need to freeze it and/or we create the static documents. Christof Am 19.11.2013 um 13:14 schrieb Jarkko Vatjus-Anttila : > A general question: We are having incoming updates to the Open Specification documents, so can we do that freely whenever we see fit? I just want to clarify that we are not accidentally manipulating documents when someone is generating those PDF's or similar. I take we can freely edit at this point? > > -- > Jarkko Vatjus-Anttila > VP, Technology > Cyberlightning Ltd. > > mobile. +358 405245142 > email. jarkko at cyberlightning.com > > Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. > Get your free evaluation version and buy it now! www.cybersli.de > > www.cyberlightning.com > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mach at zhaw.ch Tue Nov 19 15:38:31 2013 From: mach at zhaw.ch (Christof Marti) Date: Tue, 19 Nov 2013 15:38:31 +0100 Subject: [Fiware-miwi] Creation of the WP13 roadmap In-Reply-To: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> Message-ID: <84A0201F-502E-480C-A60A-2EA8E82546B8@zhaw.ch> Hi Thanks to all who provided their feature descriptions in the roadmap document yesterday on time. There are still some GEs missing. Would be great if those could be available before tomorrows meeting. I will cover the following points at the meeting tomorrow, but you can/should already start working on them before. As a preparation for tomorrow, check that your features for the releases are aligned with the other GEs, to make sure all dependencies for your features are available in the same release. Check that your features cover and are aligned with the epics from the materializing page (but don?t yet add the features to this page, we will add the GEs and features to the materializing page later) The descriptions in the first part of the document (above the table) should contain textual descriptions of the new functionality and not links to the feature pages. (see example here: http://wiki.fi-ware.eu/Roadmap_of_Data/Context_Management . Typically you can use the goal description of the feature, maybe with some minor rewording (instead of ?should? or ?will provide? use ?does? and ?is providing?). Thanks, See you tomorrow. Cheers, Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering Phone: +41 58 934 70 63 Skype: christof-marti Am 14.11.2013 um 18:38 schrieb Christof Marti : > Hi everybody > > I prepared the WP13 roadmap wiki-page [1] It is already in the public wiki. Not yet linked from the FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki ?WP13 Integration? page [4]. As soon it has an acceptable state I will link it from the public roadmap page. > > Next step is now that you (the GE owners) create the Feature-pages (and if required UserStories) for your GE. > Deadline: Monday 18.11. EOB > > To create this pages please follow the instructions on the ?How to upload the full description of backlog entries to the Wiki? tutorial on the main page [2]. > WITH ONE EXCEPTION: Use the roadmap-page [1] as a starting point to create your Features (instead of the Materializing page) and place the link in the respective minor version section of your GE, when you plan to have the feature implemented (see extract below) > - Release 3.2 (until January 2014) > - Release 3.3 (until April 2014) > (We will copy the links to the ?Materializing pages?, as soon the Architecture, Backlog, etc. is ready and publicly available.) > > > > To name your entries follow the naming conventions from the ?How to assign identifiers to FI-WARE Backlog entries? tutorial [3]. > For WP13 use the FIWARE.Feature.MiWi..[.] form: > e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport > or FIWARE.Feature.MiWi.Synchronization.SceneAPI > etc. > You can choose the yourself. When defining a Feature keep in mind, that it has to be implemented within one minor release. > > For content of your Feature (UserStory) page, copy the Feature (UserStory) template from the tutorial page [2]. 
> > [1] WP13 roadmap: http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI > [2] Tutorial to create backlog entries: https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki > [3] Naming conventions: https://wiki.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) > [4] WP13 Integration page: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration > > > Cheers, > Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > Phone: +41 58 934 70 63 > Skype: christof-marti -------------- next part -------------- An HTML attachment was scrubbed... URL: From jarkko at cyberlightning.com Tue Nov 19 15:52:01 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Tue, 19 Nov 2013 16:52:01 +0200 Subject: [Fiware-miwi] Demo videos from Oulu meeting Message-ID: Hello all, Videos related to Cyberlighting GE's what we presented in Oulu are behind this link: https://www.dropbox.com/sh/jnafhxactii03hd/PlIvXHOiXl/FI-ware?lst -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Philipp.Slusallek at dfki.de Wed Nov 20 07:27:32 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 20 Nov 2013 07:27:32 +0100 Subject: [Fiware-miwi] Meeting today Message-ID: <528C5654.4090109@dfki.de> Hi all, I am traveling this morning in the train and the connection will most likely not be very stable. Since we talked already yesterday, I will not call in unless you think this is necessary. Best, Philipp -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From tharanga.wijethilake at cyberlightning.com Wed Nov 20 07:51:25 2013 From: tharanga.wijethilake at cyberlightning.com (Tharanga Wijethilake) Date: Wed, 20 Nov 2013 08:51:25 +0200 Subject: [Fiware-miwi] Creation of the WP13 roadmap In-Reply-To: <84A0201F-502E-480C-A60A-2EA8E82546B8@zhaw.ch> References: <466F5547-A5F6-416D-8C96-D4C88064B6FA@zhaw.ch> <84A0201F-502E-480C-A60A-2EA8E82546B8@zhaw.ch> Message-ID: Some Typos had to be fixed in 2D3D capture....They are fixed now... ~Tharanga On Tue, Nov 19, 2013 at 4:38 PM, Christof Marti wrote: > Hi > > Thanks to all who provided their feature descriptions in the roadmap > document yesterday > on time. > *There are still some GEs missing. 
Would be great if those could be > available before tomorrows meeting.* > > I will cover the following points at the meeting tomorrow, but you > can/should already start working on them before. > > - As a preparation for tomorrow, check that your features for the > releases are aligned with the other GEs, to make sure all dependencies for > your features are available in the same release. > - Check that your features cover and are aligned with the epics from the > materializing page > > (but don?t yet add the features to this page, we will add the GEs and > features to the materializing page later) > - The descriptions in the first part of the document (above the table) > should contain textual descriptions of the new functionality and not links > to the feature pages. (see example here: > http://wiki.fi-ware.eu/Roadmap_of_Data/Context_Management . Typically > you can use the goal description of the feature, maybe with some minor > rewording (instead of ?should? or ?will provide? use ?does? and ?is > providing?). > > > Thanks, > See you tomorrow. > > Cheers, Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > Phone: +41 58 934 70 63 > Skype: christof-marti > > > > Am 14.11.2013 um 18:38 schrieb Christof Marti : > > Hi everybody > > I prepared the WP13 roadmap wiki-page [1] > It is *already in the public wiki*. Not yet linked from the > FIWARE-Roadmap page, but you can access it from the FI-WARE private wiki > ?WP13 Integration? page [4]. > As soon it has an acceptable state I will link it from the public roadmap > page. > > Next step is now that you (the GE owners) *create the Feature-pages *(and > if required UserStories)* for your GE*. > *Deadline: Monday 18.11. EOB* > > To create this pages please follow the instructions on the ?How to upload > the full description of backlog entries to the Wiki? tutorial on the main > page [2] > . > WITH ONE EXCEPTION: Use the *roadmap-page *[1] > as a starting point to create your Features (instead of the > Materializing page) and place the link in the respective *minor version > section *of your GE, when you plan to have the feature implemented (see > extract below) > - Release 3.2 (until January 2014) > - Release 3.3 (until April 2014) > (We will copy the links to the ?Materializing pages?, as soon the > Architecture, Backlog, etc. is ready and publicly available.) > > > > To name your entries follow the naming conventions from the ?How to assign > identifiers to FI-WARE Backlog entries? tutorial [3]. > For WP13 use the FIWARE.Feature.MiWi..[.] form: > e.g. FIWARE.Feature.MiWi.3D-UI.DataflowProcessing.XFlowSupport > or FIWARE.Feature.MiWi.Synchronization.SceneAPI > etc. > You can choose the yourself. When defining a Feature keep in > mind, that it has to be implemented within one minor release. > > For content of your Feature (UserStory) page, copy the Feature (UserStory) > template from the tutorial page [2] > . 
> > [1] WP13 roadmap: > http://wiki.fi-ware.eu/Roadmap_of_Advanced_Middleware_and_Web_UI > [2] Tutorial to create backlog entries: > https://wiki.fi-ware.eu/How_to_upload_the_full_description_of_backlog_entries_to_the_Wiki > [3] Naming conventions: > https://wiki.fi-ware.eu/How_to_assign_identifiers_to_FI-WARE_Backlog_entries_(convention_to_follow) > [4] WP13 Integration page: > http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration > > > Cheers, > Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > Phone: +41 58 934 70 63 > Skype: christof-marti > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kari.autio at gmail.com Tue Nov 19 08:01:02 2013 From: kari.autio at gmail.com (Kari Autio) Date: Tue, 19 Nov 2013 09:01:02 +0200 Subject: [Fiware-miwi] Xflow HW Support, AR in the Browser In-Reply-To: References: <528A43FA.4080308@dfki.de> <528A490C.2090006@dfki.de> Message-ID: Contract should be signed by now. I will check the status of the code. -kari Kari 040-1676545 2013/11/18 Jarkko Vatjus-Anttila > Stefan, > > I would not know the answer for the question. Mr. Kari Autio is the > correct one to comment on this one. > > - jarkko > > > On Mon, Nov 18, 2013 at 7:06 PM, Stefan Lemme wrote: > >> >> Hi Jarkko, >> >> thanks so far! >> Is there a chance to get the Alvar dependency for internal use only - >> until the contract is signed? >> >> Best, >> Stefan >> >> >> >> On 18.11.2013 17:59, Jarkko Vatjus-Anttila wrote: >> >> Stefan, >> >> Regarding to the Xflow and GLSL (and WebCL) acceleration, you may want >> to peek into Cyberlightning code repository here: < >> https://github.com/Cyberlightning/Cyber-WeX/tree/master/DataflowProcessing/demos >> > >> >> Regarding to Alvar AR library, i understood that missing Alvar >> dependency is due to not finished contract between CIE and VTT which >> relates to the Alvar development work. Hence we do not yet have a >> permission to push the code into repo. >> >> - jarkko >> >> >> On Mon, Nov 18, 2013 at 6:44 PM, Stefan Lemme wrote: >> >>> >>> Dear all, >>> >>> we just started to elaborate the possible use of WeX contributions for >>> the FIcontent project. >>> In particular we are interested in the Xflow HW support using GLSL and >>> Augmented Reality in the web browser. >>> >>> I know from Philipp and Torsten that you showed some demos regarding >>> this at the last F2F meeting in Oulu. >>> Moreover, I screened the activity report sent around some time ago. >>> >>> But in fact some links in this report were invalid and I was not able to >>> find the results/demos. >>> For instance, the alvar_mobile.js file is missing in the repository. >>> >>> I would like to ask if somebody can direct me to the right places for >>> the two points: >>> >>> - Demo showing Xflow HW support using GLSL and >>> - Augmented Reality in the web browser using the ALVAR compiled >>> with emscripten >>> >>> Thanks in advance! >>> >>> Best, >>> Stefan >>> >>> >>> -- >>> ******************************************************** >>> Stefan Lemme >>> >>> DFKI GmbH >>> Agenten und Simulierte Realit?t >>> Campus, Geb. D 3 4, Raum 0.75 >>> 66123 Saarbr?cken >>> >>> Tel.: +49 (0) 681 / 85775 ? 
5391 >>> Fax: +49 (0) 681 / 85775 - 2235 >>> http://www.dfki.de/web -------------- next part -------------- An HTML attachment was scrubbed... URL: From arto.heikkinen at cie.fi Wed Nov 20 08:36:44 2013 From: arto.heikkinen at cie.fi (Arto Heikkinen) Date: Wed, 20 Nov 2013 09:36:44 +0200 Subject: [Fiware-miwi] POI and AR demo videos Message-ID: <528C668C.9020108@cie.fi> Hi all, Demo videos related to the POI and AR GEs can be found behind the following links: POI: http://www.youtube.com/watch?v=7jbXca_a_rY&feature=youtu.be AR: http://www.youtube.com/watch?v=Xhpt1sr5Akw&feature=youtu.be Br, Arto -- Arto Heikkinen, Doctoral student, M.Sc. (Eng.) Center for Internet Excellence (CIE) P.O. BOX 1001, FIN-90014 University of Oulu, Finland e-mail: arto.heikkinen at cie.fi, http://www.cie.fi From mach at zhaw.ch Wed Nov 20 10:01:47 2013 From: mach at zhaw.ch (Christof Marti) Date: Wed, 20 Nov 2013 10:01:47 +0100 Subject: [Fiware-miwi] todays meeting Message-ID: Hi everybody Here is the link to today's agenda/minutes: https://docs.google.com/document/d/1q31jbPynNwqZsYwrFpmqXxopnrqsLfzSKOHj_s6V2II/edit Cheers, Christof From toni at playsign.net Thu Nov 21 06:12:45 2013 From: toni at playsign.net (Toni Alatalo) Date: Thu, 21 Nov 2013 07:12:45 +0200 Subject: [Fiware-miwi] SceneAPI features: original intent of the EPIC vs. current plans? Message-ID: <1819400B-5329-414E-BCE8-F1C7890600DD@playsign.net> Hi again, I am just wondering about Lasse's SceneAPI plans - the features that are in the roadmap now, what was planned for the GE originally, and especially how we learned at the F2F meeting that we had actually misunderstood the intent of the original EPIC in the call earlier. We discussed the matter some days ago (late last week?) on IRC with at least Lasse, and I think we were and are on the same page now about the original intent as Philipp described it at the F2F. It seems to me that the current feature descriptions reflect that new understanding, so they are perhaps somewhat adapted towards that goal? However, they still stick with the original plan of implementing a server side REST HTTP API. I'm not sure that is actually the way to go with this GE at all, for two reasons: 1) Protocol: The call is for low latency and high performance, quoting the EPIC: "Such updates to the UI have the user in the loop and thus have to happen in real-time with low latency. Because they may also involve the transfer of large amounts of changed scene data, a high-performance service implementation is critical for many applications." That is what the Sync GE is for and (possibly) not what HTTP delivers. http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/FIWARE.Epic.AdvUI.AdvWebUI.SceneAPI 2) Direction of the connection: I think for the scenario with e.g. a simulation service we typically do the connection the other way around: expect the outside simulation service to provide a way for the VW system to connect to it to get data.
For example, to integrate a weather forecast visualisation into a scene we'd by default make an app, as in either a server or client side script, which connects to the weather simulation service to get the raw data and implements the visualization itself (for example, it includes a set of shaders for drawing heatmaps, wind directions & strengths and different types of rain, and feeds those with the data). That's similar to how we integrate POI and GIS, but with realtime streaming of the simulation data as described in 1), instead of the HTTP transactions of static data which is the case with POI & GIS. So the question is: is the proposed HTTP access to a Tundra scene necessary at all? These features: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/FIWARE.Feature.MiWi.Synchronization.SceneAPI.SceneAPIQueries http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/FIWARE.Feature.MiWi.Synchronization.SceneAPI.SceneAPIManipulation I understand that at this point it is perhaps simpler to go with what the plan has been so far - that's how it was described in the plan made in spring and later in the arch diagram reviewed in the July Winterthur meeting etc. Also it can well be that a REST API to Tundra is useful to enable all sorts of simple clients, ranging from custom apps and scripts (e.g. some bot-like server, or even a modeling app like Blender, or some web page). So the HTTP SceneAPI idea can be good, but not for the EPIC where it originated from (I think due to our misunderstanding, we took the "web service" in one sentence there too literally to mean HTTP). However, if the Sync GE is what actually provides what the envisioned simulation service connectivity would need, then it'd be worth a check how the scenario would work with that and what's possibly missing .. and that could hence possibly be the actual work that would be needed for this EPIC. For example, the scenario in 2) we can not currently implement on the server side, as a Tundra server can not open Tundra (i.e. Sync GE) connections to other servers. In the web client it would work, it'd just have multiple websockets open to both the world & the simulation servers (and also for native Tundra there is a branch for client side multiconnection support, demonstrated with tabbed browsing & a very cool Portal functionality where a portal in one scene shows a live connection to another server). Also, it would not be very easy for the simulation service to provide a Sync GE service endpoint, as the protocol is implemented in Tundra's protocol module which currently only works inside the framework there. So one activity could be refactoring the kNet-using TundraProtocolModule into something that could also be used as a simple lib to add Sync GE support to any service (a minimal-deps C(++) lib is easy to wrap as a node.js plugin or in some .NET or Python web service framework, for example). We'll actually meet with Lasse and some others in the afternoon (for the continuation project plans) and probably discuss the on-going Sync GE + entity system + 3DUI integration work again too, so can probably talk about this SceneAPI as well. If Philipp or Christof or someone has insight on how to best go about this now, please tell so we can consider it. Easiest, I think, is to go with the plan - Ludocraft has already invested in planning etc. for it, so I don't know if it's even possible to change anymore.
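(As an aside, to make the comparison concrete: below is a rough sketch of the two access styles being discussed. All endpoint paths, message types and field names in it are made-up assumptions for illustration, not the actual SceneAPI or Sync GE interfaces.)

// a) transactional REST-style access to one scene entity (hypothetical endpoint and JSON shape)
function setEntityPosition(base, id, pos) {
  var xhr = new XMLHttpRequest();
  xhr.open('PUT', base + '/entities/' + id + '/components/Placeable', true);
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.onload = function () {
    if (xhr.status !== 200) { console.error('update failed: ' + xhr.status); }
  };
  xhr.send(JSON.stringify({ position: pos })); // e.g. { x: 10, y: 0, z: 5 }
}

// b) streaming access over a Sync GE style websocket (hypothetical URL and message format)
function listenToScene(url, onUpdate) {
  var ws = new WebSocket(url); // e.g. 'ws://scene-server.example:2345/socket'
  ws.onmessage = function (ev) {
    var msg = JSON.parse(ev.data);       // assumed JSON-encoded scene delta
    if (msg.type === 'EntityUpdated') {  // assumed message type
      onUpdate(msg.entity, msg.attributes);
    }
  };
  return ws;
}

The first style is easy to call from anywhere but is transactional per request; the second is what the continuous, low-latency update case in the EPIC seems to call for.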
However, as AFAIK no implementation work has been done for it yet (apart from the websocket implementation on the server, and the same lib would be used for plain HTTP too), it may still be possible to change plans if that's beneficial. The Sync GE is getting there now in any case, and if this EPIC is also better delivered with it, then perhaps it's good that Lasse could focus even more on it and ignore the HTTP part? (which is simple to add later if we need it for other reasons; also at Playsign we've implemented HTTP endpoints in Tundra earlier, for simple still-image based cloud rendering needs, to control the camera from a browser GUI etc). ~Toni P.S. very cool to see this on the table - it can be related too, if this adds the capability for Tundra servers to connect with each other, which is part of the functionality noted missing for the scenario in point 2) above - http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/FIWARE.Feature.MiWi.Synchronization.Synchronization.DistributedSceneSupport From toni at playsign.net Thu Nov 21 06:31:27 2013 From: toni at playsign.net (Toni Alatalo) Date: Thu, 21 Nov 2013 07:31:27 +0200 Subject: [Fiware-miwi] SceneAPI features: original intent of the EPIC vs. current plans? In-Reply-To: <1819400B-5329-414E-BCE8-F1C7890600DD@playsign.net> References: <1819400B-5329-414E-BCE8-F1C7890600DD@playsign.net> Message-ID: On 21 Nov 2013, at 07:12, Toni Alatalo wrote: just a little afterthought here: > 2) Direction of the connection: I think for the scenario with e.g. a simulation service we typically do the connection the other way around: expect the outside simulation service to provide a way for the VW system to connect to it to get data. > For example, to integrate a weather forecast visualisation into a scene we'd by default make an app, as in either a server or client side script, which connects to the weather simulation service to get the In some cases it can be perfectly fine to do it vice versa: have the simulation server connect to the main scene / sync server to feed in the simulation updates. This is something the current Sync GE tech and plans would already support reasonably well. For example, for the Oulu city service we could run a separate traffic simulation server (possibly based on real sensor analysis - the university actually already has those traffic monitoring sensors in use in the city, AFAIK) as a Tundra client, for example as a native client where the simulation would be in a plugin. That simulation-server Tundra-client would just connect to the scene server and create and update the traffic entities, and that'd all work fine with the existing tech. We've actually thought of these kinds of things often, for example Chiru planned testing separate physics Tundra servers this way, and we have experience of this kind of setup with XMPP at Playsign (not using Tundra in that case). Also in that scenario, however, it might easily be beneficial to have a small standalone protocol impl lib, so that the simulation server would not need all of the Tundra framework (with Qt and Ogre) just to connect and feed data. On the Javascript side I think we'll get that with the current plans (it is a good idea to separate the WebGL & networking parts enough so that making non-WebGL clients is simple and normal). But servers typically don't want to run a browser environment to connect to another server :) (possibly the mocked browser env in node.js might work well enough for the Sync GE though, if it provides websockets the same way). Ludo's current plan of enabling this with HTTP makes a lot of sense as that's easy from any environment.
It may only miss the realtime / streaming nature called for in the EPIC - that depends on the exact nature of the data & updates and how the visualizations are made etc. > ~Toni same. From lasse.oorni at ludocraft.com Thu Nov 21 09:54:09 2013 From: lasse.oorni at ludocraft.com (Lasse Öörni) Date: Thu, 21 Nov 2013 10:54:09 +0200 Subject: [Fiware-miwi] SceneAPI features: original intent of the EPIC vs. current plans? In-Reply-To: References: <1819400B-5329-414E-BCE8-F1C7890600DD@playsign.net> Message-ID: > On 21 Nov 2013, at 07:12, Toni Alatalo wrote: > Ludo's current plan of enabling this with HTTP makes a lot of sense as > that's easy from any environment. It may only miss the realtime / streaming nature called for in the EPIC - that depends on the exact nature > of the data & updates and how the visualizations are made etc. > >> ~Toni Hi, the tasks in the roadmap indeed use the "old" design, i.e. a server implementation that could serve read/write access to the scene via REST. I believe in the early telcos the lack of differentiation between SceneAPI and Synchronization in the Open Call submission was criticized, and therefore the REST API aspect was added to it. If there is a good concrete plan for how it should be done instead, it's not at all too late to change (as no implementation has begun), and if it's administratively OK - for example, our architecture pictures now include the REST scene API as described. -- Lasse Öörni Game Programmer LudoCraft Ltd. From toni at playsign.net Thu Nov 21 22:24:41 2013 From: toni at playsign.net (Toni Alatalo) Date: Thu, 21 Nov 2013 23:24:41 +0200 Subject: [Fiware-miwi] SceneAPI features: original intent of the EPIC vs. current plans? In-Reply-To: References: <1819400B-5329-414E-BCE8-F1C7890600DD@playsign.net> Message-ID: <4CEA5C22-038C-46C2-B00E-1E6699866821@playsign.net> On 21 Nov 2013, at 10:54, Lasse Öörni wrote: > If there is a good concrete plan for how it should be done instead, it's not > at all too late to change (as no implementation has begun), and if > it's administratively OK - for example, our architecture pictures now > include the REST scene API as described. I talked today with a guy who is working on the vw / visualization front in the university project with the traffic sensors in the city. We agreed preliminarily that we could use their data and system as a use case for this SceneAPI biz on the FI-WARE side - if you and others here find it a good idea. Their data currently updates once per hour, though, so HTTP would work :) But he had already proposed as a next step a visualisation where traffic is simulated / visualized as actual individual cars. We could have that simulation service as a user of the scene api / sync biz to control the cars, so we'd get a streaming nature for the data and much harder reqs for the usage (in the spirit of the EPIC). I still have to confirm with the prof who's leading that project that this all would be ok. We could do it so that the actual implementation of the visualization and even the integration comes from the uni, and FI-WARE (Ludocraft) only provides the API. I can use some of my university time for this, as the integration of the city model and the traffic data is good to get there. This is not a must and I don't mean to overcomplicate things, but I just figured that a real use case would help to make the *concrete plan* that you called for above.
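To sketch how the traffic case could use the streaming route, here is a minimal, hypothetical example of a simulation service pushing car updates to the scene server over a websocket. The server URL, message type and attribute names are made up for illustration and are not the actual Sync GE wire format; it assumes a node.js environment with the 'ws' package installed.

var WebSocket = require('ws');
var ws = new WebSocket('ws://city-scene-server.example:2345/socket'); // assumed URL

ws.on('open', function () {
  // push a position update for one simulated car every 100 ms
  setInterval(function () {
    ws.send(JSON.stringify({
      type: 'EntityUpdate',            // assumed message type
      entity: 'car-17',                // assumed entity id
      component: 'Placeable',
      attributes: { position: nextPositionFromSimulation() }
    }));
  }, 100);
});

function nextPositionFromSimulation() {
  // stand-in for the real traffic simulation step
  return { x: Math.random() * 100, y: 0, z: Math.random() * 100 };
}

The same structure would also cover the current hourly data; only the update interval and the payload would change.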
The experience with the quick and simple POI & 3DUI integration (completed yesterday) was great, I?ll post about it tomorrow hopefully (the POI guys checked the demo today on ok?d it as this first minimal step). So I hope more integrations and usage of the GEs takes us well forward. > Lasse ??rni Cheers, ~Toni From jarkko at cyberlightning.com Fri Nov 22 11:50:22 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Fri, 22 Nov 2013 12:50:22 +0200 Subject: [Fiware-miwi] Missing GE definitions in the tracker Message-ID: When feeding new data into the tracker, the GE list does not contain our GEs, but instead only "General" and "middleware". Shall we pick either one of those, or should someone define collectively our GEs somewhere? I trust we cannot create new GEs for this list, am I right? -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mach at zhaw.ch Fri Nov 22 12:02:15 2013 From: mach at zhaw.ch (Christof Marti) Date: Fri, 22 Nov 2013 12:02:15 +0100 Subject: [Fiware-miwi] Missing GE definitions in the tracker In-Reply-To: <5ace166292a14313947a8f5432e0fcc9@SRV-MAIL-001.zhaw.ch> References: <5ace166292a14313947a8f5432e0fcc9@SRV-MAIL-001.zhaw.ch> Message-ID: <2A4779CD-5319-4C6C-8363-32998BD8B1B1@zhaw.ch> HI Jarkko Sorry. My mistake. I have to add the GEs to the tracker. Will do this asap and send you an email, when done. Christof Am 22.11.2013 um 11:50 schrieb Jarkko Vatjus-Anttila : > When feeding new data into the tracker, the GE list does not contain our GEs, but instead only "General" and "middleware". Shall we pick either one of those, or should someone define collectively our GEs somewhere? I trust we cannot create new GEs for this list, am I right? > > -- > Jarkko Vatjus-Anttila > VP, Technology > Cyberlightning Ltd. > > mobile. +358 405245142 > email. jarkko at cyberlightning.com > > Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. > Get your free evaluation version and buy it now! www.cybersli.de > > www.cyberlightning.com > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mach at zhaw.ch Fri Nov 22 12:53:18 2013 From: mach at zhaw.ch (Christof Marti) Date: Fri, 22 Nov 2013 12:53:18 +0100 Subject: [Fiware-miwi] Missing GE definitions in the tracker In-Reply-To: <2A4779CD-5319-4C6C-8363-32998BD8B1B1@zhaw.ch> References: <5ace166292a14313947a8f5432e0fcc9@SRV-MAIL-001.zhaw.ch> <2A4779CD-5319-4C6C-8363-32998BD8B1B1@zhaw.ch> Message-ID: Hi All the Generic Enabler?s are now available in the tracker. Please update your existing entries to the correct GE. Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering Phone: +41 58 934 70 63 Skype: christof-marti Am 22.11.2013 um 12:02 schrieb Christof Marti : > HI Jarkko > > Sorry. My mistake. I have to add the GEs to the tracker. > Will do this asap and send you an email, when done. 
> Christof > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni at playsign.net Fri Nov 22 13:09:35 2013 From: toni at playsign.net (Toni Alatalo) Date: Fri, 22 Nov 2013 14:09:35 +0200 Subject: [Fiware-miwi] POI -> 3DUI integration Message-ID: <0D0B04FC-6C2B-4606-B88E-2FD61A742590@playsign.net> A first test for the integration of the POI & 3DUI GEs is finished now. Live demo: http://playsign.tklapp.com:8000/POIThreeJS/POI.html Code & docs: https://github.com/playsign/POIThreeJS Developed with Chrome. I saw Ari Okkonen running it ok with Firefox yesterday with a small glitch: mouse clicks to turn the camera opened the browser context menu or so. Known issue, copy-pasted from the readme: - mouse clicks can go through the 2d UI elements to the 3d UI. This is reportedly solved by Adminotech's 2DUI GE / input system. So the solution is to integrate with that somehow. The main motivation for this effort was to learn in practice about the integration of 3DUI and the other GEs: what kind of APIs they require, how they work together etc. Brief points about the main observations: 1. It seems ok to handle POIs separately; they don't necessarily need to be a part of the entity system data. What I mean is: this client-side solution works so that it gets the 3d scene from 3DUI, and fetches the POI data itself. It has those as two separate sets of data. This seems fine as the POIs are handled specifically anyway: the visualisations and interactions are created for them. OTOH if this was a client-server setup, the server could do the same as this client does, and at that point the client would receive only the visualisations + interaction definitions normally as a part of the scene, and would not necessarily need to know that they are POIs .. they'd just work using basic features (meshes, billboards, click handling). I try to clarify this with pseudocode still:

/* a) attempt to unify everything to the same data container somehow */
myscene = 3dui.load("oulu");
myscene.extendFromUrl("http://poi.com/oulu"); // fetches pois and adds the data from them to the scene / dom / generic overall container
pois = myscene.entitiesWithComponent(POI);    // gets the entities that the previous fetch created
// .. handle the pois

/* b) current code with 3d scene & poi data separately */
myscene = 3dui.load("oulu");
pois = poi.searchPOIs("http://poi.com/oulu");
// .. handle the pois

This is debatable and I'm not sure of my own stand yet either; certainly some kind of POI component might well be a nice way etc. Anyhow, now we at least have a concrete basis to think about that more. 2. It would be nice for the service backend type of GEs to provide client side libraries for easy access to the services.
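As an illustration of the kind of small wrapper meant here - the endpoint URL, query parameters and response shape below are assumptions for the sketch, not the actual POI GE API:

function fetchPOIs(baseUrl, lat, lon, radius, callback) {
  var url = baseUrl + '?lat=' + lat + '&lon=' + lon + '&radius=' + radius;
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.onload = function () {
    if (xhr.status === 200) {
      callback(null, JSON.parse(xhr.responseText)); // assumed: JSON array of POIs
    } else {
      callback(new Error('POI request failed: ' + xhr.status));
    }
  };
  xhr.send();
}

// usage: fetch POIs around the city centre and hand them to app-specific 3D UI code
fetchPOIs('http://poi.example.com/search', 65.012, 25.471, 500, function (err, pois) {
  if (!err) { pois.forEach(addPOIMarkerToScene); } // addPOIMarkerToScene is app code, not part of the GE
});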
Even though doing http requests is trivial from anywhere, it is still even nicer to just call an existing JS func from a lib. In this case, CIE POI folks? ?demo4? (featured at F2F & video, the google maps integration) was a nice enough up-to-date client side code we could adopt here. Just needed to clean the searchPOIs function from the few places that depended on google maps, result of that cleaned 'function searchPOIs(lat, lng)? is at https://github.com/playsign/POIThreeJS/blob/master/POI.js#L369 3. Good solutions for integrating 2d elements to 3d scene would be nice to provide as components, here Tapani reused the html (jquery.ui) 2d widget to 3d scene placing code from the twitter thing prepared for London earlier. Also we now encountered a concrete need to integrate Adminotech?s 2D UI input management biz I think, to know whether clicks hit 2d elements or the 3d scene. Final note: the POI schema & backend supports having more data, pictures, urls etc. and those are populated in the DB for some test points at university. Unfortunately for the city center for which we have the 3d model there is no content yet apart from the poi names. The 3d client already supports showing the data, we talked with POI folks about populating something to the center. However I did not want to further delay informing the MIWI group about this as the dev work was already completed and we?ve moved to other things (asset pipe & net sync integration) .. you?re all programmers so can imagine it :p Cheers, ~Toni From Philipp.Slusallek at dfki.de Sat Nov 23 17:28:58 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sat, 23 Nov 2013 17:28:58 +0100 Subject: [Fiware-miwi] POI -> 3DUI integration In-Reply-To: <0D0B04FC-6C2B-4606-B88E-2FD61A742590@playsign.net> References: <0D0B04FC-6C2B-4606-B88E-2FD61A742590@playsign.net> Message-ID: <5290D7CA.4010706@dfki.de> Hi Toni, Nice work! From my POV both cases (a) and (b) are both perfectly valid way of implementing apps that use POIs and since we are providing tools for developers to choose from, we should actually provide both (or at least not prohibit one of them even if we only provide the other). One thing that would be very useful in this context is the definition of a base POI WebComponent. This could receive the POI data, handle the display and interaction part, and offer to be configurable and stylable through CSS. Best, Philipp Am 22.11.2013 13:09, schrieb Toni Alatalo: > A first test for the integration of POI & 3DUI GEs is finished now. > > Live demo: http://playsign.tklapp.com:8000/POIThreeJS/POI.html > Code & docs: https://github.com/playsign/POIThreeJS > > Developed with Chrome. I saw Ari Okkonen running it ok with Firefox yesterday with a small glitch: mouse clicks to turn camera opened browser context menu or so. > > Known issue, copy pasted from the readme: > - mouse clicks can go through the 2d UI elements to the 3d UI. This is reportedly solved by Adminotech's 2DUI GE / input system. So the solution is to integrate with that somehow. > > The main motivation for this effort was to learn in practice about the integration of 3DUI and other GEs. What kind of APIs they require, how they work together etc. Brief points about main observations: > > 1. It seems ok to handle POIs separately, they don?t necessarily need to a part of the entity system data. What I mean is: this client-side solution works so that it gets the 3d scene from 3DUI, and fetches the POI data itself. It has those as two separate sets of data. 
This seems fine as the POIs are handled specifically anyways: the visualisations and interactions are created for them. OTOH if this was a client-server setup, the server could do the same as this client does, and at that point the client would receive only the visualisations + interaction definitions normally as a part of the scene and would not necessarily need to know that they are POIs .. they?d just work using basic features (meshes, billboards, click handling). I try to clarify this with pseudocode still: > > /* a) attempt to unify everything to the same data container somehow */ > myscene = 3dui.load(?oulu?); > myscene.extendFromUrl("http://poi.com/oulu?); //fetches pois and adds the data from them to the scene / dom / generic overall container > pois = myscene.entitiesWithComponent(POI); //gets the entities that the previous fetch created > //.. handle the pois > > /* b) current code with 3d scene & poi data separately */ > myscene = 3dui.load(?oulu?); > pois = poi.searchPOIs("http://poi.com/oulu?) > //.. handle the pois > > This is debatable and I?m not sure of my own stand either yet, certainly come kind of POI component might well be a nice way etc. Anyhow now we at least have concrete basis to think of that more. > > 2. It would be nice for the service backend type of GEs to provide client side libraries for easy access to the services. Even though doing http requests is trivial from anywhere, it is still even nicer to just call an existing JS func from a lib. In this case, CIE POI folks? ?demo4? (featured at F2F & video, the google maps integration) was a nice enough up-to-date client side code we could adopt here. Just needed to clean the searchPOIs function from the few places that depended on google maps, result of that cleaned 'function searchPOIs(lat, lng)? is at https://github.com/playsign/POIThreeJS/blob/master/POI.js#L369 > > 3. Good solutions for integrating 2d elements to 3d scene would be nice to provide as components, here Tapani reused the html (jquery.ui) 2d widget to 3d scene placing code from the twitter thing prepared for London earlier. Also we now encountered a concrete need to integrate Adminotech?s 2D UI input management biz I think, to know whether clicks hit 2d elements or the 3d scene. > > Final note: the POI schema & backend supports having more data, pictures, urls etc. and those are populated in the DB for some test points at university. Unfortunately for the city center for which we have the 3d model there is no content yet apart from the poi names. The 3d client already supports showing the data, we talked with POI folks about populating something to the center. However I did not want to further delay informing the MIWI group about this as the dev work was already completed and we?ve moved to other things (asset pipe & net sync integration) .. you?re all programmers so can imagine it :p > > Cheers, > ~Toni > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. 
Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From habl at zhaw.ch Mon Nov 25 11:47:41 2013 From: habl at zhaw.ch (Mathias =?iso-8859-1?Q?Habl=FCtzel?=) Date: Mon, 25 Nov 2013 11:47:41 +0100 Subject: [Fiware-miwi] [CISPA Certification of KIARA] KASLR Bypass mitigation under Windows Message-ID: <20131125104741.GB69480@clt-mob-t-6257.local> Hi, came across this post which may be interesting for the CISPA team which works on certifing KIARA: http://www.alex-ionescu.com/?p=82 Since I'm not at all into Windows I cannot judge the value of this blogpost, so ? sorry if you consider this to be obvious, spam or unneeded. sincerly Mathias -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mach at zhaw.ch Tue Nov 26 19:21:26 2013 From: mach at zhaw.ch (Christof Marti) Date: Tue, 26 Nov 2013 19:21:26 +0100 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon Message-ID: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> Hi everybody I have to attend an important telco tomorrow wednesday from 09:00 to ~11:00 (CET). Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? BR, Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering Phone: +41 58 934 70 63 Skype: christof-marti From jarkko at cyberlightning.com Tue Nov 26 19:33:25 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Tue, 26 Nov 2013 20:33:25 +0200 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon In-Reply-To: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> References: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> Message-ID: Hello, I would have time to attend for 30 minutes before my next meeting. Otherwise ok. On Tue, Nov 26, 2013 at 8:21 PM, Christof Marti wrote: > Hi everybody > > I have to attend an important telco tomorrow wednesday from 09:00 to > ~11:00 (CET). > > Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? > > > BR, Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > Phone: +41 58 934 70 63 > Skype: christof-marti > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonne at adminotech.com Tue Nov 26 21:25:02 2013 From: jonne at adminotech.com (Jonne Nauha) Date: Tue, 26 Nov 2013 22:25:02 +0200 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon In-Reply-To: References: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> Message-ID: Fine for Adminotech. 
Best regards, Jonne Nauha Meshmoon developer at Adminotech Ltd. www.meshmoon.com On Tue, Nov 26, 2013 at 8:33 PM, Jarkko Vatjus-Anttila < jarkko at cyberlightning.com> wrote: > Hello, > > I would have time to attend for 30 minutes before my next meeting. > Otherwise ok. > > > > > On Tue, Nov 26, 2013 at 8:21 PM, Christof Marti wrote: > >> Hi everybody >> >> I have to attend an important telco tomorrow wednesday from 09:00 to >> ~11:00 (CET). >> >> Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? >> >> >> BR, Christof >> ---- >> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch >> Institut of Applied Information Technology - InIT >> Zurich University of Applied Sciences - ZHAW >> School of Engineering >> Phone: +41 58 934 70 63 >> Skype: christof-marti >> >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> > > > > -- > Jarkko Vatjus-Anttila > VP, Technology > Cyberlightning Ltd. > > mobile. +358 405245142 > email. jarkko at cyberlightning.com > > Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. > Get your free evaluation version and buy it now! www.cybersli.de > > www.cyberlightning.com > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Philipp.Slusallek at dfki.de Wed Nov 27 04:26:17 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 27 Nov 2013 04:26:17 +0100 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon In-Reply-To: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> References: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> Message-ID: <52956659.4000701@dfki.de> Hi, I have a meeting at 13h that I cannot change but might be able to switch move a meeting at 14h and join then. Best, Philipp Am 26.11.2013 19:21, schrieb Christof Marti: > Hi everybody > > I have to attend an important telco tomorrow wednesday from 09:00 to ~11:00 (CET). > > Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? > > > BR, Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > Phone: +41 58 934 70 63 > Skype: christof-marti > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... 
Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Wed Nov 27 06:30:09 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 27 Nov 2013 06:30:09 +0100 Subject: [Fiware-miwi] Fwd: Mixamo animation web service In-Reply-To: <7F48ADF8-8ED5-44EE-81EB-571E37734A7F@dfki.de> References: <7F48ADF8-8ED5-44EE-81EB-571E37734A7F@dfki.de> Message-ID: <52958361.3020006@dfki.de> Hi, This sounds quite interesting. Has anyone any experience with them. It seems that such Web Services (Autorigging) could be very interesting at least for the Virtual Character aspect. Best, Philipp -------- Original-Nachricht -------- Betreff: Mixamo animation web service Datum: Tue, 26 Nov 2013 11:06:05 +0100 Von: Alexis Heloir An: Philipp Slusallek Hi, Following our short conversation, here is a link pointing towards the mixamo animation webservice. It's becoming quite popular among independent game developers. They accept multiple output format like FBX, collada and Blender. Http://mixamo.com Best, Alexis -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From toni.dahl at cyberlightning.com Wed Nov 27 09:22:34 2013 From: toni.dahl at cyberlightning.com (Toni Dahl) Date: Wed, 27 Nov 2013 10:22:34 +0200 Subject: [Fiware-miwi] XML3D.js WebCL Integration Message-ID: FYI Felix and Kristian, I made a pull request to XML3D.js repo concerning the integration of WebCL platform into XML3D.js: https://github.com/xml3d/xml3d.js/pull/33 . The proposed WebCL API is due to change as the current WebCL specification and implementations (Nokia's webcl plugin and Samsung's WebKit version) evolves. Also, we will add new features when they are needed. Especially, we will concentrate on adding the features that are needed in the Xflow WebCL integration on which we will start focusing now. I think it is important that we start the WebCL -> XML3D.js integration process in an early phase, so you don't have to merge a massive feature branch, so please take a look into the pull request. -- Toni Dahl Software Developer Cyberlightning Ltd. Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lasse.oorni at ludocraft.com Wed Nov 27 09:37:12 2013 From: lasse.oorni at ludocraft.com (=?iso-8859-1?Q?=22Lasse_=D6=F6rni=22?=) Date: Wed, 27 Nov 2013 10:37:12 +0200 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon In-Reply-To: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> References: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> Message-ID: > Hi everybody > > I have to attend an important telco tomorrow wednesday from 09:00 to > ~11:00 (CET). > > Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? 
Hi, I have another meeting starting at 11:00 and I'm not sure if I'm yet back at 13:00. But that's OK, I will attend if possible. - Lasse From toni at playsign.net Wed Nov 27 10:04:31 2013 From: toni at playsign.net (Toni Alatalo) Date: Wed, 27 Nov 2013 11:04:31 +0200 Subject: [Fiware-miwi] A Candidate AssetPipe: glTF aka. Collada JSON Message-ID: The recommended asset pipeline is one of our main goals for the 3DUI now, https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/FIWARE.Feature.MiWi.3D-UI.RecommendedAssetPipeline We have now studied and experimented with a candidate solution: glTF, the proposed gl Transfer Format by the Chronos group (creators of OpenGL, WebGL and COLLADA), http://www.khronos.org/gltf . Erno discovered it from SIGGRAPH proceedings I think. This is a quick preliminary heads-up of the activity ? also to call for views / opinions about that tech if e.g. the DFKI folks already know whether it could be the solution for FI-WARE. The motivation for the new format and the associated complete pipeline is explained with a nice diagram in https://github.com/KhronosGroup/glTF/blob/master/specification/README.md . The idea is simple: use COLLADA to export from modeling tools and COLLADA2GLTF to convert it to the runtime format, optimized for transfers and easy loading. This way it covers all modeling apps without having to do anything in/with them (unlike the Three?s ?native? JSON export that we used in the first city rendering test). The scene/library format is similar to COLLADA but as JSON, with pointers to ready made byte array buffers in an external file with the geometry. Compressions such as CTM for geometry are planned to be supported as plugins. The spec is not finished yet and has been in fact living quite a lot during the past half a year. It is expected to be finished early next year. This is not a bad schedule for us as we can now verify whether it works for our needs now or does it need changes. We can also help with completing the work (e.g. loader(s)). Yesterday we finally got our test working: the old optimized 9 blocks city model, exported from Blender as COLLADA, shows with the raw WebGL glTF viewer: http://playsign.tklapp.com:8000/glTF-webgl-viewer/ (Oulu model is default but you can use the gui menu to switch to a few other models, bundled with the viewer project by glTF folks). This took a few days of intense testing and debugging. Main problem was incompatibilities with different versions of the glTF converter and the loaders: current converter is not compatible with any viewer we know :) .. this works by using a version from Nov ~5th, luckily the glTF files nowadays have the git hash of the version used so we could find this out from the example files that worked. Current test is with uncompressed data, we?ll test with compressed next (if it?s implemented in the loader). We also had problems with getting Blender export COLLADA correctly (texture / texcoord export settings). There is also a Three.js loader but so far it seems to work with the spec from about 5-6 months ago. We have not compiled that version of the converter yet to test and have not been able to load any own exports to Three. The guy working on the Three loader (Tony Parisi, one of the spec authors, author of a O?Reilly WebGL book and co-chair of the San Francisco WebGL meets) has worked on it a month ago to add shader loading support and animation / skinning loading earlier. I think we?ll test the Three loader soon again to see if can actually get animated models (e.g. 
avatars) over & animations working. So this is still early and definitely not conclusive. I am not surprised if we find out that the whole glTF idea based on COLLADA does not work. I?ve been evaluating COLLADA for soon 10 years and yet very rarely actually found it useful. But so far it looks that the time to jump to it could come now .. in 2014, in time for 10 year celebration of the 1.0 spec from 2004 :) . Second Life and hence Opensimulator does happily rely on COLLADA only for getting meshes in (they have their own optimized LLMesh format for transfers then). There?s still way to go with the glTF pipeline so for now (probably for a month still) our recommendation for getting scenes simply from e.g. Blender to Three.js is the Three JSON format (used in the master branch of the city rendering repo & live demo), and CTM for an optimized solution (but it requires custom setup to set materials). Am very curious to hear what the DFKI gfx / XML3D folks think of this ? some of you perhaps attended the SIGGRAPH sessions even?. Thanks to Tapani again for the valiant efforts in sorting all kinds of problems with getting started with this! Was a quite nice pair programming / battle we had.. Cheers, ~Toni -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni at playsign.net Wed Nov 27 10:06:25 2013 From: toni at playsign.net (Toni Alatalo) Date: Wed, 27 Nov 2013 11:06:25 +0200 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon In-Reply-To: References: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> Message-ID: <24BB5C6D-B7B8-49DF-A5FE-3B9990791020@playsign.net> For Playsign this is fine ? we?re also pretty much up-to-date about Lasse?s sync GE work so can fill in a bit if there?s questions, in case he can?t make it. On 27 Nov 2013, at 10:37, Lasse ??rni wrote: >> Hi everybody >> >> I have to attend an important telco tomorrow wednesday from 09:00 to >> ~11:00 (CET). >> >> Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? > > Hi, I have another meeting starting at 11:00 and I'm not sure if I'm yet > back at 13:00. But that's OK, I will attend if possible. > > - Lasse > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi From mach at zhaw.ch Wed Nov 27 10:54:35 2013 From: mach at zhaw.ch (Christof Marti) Date: Wed, 27 Nov 2013 10:54:35 +0100 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon In-Reply-To: References: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> Message-ID: <921B254F-0525-48B9-871A-473858CEC803@zhaw.ch> Hi OK. My propose is to split the meeting: We will start with the WP13 meeting at 13:00 CET to handle all the current FI-WARE topics. Would be good to have as many GE owners as possible here because this is about the coming deliverables. For this we will use the telco infrastructure Then we will continue the call at 14:00 CET with the discussion of the TF continuation proposal. Would be good to have one representative from each partner here. We could use google hangout for this. Any objections? Best regards, Christof Am 27.11.2013 um 04:26 schrieb Philipp Slusallek : > Hi, > > I have a meeting at 13h that I cannot change but might be able to switch > move a meeting at 14h and join then. > > Best, > > Philipp > > Am 26.11.2013 19:21, schrieb Christof Marti: >> Hi everybody >> >> I have to attend an important telco tomorrow wednesday from 09:00 to ~11:00 (CET). 
>> >> Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? >> >> >> BR, Christof >> ---- >> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch >> Institut of Applied Information Technology - InIT >> Zurich University of Applied Sciences - ZHAW >> School of Engineering >> Phone: +41 58 934 70 63 >> Skype: christof-marti >> >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > From jarkko at cyberlightning.com Wed Nov 27 12:15:58 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Wed, 27 Nov 2013 13:15:58 +0200 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon In-Reply-To: <921B254F-0525-48B9-871A-473858CEC803@zhaw.ch> References: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> <921B254F-0525-48B9-871A-473858CEC803@zhaw.ch> Message-ID: Yes this is ok at least for cyberlightning. I will attend at 13 and esa posio will attend at 14. 27.11.2013 11.55 kirjoitti "Christof Marti" : > Hi > > OK. My propose is to split the meeting: > > We will start with the WP13 meeting at 13:00 CET to handle all the current > FI-WARE topics. > Would be good to have as many GE owners as possible here because this is > about the coming deliverables. > For this we will use the telco infrastructure > > Then we will continue the call at 14:00 CET with the discussion of the TF > continuation proposal. > Would be good to have one representative from each partner here. > We could use google hangout for this. > > Any objections? > > Best regards, Christof > > > Am 27.11.2013 um 04:26 schrieb Philipp Slusallek < > Philipp.Slusallek at dfki.de>: > > > Hi, > > > > I have a meeting at 13h that I cannot change but might be able to switch > > move a meeting at 14h and join then. > > > > Best, > > > > Philipp > > > > Am 26.11.2013 19:21, schrieb Christof Marti: > >> Hi everybody > >> > >> I have to attend an important telco tomorrow wednesday from 09:00 to > ~11:00 (CET). > >> > >> Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? > >> > >> > >> BR, Christof > >> ---- > >> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > >> Institut of Applied Information Technology - InIT > >> Zurich University of Applied Sciences - ZHAW > >> School of Engineering > >> Phone: +41 58 934 70 63 > >> Skype: christof-marti > >> > >> > >> > >> _______________________________________________ > >> Fiware-miwi mailing list > >> Fiware-miwi at lists.fi-ware.eu > >> https://lists.fi-ware.eu/listinfo/fiware-miwi > >> > > > > > > -- > > > > ------------------------------------------------------------------------- > > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > > Trippstadter Strasse 122, D-67663 Kaiserslautern > > > > Gesch?ftsf?hrung: > > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > > Dr. Walter Olthoff > > Vorsitzender des Aufsichtsrats: > > Prof. Dr. h.c. Hans A. 
Aukes > > > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > > > --------------------------------------------------------------------------- > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni at playsign.net Wed Nov 27 12:06:50 2013 From: toni at playsign.net (Toni Alatalo) Date: Wed, 27 Nov 2013 13:06:50 +0200 Subject: [Fiware-miwi] EntSys Sync & UI integration: separate EC data & view objects Message-ID: Another early w.i.p. report, now about the integration of the Sync GE with the rest of the client & 3DUI. Summary: we propose separating the data & view parts of EC implementations. Main motivation is for client side network code to be independent of WebGL so that e.g. a 2d HTML client and headless Node.js bot / service development is easy. This is achieved by packaging the scene & sync code so that also without 3d rendering the scene gets populated normally and manipulating it works. This may sound obvious but is not how any of the existing realXtend clients have been implemented so far. The design is not verified by an implementation yet, Erno is working on it, but we discussed with Lasse last week and decided to give a shot. Longer version: Current Tundra clients all feature EC classes that bundle both 1) the data model definition as the set of attributes and 2) the implementation of the component (with e.g. 3d graphics, physics engine or audio playback) in the same class. For example in the C++ Tundra we have the data model of a Mesh in https://github.com/realXtend/naali/blob/tundra2/src/Core/OgreRenderingModule/EC_Mesh.cpp#L39 and the code to show the mesh with Ogre3D later in the same file in EC_Mesh::SetMesh, https://github.com/realXtend/naali/blob/tundra2/src/Core/OgreRenderingModule/EC_Mesh.cpp#L250 . The equivalent in Chiru-WebClient, using Three.js the same way, is in https://github.com/Chiru/Chiru-WebClient/blob/master/src/ecmodel/EC_Mesh.js (and AFAIK WebRocket is made the same way). This introduces a hard dependency for the 3d rendering. Initially when Tundra (then known as Naali) was made as a 3d client only codebase, against Opensimulator, it was not a problem. There was already a networking-only client implementation of the same LLUDP protocol, LibOMV, which works well for making services, bots or custom 2d clients. When we switched to using the Naali codebase as the server too and renamed it to Tundra we lost that. Support for headless servers was hacked in by adding boolean !headless checks and also with the NullRenderer plugin to Ogre. But there?s no nice small network lib for custom clients. Now we have a chance to get it right in the Web client code. Regarding use cases, we already have an existing custom 2d client for mobile phones from before, made by a 3rd party ? NIMO project?s Daniele Zanni implemented a minimal 2d html GUI for mobile phones for his Street Art Gangs city game by using websockets to connect to Tundra to send entity actions for the game app running there .. so people can run around the city and make virtual graffititi, by pressing a big button in the phone display, no view of the 3d scene is necessary. This was easy against the naive 1st Tundra websocket server made by Playsign which used plain JSON for comms. 
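For illustration, such a 2D / headless client boils down to little more than a WebSocket and some JSON. Here is a minimal sketch, assuming a Node.js environment with the common 'ws' package; the server address, message shape and entity id are made up for the example and are not the actual Tundra wire format:

    // Hypothetical headless client: no 3D rendering, just a socket and JSON messages.
    var WebSocket = require('ws');
    var socket = new WebSocket('ws://localhost:2345');

    socket.on('open', function () {
        // e.g. the "big button" in the phone GUI triggering a game action
        socket.send(JSON.stringify({
            type: 'EntityAction',
            entity: 42,                      // made-up target entity id
            action: 'PaintGraffiti',
            params: { x: 10.0, y: 3.0 }
        }));
    });

    socket.on('message', function (data) {
        console.log('server said: ' + data); // react to replies without any 3D view
    });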
It could easily be nice to have scene sync working there too, for example to show the locations of other players on a map, better than just e.g. sending actions with HTTP in a disconnected mode. Another usage we envision is to run services or bots as e.g. Node.js apps, on external servers which can connect to Tundra servers. This might be a good solution for what the SceneAPI Epic is calling for. With this in mind Lasse started the work on the Sync GE in a specific repo, https://github.com/realXtend/WebTundraNetworking/ . I did not think this through then and thought it would mostly be the kNet and TundraProtocol parts, to manage the connection and parse messages etc., not the ECs. But we soon realized that for nice usage the networking module should actually have the entire scene model. So for example a Node.js bot is written with the same API as 3D client functionality: the networking takes care that the scene is there with all the data and the app code can just use it. It would be all too low level for every client to need to handle the network messages themselves. This is identical to LibOMV: it provides a readily populated scene-like object which custom client / bot code can just read & manipulate. For example, all Avatars in a sim are in this dict in the client-side Simulator object: http://lib.openmetaverse.org/docs/trunk/html/F_OpenMetaverse_Simulator_ObjectsAvatars.htm We plan to implement this by making the data & view parts of ECs separate classes, somehow. Lasse already has the data parts for a set of core components, e.g. Mesh in https://github.com/realXtend/WebTundraNetworking/blob/master/src/scene/EC_Mesh.js (the focus there is to have something to populate the data from the net, to verify that the messaging works). Erno is working to get some kind of view objects to show the data with WebGL, but so that the client also works without those. Erno has also continued with the DOM integration investigations, testing the attribute mutation observers to inform the rest of the client of changes in the data (Lasse's code doesn't do that part). However, to support Node.js (and similar) non-browser usage of the networking module it seems better to have the DOM integration as an optional feature (Node.js does not have, nor plans to have, a native DOM implementation). This way the internal data representation can use Lasse's attribute objects, which are typed and handle their own serialization, in https://github.com/realXtend/WebTundraNetworking/blob/master/src/scene/Attribute.js . We'll see how the details of the DOM integration go as the implementation proceeds. Again, this is now basically just an idea and the purpose of this post is to give an early warning and the possibility to comment. Do note that none of us involved here are fans of fancy OO designs or layers of abstraction etc. (this design may or may not be what MVC is :) and we definitely want to implement this separation of concerns in the most straightforward and efficient way (e.g. not with events but for example by EC data objects having the corresponding active view implementation as a member that they can call directly for updates, or vice versa .. let's see). If we come up with a good design it might be interesting to refactor the native Tundra code to that as well (to not depend on Ogre for headless runs, and perhaps make it easier to implement alternative renderers .. it is probably more complex in the land of static typing though). 
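To make the idea concrete, here is a rough sketch of the intended split in plain JavaScript; every class and method name below is invented for the example and does not reflect the actual WebTundra code:

    // Data part: typed attributes and (de)serialization only, no rendering dependencies.
    function ECMeshData() {
        this.meshRef = '';    // in practice one of the typed Attribute objects
        this.view = null;     // optional view counterpart; stays null in headless use
    }

    ECMeshData.prototype.setMeshRef = function (ref) {
        this.meshRef = ref;
        if (this.view)                        // direct call, no event machinery
            this.view.onMeshRefChanged(ref);
    };

    // View part: instantiated only when a WebGL renderer is present.
    function ECMeshThreeView(threeScene) {
        this.scene = threeScene;              // e.g. a THREE.Scene
        this.mesh = null;                     // the THREE.Mesh once the asset is loaded
    }

    ECMeshThreeView.prototype.onMeshRefChanged = function (ref) {
        // load the asset behind 'ref' and add the resulting mesh to this.scene
        // (asset loading is omitted from this sketch)
    };

A Node.js bot would instantiate only ECMeshData; a 3D client would additionally create the view object and assign it to the data object's view member, so the application-facing interface stays the same in both cases.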
One worry I had was that application programming would become more complex if app code would need to know about separate data & view objects and could not just manipulate the single ECs. But I think this worry was moot as the data / model object is always the interface, and the separated view is a hidden implementation mechanism behind that. Other worry is that implementing new components becomes more complex but that?s may not be the case either ? should be simple to make two classes following a pattern well established by the core components, and they can probably be in a single JS file. Finally about repositories: current plan is to do this all in a single repo by renaming the current WebTundraNetworking -> WebTundra and add a renderer/view directory there to accompany scene and networking. Then a headless client is just a configuration which does not need the view part. We can later split those to separate repos if needed, perhaps use submodules to aggregate them as the full 3d client etc., but we figured it?s best now to collaborate in a single one as the parts are tightly integrated and in flux. Cheers, ~Toni From tapani at playsign.net Wed Nov 27 12:53:43 2013 From: tapani at playsign.net (=?ISO-8859-1?B?VGFwYW5pIErkbXPk?=) Date: Wed, 27 Nov 2013 13:53:43 +0200 Subject: [Fiware-miwi] A Candidate AssetPipe: glTF aka. Collada JSON In-Reply-To: References: Message-ID: Model updated to compressed version (Open3DGC). Binary size dropped from 23.2mb to 3.7mb. Viewer takes some time to initialize. -Tapani 2013/11/27 Toni Alatalo > The recommended asset pipeline is one of our main goals for the 3DUI now, > https://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php/FIWARE.Feature.MiWi.3D-UI.RecommendedAssetPipeline > > We have now studied and experimented with a candidate solution: glTF, the > proposed gl Transfer Format by the Chronos group (creators of OpenGL, WebGL > and COLLADA), http://www.khronos.org/gltf . Erno discovered it from > SIGGRAPH proceedings I think. > > This is a quick preliminary heads-up of the activity ? also to call for > views / opinions about that tech if e.g. the DFKI folks already know > whether it could be the solution for FI-WARE. > > The motivation for the new format and the associated complete pipeline is > explained with a nice diagram in > https://github.com/KhronosGroup/glTF/blob/master/specification/README.md . > The idea is simple: use COLLADA to export from modeling tools and > COLLADA2GLTF to convert it to the runtime format, optimized for transfers > and easy loading. This way it covers all modeling apps without having to do > anything in/with them (unlike the Three?s ?native? JSON export that we used > in the first city rendering test). The scene/library format is similar to > COLLADA but as JSON, with pointers to ready made byte array buffers in an > external file with the geometry. Compressions such as CTM for geometry are > planned to be supported as plugins. > > The spec is not finished yet and has been in fact living quite a lot > during the past half a year. It is expected to be finished early next year. > This is not a bad schedule for us as we can now verify whether it works for > our needs now or does it need changes. We can also help with completing the > work (e.g. loader(s)). 
> > Yesterday we finally got our test working: the old optimized 9 blocks city > model, exported from Blender as COLLADA, shows with the raw WebGL glTF > viewer: http://playsign.tklapp.com:8000/glTF-webgl-viewer/ (Oulu model is > default but you can use the gui menu to switch to a few other models, > bundled with the viewer project by glTF folks). > > This took a few days of intense testing and debugging. Main problem was > incompatibilities with different versions of the glTF converter and the > loaders: current converter is not compatible with any viewer we know :) .. > this works by using a version from Nov ~5th, luckily the glTF files > nowadays have the git hash of the version used so we could find this out > from the example files that worked. Current test is with uncompressed data, > we?ll test with compressed next (if it?s implemented in the loader). We > also had problems with getting Blender export COLLADA correctly (texture / > texcoord export settings). > > There is also a Three.js loader but so far it seems to work with the spec > from about 5-6 months ago. We have not compiled that version of the > converter yet to test and have not been able to load any own exports to > Three. The guy working on the Three loader (Tony Parisi, one of the spec > authors, author of a O?Reilly WebGL book and co-chair of the San Francisco > WebGL meets) has worked on it a month ago to add shader loading support and > animation / skinning loading earlier. I think we?ll test the Three loader > soon again to see if can actually get animated models (e.g. avatars) over & > animations working. > > So this is still early and definitely not conclusive. I am not surprised > if we find out that the whole glTF idea based on COLLADA does not work. > I?ve been evaluating COLLADA for soon 10 years and yet very rarely actually > found it useful. But so far it looks that the time to jump to it could come > now .. in 2014, in time for 10 year celebration of the 1.0 spec from 2004 > :) . Second Life and hence Opensimulator does happily rely on COLLADA only > for getting meshes in (they have their own optimized LLMesh format for > transfers then). There?s still way to go with the glTF pipeline so for now > (probably for a month still) our recommendation for getting scenes simply > from e.g. Blender to Three.js is the Three JSON format (used in the master > branch of the city rendering repo & live demo), and CTM for an optimized > solution (but it requires custom setup to set materials). Am very curious > to hear what the DFKI gfx / XML3D folks think of this ? some of you perhaps > attended the SIGGRAPH sessions even?. > > Thanks to Tapani again for the valiant efforts in sorting all kinds of > problems with getting started with this! Was a quite nice pair programming > / battle we had.. > > Cheers, > ~Toni > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mach at zhaw.ch Wed Nov 27 13:06:27 2013 From: mach at zhaw.ch (Christof Marti) Date: Wed, 27 Nov 2013 13:06:27 +0100 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon In-Reply-To: <94be347164934ea7abc189f25c875913@SRV-MAIL-001.zhaw.ch> References: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> <921B254F-0525-48B9-871A-473858CEC803@zhaw.ch> <94be347164934ea7abc189f25c875913@SRV-MAIL-001.zhaw.ch> Message-ID: Here are the minutes/agenda: https://docs.google.com/document/d/1THeRIPBTq3IVgUaMV_6yTSNU6K2QgPXlxATKO607YDk/edit I will start the telco right now. Christof Am 27.11.2013 um 12:15 schrieb Jarkko Vatjus-Anttila : > Yes this is ok at least for cyberlightning. I will attend at 13 and esa posio will attend at 14. > > 27.11.2013 11.55 kirjoitti "Christof Marti" : > Hi > > OK. My propose is to split the meeting: > > We will start with the WP13 meeting at 13:00 CET to handle all the current FI-WARE topics. > Would be good to have as many GE owners as possible here because this is about the coming deliverables. > For this we will use the telco infrastructure > > Then we will continue the call at 14:00 CET with the discussion of the TF continuation proposal. > Would be good to have one representative from each partner here. > We could use google hangout for this. > > Any objections? > > Best regards, Christof > > > Am 27.11.2013 um 04:26 schrieb Philipp Slusallek : > > > Hi, > > > > I have a meeting at 13h that I cannot change but might be able to switch > > move a meeting at 14h and join then. > > > > Best, > > > > Philipp > > > > Am 26.11.2013 19:21, schrieb Christof Marti: > >> Hi everybody > >> > >> I have to attend an important telco tomorrow wednesday from 09:00 to ~11:00 (CET). > >> > >> Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? > >> > >> > >> BR, Christof > >> ---- > >> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > >> Institut of Applied Information Technology - InIT > >> Zurich University of Applied Sciences - ZHAW > >> School of Engineering > >> Phone: +41 58 934 70 63 > >> Skype: christof-marti > >> > >> > >> > >> _______________________________________________ > >> Fiware-miwi mailing list > >> Fiware-miwi at lists.fi-ware.eu > >> https://lists.fi-ware.eu/listinfo/fiware-miwi > >> > > > > > > -- > > > > ------------------------------------------------------------------------- > > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > > Trippstadter Strasse 122, D-67663 Kaiserslautern > > > > Gesch?ftsf?hrung: > > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > > Dr. Walter Olthoff > > Vorsitzender des Aufsichtsrats: > > Prof. Dr. h.c. Hans A. Aukes > > > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > > --------------------------------------------------------------------------- > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jarkko at cyberlightning.com Wed Nov 27 13:59:16 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Wed, 27 Nov 2013 14:59:16 +0200 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon In-Reply-To: <921B254F-0525-48B9-871A-473858CEC803@zhaw.ch> References: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> <921B254F-0525-48B9-871A-473858CEC803@zhaw.ch> Message-ID: Christof, I guess you will generate the hangout link? - j On Wed, Nov 27, 2013 at 11:54 AM, Christof Marti wrote: > Hi > > OK. My propose is to split the meeting: > > We will start with the WP13 meeting at 13:00 CET to handle all the current > FI-WARE topics. > Would be good to have as many GE owners as possible here because this is > about the coming deliverables. > For this we will use the telco infrastructure > > Then we will continue the call at 14:00 CET with the discussion of the TF > continuation proposal. > Would be good to have one representative from each partner here. > We could use google hangout for this. > > Any objections? > > Best regards, Christof > > > Am 27.11.2013 um 04:26 schrieb Philipp Slusallek < > Philipp.Slusallek at dfki.de>: > > > Hi, > > > > I have a meeting at 13h that I cannot change but might be able to switch > > move a meeting at 14h and join then. > > > > Best, > > > > Philipp > > > > Am 26.11.2013 19:21, schrieb Christof Marti: > >> Hi everybody > >> > >> I have to attend an important telco tomorrow wednesday from 09:00 to > ~11:00 (CET). > >> > >> Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? > >> > >> > >> BR, Christof > >> ---- > >> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > >> Institut of Applied Information Technology - InIT > >> Zurich University of Applied Sciences - ZHAW > >> School of Engineering > >> Phone: +41 58 934 70 63 > >> Skype: christof-marti > >> > >> > >> > >> _______________________________________________ > >> Fiware-miwi mailing list > >> Fiware-miwi at lists.fi-ware.eu > >> https://lists.fi-ware.eu/listinfo/fiware-miwi > >> > > > > > > -- > > > > ------------------------------------------------------------------------- > > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > > Trippstadter Strasse 122, D-67663 Kaiserslautern > > > > Gesch?ftsf?hrung: > > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > > Dr. Walter Olthoff > > Vorsitzender des Aufsichtsrats: > > Prof. Dr. h.c. Hans A. Aukes > > > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > > > --------------------------------------------------------------------------- > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mach at zhaw.ch Wed Nov 27 14:03:38 2013 From: mach at zhaw.ch (Christof Marti) Date: Wed, 27 Nov 2013 14:03:38 +0100 Subject: [Fiware-miwi] postpone of weekly meeting to the afternoon In-Reply-To: <88ecc5e8a9bd44c4b8098a68f98ddaae@SRV-MAIL-001.zhaw.ch> References: <80A130F4-1111-4059-9418-0EB6EE33369F@zhaw.ch> <921B254F-0525-48B9-871A-473858CEC803@zhaw.ch> <88ecc5e8a9bd44c4b8098a68f98ddaae@SRV-MAIL-001.zhaw.ch> Message-ID: Here is the link for the hangout: https://plus.google.com/hangouts/_/72cpj8phmt7orcvu2r1j1b87lk Christof Am 27.11.2013 um 13:59 schrieb Jarkko Vatjus-Anttila : > Christof, > > I guess you will generate the hangout link? > > - j > > > On Wed, Nov 27, 2013 at 11:54 AM, Christof Marti wrote: > Hi > > OK. My propose is to split the meeting: > > We will start with the WP13 meeting at 13:00 CET to handle all the current FI-WARE topics. > Would be good to have as many GE owners as possible here because this is about the coming deliverables. > For this we will use the telco infrastructure > > Then we will continue the call at 14:00 CET with the discussion of the TF continuation proposal. > Would be good to have one representative from each partner here. > We could use google hangout for this. > > Any objections? > > Best regards, Christof > > > Am 27.11.2013 um 04:26 schrieb Philipp Slusallek : > > > Hi, > > > > I have a meeting at 13h that I cannot change but might be able to switch > > move a meeting at 14h and join then. > > > > Best, > > > > Philipp > > > > Am 26.11.2013 19:21, schrieb Christof Marti: > >> Hi everybody > >> > >> I have to attend an important telco tomorrow wednesday from 09:00 to ~11:00 (CET). > >> > >> Can we bush back the WP13 telco into the afternoon 13:00-14:30 CET? > >> > >> > >> BR, Christof > >> ---- > >> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > >> Institut of Applied Information Technology - InIT > >> Zurich University of Applied Sciences - ZHAW > >> School of Engineering > >> Phone: +41 58 934 70 63 > >> Skype: christof-marti > >> > >> > >> > >> _______________________________________________ > >> Fiware-miwi mailing list > >> Fiware-miwi at lists.fi-ware.eu > >> https://lists.fi-ware.eu/listinfo/fiware-miwi > >> > > > > > > -- > > > > ------------------------------------------------------------------------- > > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > > Trippstadter Strasse 122, D-67663 Kaiserslautern > > > > Gesch?ftsf?hrung: > > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > > Dr. Walter Olthoff > > Vorsitzender des Aufsichtsrats: > > Prof. Dr. h.c. Hans A. Aukes > > > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > > --------------------------------------------------------------------------- > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > -- > Jarkko Vatjus-Anttila > VP, Technology > Cyberlightning Ltd. > > mobile. +358 405245142 > email. jarkko at cyberlightning.com > > Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. > Get your free evaluation version and buy it now! www.cybersli.de > > www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Philipp.Slusallek at dfki.de Wed Nov 27 14:53:08 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 27 Nov 2013 14:53:08 +0100 Subject: [Fiware-miwi] Text for TF-Proposal Message-ID: <5295F944.5050706@dfki.de> Hi all, Here is the text we talked about, in the call right now. It was sent to Juanjo and the TF core group: ---- -- Testing? Live Demos? ======================= Here I can suggest that together with the team from Oulu and interested others we could organize such a task (e.g. as part of WP13). We have the UI components to make this very appealing to the public (industry, media, and politics) and there is already a strong interest in real applications within this team (so these would not just be mockups but prototypes). This is further supported by our close links to the very popular FI-Content UC (e.g. with Disney and other great partners) as well as FITMAN. I think strengthening such interactions with UCs on a technical and practical side would be an important strong point for the TF proposal. I would be willing to organize this quickly if there is enough support in the team. Essentially, the goal would be to coordinate one or more larger demonstrators that would also involve the UC projects and their SEs. I believe this could be a great step forward in getting the UCs and Phase-III projects closer together and show what functionality we really have on offer. Thematically, this should probably be based on the notion of "Dual/Simulated Reality" scenario where represent and interact with the real world out there via the computer/mobile. We would be using IoT to gather data from large numbers of sensors and actuators (e.g. home, city, production line) this data is preprocessed with help from the Data chapter and then visualized (2D, VR, AR in the Web) via WP13, which would also allows the user to interact with the real scene via its virtual representation. I2ND, Cloud, and Security are supporting this all very nicely. If there is interest, I can provide a draft of such a task this weekend. It would be important to know who would want to contribute/support such activity in what way. -- Robotics: A robot cannot really be understood without its 3D environment. This is true for its sensing capabilities but even more so for its actuators. They are all placed on articulated structures, such that their 3D position, orientation, and motion has to be taken into account for many aspects. This cannot really be done in a pure IoT context where we essentially have a simply a collection of sensors and actuators but no real relationship between them or framework for interpreting or working with those relationships. This is the main reason why I had previously suggested that robotics would fit well into WP13, where we have exactly these frameworks for representing, computing and simulating the robot as well as its environment. We can then (semantically) link the many sensors and actuators to such a 3D structure and do things like move the actuator to a specific position, which in turn involves animating many joint angles correctly (via Inverse Kinematics for example). I do not see any way of even representing such data/services in a IoT chapter. We at least need to make this connection clear, independent of which WP the robotics task end up in. 
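As a purely illustrative sketch of the kind of linking meant here (every name below is invented; nothing like this exists in the current GEs):

    // A fragment of the robot's 3D structure, with joint state as part of the scene.
    var elbow    = { name: 'elbow',    angle: 0.0, children: [] };
    var shoulder = { name: 'shoulder', angle: 0.0, children: [elbow] };

    // Bind an IoT-side actuator handle to the scene node it drives, so that a command
    // is interpreted in the context of the 3D structure; an IK solver would derive
    // several such joint angles from a single target position for the end effector.
    function bindActuator(node, actuator) {
        actuator.onCommand = function (targetAngle) {
            node.angle = targetAngle;
        };
    }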
I actually do think that As I have mentioned to some of you before: The big robotics department of DFKI has been involved in the ROS development and if there is still expertize needed, I could easily set up the connection and also act as their representative (since I no FI-PPP better). They would be happy to be engaged, so please let me know if I should contact them. ---- Best, Philipp -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From kari.autio at gmail.com Thu Nov 28 09:20:06 2013 From: kari.autio at gmail.com (Kari Autio) Date: Thu, 28 Nov 2013 10:20:06 +0200 Subject: [Fiware-miwi] POI and AR demo videos In-Reply-To: <528C668C.9020108@cie.fi> References: <528C668C.9020108@cie.fi> Message-ID: Are the videos available somewhere in MiWi Wiki or is it the plan to add them? -kari Kari 040-1676545 2013/11/20 Arto Heikkinen > Hi all, > > Demo videos related to POI and AR GE's can be found behind the following > links: > > POI: http://www.youtube.com/watch?v=7jbXca_a_rY&feature=youtu.be > AR: http://www.youtube.com/watch?v=Xhpt1sr5Akw&feature=youtu.be > > Br, > Arto > > -- > _______________________________________________________ > Arto Heikkinen, Doctoral student, M.Sc. (Eng.) > Center for Internet Excellence (CIE) > P.O. BOX 1001, FIN-90014 University of Oulu, Finland > e-mail: arto.heikkinen at cie.fi, http://www.cie.fi > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristian.sons at dfki.de Thu Nov 28 11:03:03 2013 From: kristian.sons at dfki.de (Kristian Sons) Date: Thu, 28 Nov 2013 11:03:03 +0100 Subject: [Fiware-miwi] A Candidate AssetPipe: glTF aka. Collada JSON In-Reply-To: References: Message-ID: <529714D7.2090505@dfki.de> Hi Toni, > There?s still way to go with the glTF pipeline so for now (probably > for a month still) our recommendation for getting scenes simply from > e.g. Blender to Three.js is the Three JSON format (used in the master > branch of the city rendering repo & live demo), and CTM for an > optimized solution (but it requires custom setup to set materials). Am > very curious to hear what the DFKI gfx / XML3D folks think of this ? > some of you perhaps attended the SIGGRAPH sessions even?. yes, we were really active in this area. I presented together with Fabrice Robinet at the WebGL and COLLADA BOF at Siggraph last year. We also had discussions with the transmission format group at the Ninth AR Standards Community Meeting this year and at Web3D2013 in San Sebastian. Especially the discussions with Neil in San Sebastian were really good. 
From our experiences with the XML3DRepo paper we created a position paper: http://www.perey.com/ARStandards/[Klein]3dtf-position-paper_Ninth_AR_Standards_Meeting.pdf As also mentioned in the paper, I see three issues:
1. The minimum number of requests is given by the number of meshes. For scenes with many meshes, this will be the bottleneck. Additionally, this minimum can only be reached if the attribute data is interleaved and not indexed. I don't know the status for current exporters, but I guess most exporters would just support interleaving for very specific signatures.
2. A registry for compressors is way better than a fixed set of compression methods, and Khronos has a good practice of extension registries. However, I think that 3D data is so inhomogeneous that even this is not flexible enough. Instead we propose to use a "Code on demand" approach, which means that a fallback decoder implementation could be downloaded if the rendering system has no built-in (possibly HW-accelerated) decompressor for a data set. Neil liked this idea; I don't know if glTF has maybe moved in this direction.
3. Using the GL buffers as common ground for all mesh formats is a good idea. However, finding a suitable abstraction layer for materials and animations is way harder and not even solved in a high-abstraction format such as COLLADA.
BTW, it would probably be easy (and interesting) to implement a glTF plug-in for XML3D. Jan (in CC) is a PhD student in our group who started to design and implement a transmission format that addresses the requirements we collected in the position paper above. It is very generic and allows streaming structured binary data and decoding the data in parallel. Currently it's not possible to read chunks of a request if ArrayBuffers are requested (this is only possible for text requests). We identified this issue in the W3C CG together with Fraunhofer IGD (and some Audio guys who are interested in the same feature). If we had this, it would be perfect because everything could be in one request. In a next step, we will implement some encoders/decoders. We want to show that some of the decoding could be achieved in the vertex shader (e.g. decoding of quantized vertex attributes), while other data could be decoded using Web Workers/Xflow and/or WebCL. Neil is also very open to good technical solutions. Best regards, Kristian -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realität Campus, Geb. D 3 2, Raum 0.77 66123 Saarbrücken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775-2235 kristian.sons at dfki.de http://www.xml3d.org Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________ From Philipp.Slusallek at dfki.de Thu Nov 28 12:08:52 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Thu, 28 Nov 2013 12:08:52 +0100 Subject: [Fiware-miwi] POI and AR demo videos In-Reply-To: References: <528C668C.9020108@cie.fi> Message-ID: <52972444.9000809@dfki.de> Hi, It would be good to add them there as well (not only with a link to YouTube). Best, Philipp Am 28.11.2013 09:20, schrieb Kari Autio: > > Are the videos available somewhere in MiWi Wiki or is it the plan to add > them? 
-kari > > Kari > 040-1676545 > > > 2013/11/20 Arto Heikkinen > > > Hi all, > > Demo videos related to POI and AR GE's can be found behind the > following links: > > POI: http://www.youtube.com/watch?__v=7jbXca_a_rY&feature=youtu.be > > AR: http://www.youtube.com/watch?__v=Xhpt1sr5Akw&feature=youtu.be > > > Br, > Arto > > -- > _________________________________________________________ > Arto Heikkinen, Doctoral student, M.Sc. (Eng.) > Center for Internet Excellence (CIE) > P.O. BOX 1001, FIN-90014 University of Oulu, Finland > e-mail: arto.heikkinen at cie.fi , > http://www.cie.fi > > _________________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/__listinfo/fiware-miwi > > > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From arto.heikkinen at cie.fi Fri Nov 29 11:34:22 2013 From: arto.heikkinen at cie.fi (Arto Heikkinen) Date: Fri, 29 Nov 2013 12:34:22 +0200 Subject: [Fiware-miwi] Release 3 document templates Message-ID: <52986DAE.7000106@cie.fi> Hi all, I have created templates for the documents, you can find them from the following links: https://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/Template_GE_-_Installation_and_Administration_Guide https://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/Template_GE_-_User_and_Programmers_Guide https://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/Template_GE_-_Unit_Testing_Plan_and_Report Br, Arto From arto.heikkinen at cie.fi Fri Nov 29 11:55:52 2013 From: arto.heikkinen at cie.fi (Arto Heikkinen) Date: Fri, 29 Nov 2013 12:55:52 +0200 Subject: [Fiware-miwi] POI and AR demo videos In-Reply-To: <52972444.9000809@dfki.de> References: <528C668C.9020108@cie.fi> <52972444.9000809@dfki.de> Message-ID: <529872B8.2080207@cie.fi> Hi, Sure we can add the videos to the wiki. Christof, what would be a good page to add them to? -Arto On 28.11.2013 13:08, Philipp Slusallek wrote: > Hi, > > It would be good to add them there as well (not only with Link to > YouTube. > > Best, > > Philipp > > Am 28.11.2013 09:20, schrieb Kari Autio: >> >> Are the videos available somewhere in MiWi Wiki or is it the plan to add >> them? -kari >> >> Kari >> 040-1676545 >> >> >> 2013/11/20 Arto Heikkinen > > >> >> Hi all, >> >> Demo videos related to POI and AR GE's can be found behind the >> following links: >> >> POI: http://www.youtube.com/watch?__v=7jbXca_a_rY&feature=youtu.be >> >> AR: http://www.youtube.com/watch?__v=Xhpt1sr5Akw&feature=youtu.be >> >> >> Br, >> Arto >> >> -- >> _________________________________________________________ >> Arto Heikkinen, Doctoral student, M.Sc. (Eng.) >> Center for Internet Excellence (CIE) >> P.O. 
BOX 1001, FIN-90014 University of Oulu, Finland >> e-mail: arto.heikkinen at cie.fi , >> http://www.cie.fi >> >> _________________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/__listinfo/fiware-miwi >> >> >> >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> > >