From jarkko at cyberlightning.com Wed Oct 2 08:57:09 2013
From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila)
Date: Wed, 2 Oct 2013 09:57:09 +0300
Subject: [Fiware-miwi] Hangout meeting for Synchronization / KIARA
In-Reply-To: <5244183A.2050402@dfki.de>
References: <5242DC8D.4090705@dfki.de> <5244183A.2050402@dfki.de>
Message-ID: 

Hello,

Are we having a hangout link for this one yet?

On Thu, Sep 26, 2013 at 2:19 PM, Torsten Spieldenner <torsten.spieldenner at dfki.de> wrote:

> Hi,
>
> let's fix Wednesday 9 am then.
>
> On 9/25/2013 2:54 PM, "Lasse Öörni" wrote:
>
> Hello,
>>>
>>> Philipp won't be available apart from next Wednesday, right before the
>>> weekly meeting. We could set up a Hangout for 9:00 am then, if this is ok.
>>> Furthermore, I suggest that Sergiy, who is also in charge of
>>> implementing the KIARA part in FiVES, will join as well. He has already
>>> provided a quite detailed explanation of how KIARA and FiVES work
>>> together on the mailing list this morning, and he can provide information
>>> about some details that I may have less experience with during the
>>> Hangout as well.
>>>
>>> ~ Torsten
>>>
>> Hello,
>> Wednesday morning is fine for me.
>>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi

--
Jarkko Vatjus-Anttila
VP, Technology
Cyberlightning Ltd.
mobile. +358 405245142
email. jarkko at cyberlightning.com

Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now!
www.cybersli.de
www.cyberlightning.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From torsten.spieldenner at dfki.de Wed Oct 2 08:58:01 2013
From: torsten.spieldenner at dfki.de (Torsten Spieldenner)
Date: Wed, 02 Oct 2013 08:58:01 +0200
Subject: [Fiware-miwi] Hangout meeting for Synchronization / KIARA
In-Reply-To: 
References: <5242DC8D.4090705@dfki.de> <5244183A.2050402@dfki.de>
Message-ID: <524BC3F9.6080509@dfki.de>

Hello,

we have :) I've just opened the hangout here:
https://plus.google.com/hangouts/_/6d1f1f12bddc88cda34456dc4ecdce37fc6f75e0

On 10/2/2013 8:57 AM, Jarkko Vatjus-Anttila wrote:
> Hello,
>
> Are we having a hangout link for this one yet?
>
> On Thu, Sep 26, 2013 at 2:19 PM, Torsten Spieldenner <torsten.spieldenner at dfki.de> wrote:
>
>> Hi,
>>
>> let's fix Wednesday 9 am then.
>>
>> On 9/25/2013 2:54 PM, "Lasse Öörni" wrote:
>>
>> Hello,
>>>> Philipp won't be available apart from next Wednesday, right before the
>>>> weekly meeting. We could set up a Hangout for 9:00 am then, if this is ok.
>>>> Furthermore, I suggest that Sergiy, who is also in charge of
>>>> implementing the KIARA part in FiVES, will join as well. He has already
>>>> provided a quite detailed explanation of how KIARA and FiVES work
>>>> together on the mailing list this morning, and he can provide information
>>>> about some details that I may have less experience with during the
>>>> Hangout as well.
>>>>
>>>> ~ Torsten
>>>>
>>> Hello,
>>> Wednesday morning is fine for me.
>>>
>> _______________________________________________
>> Fiware-miwi mailing list
>> Fiware-miwi at lists.fi-ware.eu
>> https://lists.fi-ware.eu/listinfo/fiware-miwi

From mach at zhaw.ch Wed Oct 2 09:08:06 2013
From: mach at zhaw.ch (Marti Christof (mach))
Date: Wed, 2 Oct 2013 07:08:06 +0000
Subject: [Fiware-miwi] WP13 weekly meeting
Message-ID: <4CD987B6-29C9-4F96-AE54-03A3D955A99B@zhaw.ch>

Hi

I prepared the agenda/minutes for today's weekly meeting:
https://docs.google.com/document/d/1puonHlBZhu1AUtt1qHq5RoRt0DNthRWY_0Fom72yD2k/edit

Because we hit the Google Hangout limit of 10 participants last week, I was looking for a conferencing system that supports enough participants and also has local access numbers in Finland. We will try freeconferencecall.com (up to 96 participants).

Conference access code: 345446
Dial-in numbers:
- Finland +358 (0) 9 74790024
- Germany +49 (0) 30 255550300
- Switzerland +41 (0) 44 595 90 80
- Spain +34 911 19 67 50
Full list of international dial-in numbers:
- https://www.freeconferencecall.com/free-international-conference-call/internationalphonenumbers.aspx

If required we can also use the Web-Conference option for screen sharing.

See you
- Christof
----
Christof Marti
InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch
Institute of Applied Information Technology - InIT
Zurich University of Applied Sciences - ZHAW
School of Engineering
P.O.Box, CH-8401 Winterthur
Office: TD O3.18, Obere Kirchgasse 2
Phone: +41 58 934 70 63
Mail: mach at zhaw.ch
Skype: christof-marti

From lasse.oorni at ludocraft.com Wed Oct 2 17:36:32 2013
From: lasse.oorni at ludocraft.com (=?iso-8859-1?Q?=22Lasse_=D6=F6rni=22?=)
Date: Wed, 2 Oct 2013 18:36:32 +0300
Subject: [Fiware-miwi] First draft of Synchronization GE description
Message-ID: 

Hi,

I did some cleaning up of the Synchronization GE description; it should now be ready for first draft review.
http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.ArchitectureDescription.MiWi.Synchronization

--
Lasse Öörni
Game Programmer
LudoCraft Ltd.

From lasse.oorni at ludocraft.com Wed Oct 2 17:38:18 2013
From: lasse.oorni at ludocraft.com (=?iso-8859-1?Q?=22Lasse_=D6=F6rni=22?=)
Date: Wed, 2 Oct 2013 18:38:18 +0300
Subject: [Fiware-miwi] First draft of Virtual Characters GE description
Message-ID: <82f0c467cfbe9ccf1c719bbca910e774.squirrel@urho.ludocraft.com>

The Virtual Characters GE description also received some minor cleanup today and should be ready for review.

http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.ArchitectureDescription.MiWi.VirtualCharacters

--
Lasse Öörni
Game Programmer
LudoCraft Ltd.

From jonne at adminotech.com Thu Oct 3 15:41:17 2013
From: jonne at adminotech.com (Jonne Nauha)
Date: Thu, 3 Oct 2013 16:41:17 +0300
Subject: [Fiware-miwi] First drafts of Adminotech GE descriptions available for review
Message-ID: 

I've ported all our GE pages from the MIWI wiki to the general one. They are ready for review, although the table seems to have "?" for the reviewer for two of them. Lasse can check out CloudRendering today or on Monday.

http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.ArchitectureDescription.MiWi.CloudRendering
http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.ArchitectureDescription.MiWi.2D-UiInput
http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.ArchitectureDescription.MiWi.InterfaceDesigner

Best regards,
Jonne Nauha
Meshmoon developer at Adminotech Ltd.
www.meshmoon.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mach at zhaw.ch Thu Oct 3 21:42:57 2013
From: mach at zhaw.ch (Marti Christof (mach))
Date: Thu, 3 Oct 2013 19:42:57 +0000
Subject: [Fiware-miwi] Review process (Reviewers!!!)
 for OpenSpecification
Message-ID: 

Hi

The cockpit for the OpenSpecification review is set up, directly on the "WP13 Integration" wiki page:
http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration

GE/document owners, please check the current status of your document.

We are urgently looking for internal reviewers for the following GEs:
* GISDataProvider
* POIDataProvider
* InterfaceDesigner
* 2D-UI (incl. Input Devices)
Volunteers? Please speak up. Otherwise I will pick somebody "randomly" at Friday noon.

How to review / criteria:
It is a little bit tricky to review a wiki document. I have therefore prepared and linked a Google Docs document for each GE, which can be used to write the feedback and to mark it as resolved when fixed (e.g. by using comments). The Google Doc also contains the criteria to check the document against (Structure, Relevance, Accuracy, Completeness, Comprehensibility, Neutrality, Other).

Reviews should be done (and pages fixed) by Monday, 7 October at the latest. (It's a tight schedule. Please try hard to meet the deadlines.)

Thanks for your contributions and support.

Cheers,
- Christof
----
InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch
Institute of Applied Information Technology - InIT
Zurich University of Applied Sciences - ZHAW
School of Engineering
P.O.Box, CH-8401 Winterthur
Office: TD O3.18, Obere Kirchgasse 2
Phone: +41 58 934 70 63
Mail: mach at zhaw.ch
Skype: christof-marti
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lasse.oorni at ludocraft.com Fri Oct 4 12:48:41 2013
From: lasse.oorni at ludocraft.com (=?iso-8859-1?Q?=22Lasse_=D6=F6rni=22?=)
Date: Fri, 4 Oct 2013 13:48:41 +0300
Subject: [Fiware-miwi] First drafts of Adminotech GE descriptions available for review
In-Reply-To: 
References: 
Message-ID: <8858303862a70aab66bcee781e98e0bc.squirrel@urho.ludocraft.com>

> I've ported all our GE pages from the MIWI wiki to the general one.
> They are ready for review, although the table seems to have "?" for the reviewer for two of them. Lasse can check out CloudRendering today or on Monday.
>
> http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.ArchitectureDescription.MiWi.CloudRendering

Hi,

my review of CloudRendering is complete, in the doc on the integration page:
http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration

--
Lasse Öörni
Game Programmer
LudoCraft Ltd.

From erno at playsign.net Fri Oct 4 15:28:43 2013
From: erno at playsign.net (Erno Kuusela)
Date: Fri, 4 Oct 2013 16:28:43 +0300
Subject: [Fiware-miwi] 2D-3D capture review done
Message-ID: <20131004132843.GZ62563@ee.oulu.fi>

As per subject.

From mach at zhaw.ch Fri Oct 4 19:28:03 2013
From: mach at zhaw.ch (Marti Christof (mach))
Date: Fri, 4 Oct 2013 17:28:03 +0000
Subject: [Fiware-miwi] Review process (Reviewers!!!) for OpenSpecification
In-Reply-To: 
References: 
Message-ID: 

Hi everybody

Thanks to all who have already finished or are working on the reviews.

Because I got no volunteers for the missing slots, I have now assigned partners/people to these documents:
* GISDataProvider -> CIE / Arto Heikkinen, Antti Karhu
* POIDataProvider -> Cyberlightning / Juha Hyvärinen, Sami Jylkkä
* InterfaceDesigner -> DFKI / Felix Klein, Torsten Spieldenner
* 2D-UI (incl. Input Devices) -> Playsign / Toni Alatalo, Erno Kuusela

I added two names for redundancy. If somebody cannot do it (missing skills, not available at the moment, ...) feel free to replace the name with somebody capable.

Cheers,
- Christof

On 03.10.2013 at 21:42, Christof Marti wrote:

Hi

The cockpit for the OpenSpecification review is set up, directly on the "WP13 Integration" wiki page:
http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration

GE/document owners, please check the current status of your document.
We are urgently looking for internal reviewers for the following GEs:
* GISDataProvider
* POIDataProvider
* InterfaceDesigner
* 2D-UI (incl. Input Devices)
Volunteers? Please speak up. Otherwise I will pick somebody "randomly" at Friday noon.

How to review / criteria:
It is a little bit tricky to review a wiki document. I have therefore prepared and linked a Google Docs document for each GE, which can be used to write the feedback and to mark it as resolved when fixed (e.g. by using comments). The Google Doc also contains the criteria to check the document against (Structure, Relevance, Accuracy, Completeness, Comprehensibility, Neutrality, Other).

Reviews should be done (and pages fixed) by Monday, 7 October at the latest. (It's a tight schedule. Please try hard to meet the deadlines.)

Thanks for your contributions and support.

Cheers,
- Christof
----
InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch
Institute of Applied Information Technology - InIT
Zurich University of Applied Sciences - ZHAW
School of Engineering
P.O.Box, CH-8401 Winterthur
Office: TD O3.18, Obere Kirchgasse 2
Phone: +41 58 934 70 63
Mail: mach at zhaw.ch
Skype: christof-marti
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jonne at adminotech.com Mon Oct 7 00:22:41 2013
From: jonne at adminotech.com (Jonne Nauha)
Date: Mon, 7 Oct 2013 01:22:41 +0300
Subject: [Fiware-miwi] VirtualCharacters and RealVirtualInteraction GEs reviewed
Message-ID: 

VirtualCharacters
https://docs.google.com/a/adminotech.com/document/d/1zzerQFnIu0ps4SFE_EY01faC5ky6ue8oJUplg49LC94/edit#heading=h.rygstg1n944n

RealVirtualInteraction
https://docs.google.com/a/adminotech.com/document/d/1YXfqfSfExPLC3eVPkbHYtCx3FjKi_YmrAuGbTAJ_sHo/edit#

The RealVirtualInteraction one was tricky. It's very detailed and complete, but it left me wondering what the GE is actually implementing :) I don't know if this will be the case for the official reviewers, but I gave my view on it in the review.
Best regards,
Jonne Nauha
Meshmoon developer at Adminotech Ltd.
www.meshmoon.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tomi.sarni at cyberlightning.com Mon Oct 7 08:20:26 2013
From: tomi.sarni at cyberlightning.com (Tomi Sarni)
Date: Mon, 7 Oct 2013 09:20:26 +0300
Subject: [Fiware-miwi] VirtualCharacters and RealVirtualInteraction GEs reviewed
In-Reply-To: 
References: 
Message-ID: 

Good points. The point of the RealVirtualInteraction GE is to offer developers access to physical devices; the GE wiki currently describes the set of standards and components that need to be put together in the backend, and at the real-world device level, in order to accomplish this. The API section at the end of the wiki is for developers, but that part will be filled in last, once the GE is complete. A concrete use case is currently still missing within MiWi, but I am sure this will be figured out soon, so that work on a practical use-case implementation can start. Of course, this practical co-operation between architecture development and application development should produce a generally understandable API specification.

Tomi

On Mon, Oct 7, 2013 at 1:22 AM, Jonne Nauha wrote:

> VirtualCharacters
> https://docs.google.com/a/adminotech.com/document/d/1zzerQFnIu0ps4SFE_EY01faC5ky6ue8oJUplg49LC94/edit#heading=h.rygstg1n944n
>
> RealVirtualInteraction
> https://docs.google.com/a/adminotech.com/document/d/1YXfqfSfExPLC3eVPkbHYtCx3FjKi_YmrAuGbTAJ_sHo/edit#
>
> The real virtual interaction was a tricky one. It's very detailed and
> complete but left me wondering what the GE is actually implementing :) I
> don't know if this will be the case for the official reviewers but I gave
> my view on it in the review.
>
> Best regards,
> Jonne Nauha
> Meshmoon developer at Adminotech Ltd.
> www.meshmoon.com
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From toni at playsign.net Mon Oct 7 09:01:12 2013
From: toni at playsign.net (Toni Alatalo)
Date: Mon, 7 Oct 2013 10:01:12 +0300
Subject: [Fiware-miwi] Review process (Reviewers!!!) for OpenSpecification
In-Reply-To: 
References: 
Message-ID: <7AAFB7C0-BA9F-4149-A59C-D7D5358ACAB7@playsign.net>

On Oct 4, 2013, at 8:28 PM, Marti Christof (mach) wrote:
> Because I got no volunteers for the missing slots I have now assigned partners / people to these documents:
> InterfaceDesigner -> DFKI / Felix Klein, Torsten Spieldenner
> 2D-UI (incl. Input Devices) -> Playsign / Toni Alatalo, Erno Kuusela

These were the other way around on the wiki - I went by the wiki table and reviewed InterfaceDesigner. Cvetan, heads up: comments are in
https://docs.google.com/document/d/1nMpV5HC-bF3I6DpXiUP6Ic3j2vlP1is1FfmwIZNscC4/edit#

> - Christof

~Toni

> On 03.10.2013 at 21:42, Christof Marti wrote:
>
>> Hi
>>
>> The cockpit for the OpenSpecification review is set up, directly on the "WP13 Integration" wiki page:
>> http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration
>>
>> GE/document owners, please check the current status of your document.
>>
>> We are urgently looking for internal reviewers for the following GEs:
>> GISDataProvider
>> POIDataProvider
>> InterfaceDesigner
>> 2D-UI (incl. Input Devices)
>> Volunteers? Please speak up. Otherwise I will pick somebody "randomly" at Friday noon.
>>
>> How to review / criteria:
>> It is a little bit tricky to review a wiki document.
>> I therefore prepared and linked a Google Docs document for each GE, which can be used to write the feedback and mark it as resolved when fixed (e.g. by using comments).
>> The Google Doc also contains the criteria to check the document against (Structure, Relevance, Accuracy, Completeness, Comprehensibility, Neutrality, Other).
>>
>> Reviews should be done (and pages fixed) by Monday, 7 October at the latest. (It's a tight schedule. Please try hard to meet the deadlines.)
>> Thanks for your contributions and support.
>>
>> Cheers,
>> - Christof
>> ----
>> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch
>> Institute of Applied Information Technology - InIT
>> Zurich University of Applied Sciences - ZHAW
>> School of Engineering
>> P.O.Box, CH-8401 Winterthur
>> Office: TD O3.18, Obere Kirchgasse 2
>> Phone: +41 58 934 70 63
>> Mail: mach at zhaw.ch
>> Skype: christof-marti
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arto.heikkinen at cie.fi Mon Oct 7 09:04:48 2013
From: arto.heikkinen at cie.fi (Arto Heikkinen)
Date: Mon, 07 Oct 2013 10:04:48 +0300
Subject: [Fiware-miwi] GISDataProvider GE reviewed
Message-ID: <52525D10.4030308@cie.fi>

Hi all,

The review for the GISDataProvider GE can be found at:
https://docs.google.com/document/d/1FeLcsjf15SQBvgukLV8jvNoEq3NwM9Yk_SWd8Ph63x8/edit#

Br,
Arto Heikkinen

--
_______________________________________________________
Arto Heikkinen, Doctoral student, M.Sc. (Eng.)
Center for Internet Excellence (CIE)
P.O. BOX 1001, FIN-90014 University of Oulu, Finland
e-mail: arto.heikkinen at cie.fi, http://www.cie.fi

From sami.jylkka at cyberlightning.com Mon Oct 7 10:25:01 2013
From: sami.jylkka at cyberlightning.com (Sami J)
Date: Mon, 7 Oct 2013 11:25:01 +0300
Subject: [Fiware-miwi] FIWARE.ArchitectureDescription.MiWi.POIDataProvider reviewed
Message-ID: 

Comments can be found in the Google Docs document.

Br,
Sami
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lasse.oorni at ludocraft.com Mon Oct 7 11:27:42 2013
From: lasse.oorni at ludocraft.com (=?iso-8859-1?Q?=22Lasse_=D6=F6rni=22?=)
Date: Mon, 7 Oct 2013 12:27:42 +0300
Subject: [Fiware-miwi] XML3D GE review
Message-ID: 

Hi,

my comments on the XML3D GE are now in the review document. Perhaps Jarkko has something to add?

https://docs.google.com/document/d/1EpR4K965_pO3UL71BRED9xcsK8zEIf_8Bye7IN9ZUGE

--
Lasse Öörni
Game Programmer
LudoCraft Ltd.

From torsten.spieldenner at dfki.de Mon Oct 7 17:57:20 2013
From: torsten.spieldenner at dfki.de (Torsten Spieldenner)
Date: Mon, 07 Oct 2013 17:57:20 +0200
Subject: [Fiware-miwi] FIWARE.ArchitectureDescription.MiWi.Synchronization reviewed
Message-ID: <5252D9E0.3020102@dfki.de>

Comments can be found in the respective Word document.

Best,
Torsten

From toni at playsign.net Mon Oct 7 21:58:17 2013
From: toni at playsign.net (toni at playsign.net)
Date: Mon, 7 Oct 2013 19:58:17 +0000
Subject: [Fiware-miwi] XML3D GE review
In-Reply-To: 
References: 
Message-ID: <20131007200302.B976B18003E@dionysos.netplaza.fi>

I added one reply comment there (we've also discussed at times how the synchronization GE probably sets additional requirements for the data, like local/replicated flags etc.). Also a question for the DFKI guys about the possibility of using the same scene for multiple view areas (it may be impossible due to how WebGL works, I'm not sure, hence I asked). Do please take a look too, Jarkko. And anyone else of course - I think this document describes XML3D very nicely.

~Toni

From: "Lasse Öörni"
Sent: Monday, October 7, 2013 12:27 PM
To: fiware-miwi at lists.fi-ware.eu; Philipp Slusallek; Torsten Spieldenner; mach at zhaw.ch

Hi,

my comments on the XML3D GE are now in the review document. Perhaps Jarkko has something to add?

https://docs.google.com/document/d/1EpR4K965_pO3UL71BRED9xcsK8zEIf_8Bye7IN9ZUGE

--
Lasse Öörni
Game Programmer
LudoCraft Ltd.
_______________________________________________
Fiware-miwi mailing list
Fiware-miwi at lists.fi-ware.eu
https://lists.fi-ware.eu/listinfo/fiware-miwi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From toni at playsign.net Wed Oct 9 08:19:07 2013
From: toni at playsign.net (Toni Alatalo)
Date: Wed, 9 Oct 2013 09:19:07 +0300
Subject: [Fiware-miwi] the entity system
Message-ID: <5230D17F-07DE-45C5-929B-8067240DC0C7@playsign.net>

Hi,

this post is in a way a general observation on the reviews and the overall situation - not about any particular current GE, but more to point out that we are kind of missing one. I don't think it is a problem for the current spec reviews, so I postponed this until now, when those are already done. We can perhaps talk about this in the weekly, so I'm posting quickly before it.

The question is about the entity system. Who 'owns' it? How do we work on it? It is assumed by many GEs and is considered the central mechanism via which we integrate parts.

We discussed it in the Oulu meet last week and later on IRC with Lasse. He made a point about how it can be implicitly thought of as a part of the Synchronization GE, and it is true that a key aspect of the entity system is to provide the networked multiuser functionality - in the sense that you basically only need it for networked apps; a single-user app also works just by using e.g. the current xml3d.js or three.js directly. That's not completely true, though, as the way it integrates physics and scripting (for example in native Tundra) can be useful for non-networked apps too.

Besides synchronization, the entity system is obviously central for the 3D UI epic area. Indeed, the XML3D GE describes a data model and a way to use the DOM which is similar to how we've been using the entity system in Tundra. Basically they are the same thing, as we discovered when specifying the mapping between them.
For example, this is how to create a mesh component with XML3D - example from the GE spec:

// Create a new mesh element
var newMesh = XML3D.createElement("mesh");
newMesh.setAttribute("src", "teapot.xml#meshData");

I was also originally thinking that the DOM is the API, so we don't need to create a new API for the entity system. Jonne however made, I think, a good counterargument a while ago: using the generic DOM functions like createElement and setAttribute, with string attributes to identify what you want to work on, is not really a nice API compared to something like this:

// Create a new mesh component:
var mymesh = new x.ec.Mesh();
mymesh.src = "teapot.xml#meshData"; // this can validate the attribute etc.
// - I don't know if that's possible with plain DOM setAttribute

We have that kind of implementation of the realXtend entity system in the pre-existing incarnations of WebTundra: in Chiru-WebClient and Adminotech's WebRocket.

At Playsign we've actually worked for a few days now with Chiru-WebClient, as we ended up testing the scene loading code when starting work on large-scene scalability (paging, LoDs etc.) and encountered problems both with Three's native JSON and with the Collada pipelines from Blender to Three.js - that's a different story, however. Anyhow, it has been a nice exercise, as it has taught us concretely how CIE/Chiru's implementation of this kind of entity system (by Toni Dahl, now at Cyberlightning AFAIK) works. The EC model implementation in that codebase is at
https://github.com/Chiru/Chiru-WebClient/tree/master/src/ecmodel .
I think it is well documented too, as Toni D. wrote his diploma thesis and a conference article about it - I haven't had access to those, though - perhaps Jarkko or someone could provide the docs for us? The similar implementation on the Adminotech side has API docs at
http://doc.meshmoon.com/doxygen/webrocket/

About other GEs, the Interface Designer (aka.
scene builder) explicitly specifies using the entity system as the central model for the editor. With other GEs, I suspect we will encounter cases where it would be nice for some GE to implement new Component types - that has been the common way for many plugins to extend realXtend so far. On the C++ side we have the IComponent interface for that, http://doc.meshmoon.com/doxygen/webrocket/classes/tundra.IComponent.html . WebRocket's version of that is documented at http://doc.meshmoon.com/doxygen/webrocket/classes/tundra.IComponent.html and in Chiru's version, for example, the implementation of the Mesh component shows how to do it: https://github.com/Chiru/Chiru-WebClient/blob/master/src/ecmodel/EC_Mesh.js

But again, I think the question is: how do we organize the development related to this? Who 'owns' it, in the sense of being responsible but also having authority over the design? Is this driven by Synchronization, or somehow else, if sync just focuses on the network messaging and assumes the entity system to just be there, like all the other GEs do too?

talk to you soon,
~Toni
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From toni at playsign.net Wed Oct 9 08:25:22 2013
From: toni at playsign.net (Toni Alatalo)
Date: Wed, 9 Oct 2013 09:25:22 +0300
Subject: [Fiware-miwi] the entity system
In-Reply-To: <5230D17F-07DE-45C5-929B-8067240DC0C7@playsign.net>
References: <5230D17F-07DE-45C5-929B-8067240DC0C7@playsign.net>
Message-ID: 

Oops, one mispaste: the doc of the C++ IComponent is at http://doc.meshmoon.com/doxygen/class_i_component.html

On Oct 9, 2013, at 9:19 AM, Toni Alatalo wrote:

> extend realXtend so far. On the C++ side we have the IComponent interface for that, http://doc.meshmoon.com/doxygen/webrocket/classes/tundra.IComponent.html .
WebRocket version of that is documented at http://doc.meshmoon.com/doxygen/webrocket/classes/tundra.IComponent.html and in Chiru's version for example the implementation of the Mesh component shows how to do it: https://github.com/Chiru/Chiru-WebClient/blob/master/src/ecmodel/EC_Mesh.js
>
> But again, I think the question is: how do we organize the development related to this? Who 'owns' it, in the sense of being responsible but also having authority over the design? Is this driven by Synchronization, or somehow else, if sync just focuses on the network messaging and assumes the entity system to just be there, like all the other GEs do too?
>
> talk to you soon,
> ~Toni
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mach at zhaw.ch Wed Oct 9 09:12:15 2013
From: mach at zhaw.ch (Christof Marti)
Date: Wed, 9 Oct 2013 09:12:15 +0200
Subject: [Fiware-miwi] WP13 weekly meeting
Message-ID: 

Hi

I prepared the agenda/minutes for today's weekly meeting:
https://docs.google.com/document/d/1aqOcoUr-lnQlQ42GZIjUfyP-Iurh28o67SQeZKHpyGo/edit

Because we hit the Google Hangout limit of 10 participants last week, I was looking for a conferencing system that supports enough participants and also has local access numbers in Finland. We will try freeconferencecall.com (up to 96 participants).

Conference access code: 345446
Dial-in numbers:
- Finland +358 (0) 9 74790024
- Germany +49 (0) 30 255550300
- Switzerland +41 (0) 44 595 90 80
- Spain +34 911 19 67 50
Full list of international dial-in numbers:
- https://www.freeconferencecall.com/free-international-conference-call/internationalphonenumbers.aspx

If required we can also use the Web-Conference option for screen sharing.
See you
- Christof
----
InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch
Institute of Applied Information Technology - InIT
Zurich University of Applied Sciences - ZHAW
School of Engineering
P.O.Box, CH-8401 Winterthur
Office: TD O3.18, Obere Kirchgasse 2
Phone: +41 58 934 70 63
Mail: mach at zhaw.ch
Skype: christof-marti

From torsten.spieldenner at dfki.de Wed Oct 9 09:20:08 2013
From: torsten.spieldenner at dfki.de (Torsten Spieldenner)
Date: Wed, 09 Oct 2013 09:20:08 +0200
Subject: [Fiware-miwi] XML3D GE review
In-Reply-To: <20131007200302.B976B18003E@dionysos.netplaza.fi>
References: <20131007200302.B976B18003E@dionysos.netplaza.fi>
Message-ID: <525503A8.1030705@dfki.de>

Hello,

On 10/7/2013 9:58 PM, toni at playsign.net wrote:
> I added one reply-comment there (we've also discussed it sometimes, how the synchronization GE probably sets additional requirements for the data, like local/replicated flags etc).

If I understand correctly that the question is whether additional flags can be added to XML3D nodes, then the answer is yes: this is possible. You can add whatever node attributes or child nodes you want to an XML3D node. If they are not part of the XML3D spec, they will be ignored when processed by the renderer, but they can be used by any other code via DOM operations.

> Also a question for the DFKI guys about possibility of using the same scene for multiple view areas (may be impossible due to how webgl works, I'm not sure, hence asked).

WebGL is the limitation here, indeed, as it only allows one (active) viewport per renderer instance. What is possible is to have several XML3D elements and clone the scene graph in each of the XML3D elements. To keep the memory overhead low, one could store shader, transformation and vertex data for meshes in external files and reference them from within the scene graph.
Experiments we have done have shown that, compared to the actual scene graph (the tree of group and mesh nodes), this externalized data makes up the majority of all data. But still: group nodes would have to be duplicated for every XML3D element, and scene graph states synchronized between the different XML3D elements.

~Torsten

From torsten.spieldenner at dfki.de Wed Oct 9 09:28:20 2013
From: torsten.spieldenner at dfki.de (Torsten Spieldenner)
Date: Wed, 09 Oct 2013 09:28:20 +0200
Subject: [Fiware-miwi] FiVES Synchronisation server
Message-ID: <52550594.6070403@dfki.de>

Hello,

the code of our Synchronisation Server approach is now available at:
https://github.com/rryk/FiVES

This includes the implementation of the server and plugins, as well as the example web client. The documentation definitely needs more work, but this will be addressed and added to the GitHub wiki in the next days.

To get an idea of how the client communicates with the server, the fives_communicator classes of the web client may be interesting, whereas the scene manager takes care of assembling the XML3D scene, using the resource manager to retrieve externally stored XML3D files.

A new release in the master branch that fixes some bugs from the last release will be made today or early tomorrow.

~Torsten

From kristian.sons at dfki.de Wed Oct 9 09:52:33 2013
From: kristian.sons at dfki.de (Kristian Sons)
Date: Wed, 09 Oct 2013 09:52:33 +0200
Subject: [Fiware-miwi] the entity system
In-Reply-To: <5230D17F-07DE-45C5-929B-8067240DC0C7@playsign.net>
References: <5230D17F-07DE-45C5-929B-8067240DC0C7@playsign.net>
Message-ID: <52550B41.20103@dfki.de>

Hi,

> // Create a new mesh component:
> var mymesh = new x.ec.Mesh();
> mymesh.src = "teapot.xml#meshData"; // this can validate the attribute
> etc. - i don't know if that's possible with plain DOM setAttribute

We have that for all simple data types in XML3D. The general semantics are adapted from HTML and SVG: you can set whatever values you want using the generic string-based DOM API.
If the string does not evaluate to a valid 'typed' value, the internal typed value is reset to the default value, e.g. (browser console session):

var v = document.querySelector("view");
undefined
v.fieldOfView = 0.5
0.5
v.getAttribute("fieldOfView")
"0.5"
v.setAttribute("fieldOfView", "asd")
undefined
v.fieldOfView
0.785398

For complex types (e.g. XML3DVec), the behavior is similar, but it's not possible to set the value directly (the DOM element 'owns' the reference to the object):

var t = document.querySelector("transform");
undefined
t.translation = new XML3DVec3(0,0,10);
Error: Can't set transform::translation: it's readonly
t.translation.set(new XML3DVec3(0,0,10));
undefined
t.getAttribute("translation")
"0 0 10"
t.translation.set(0,20,0);
undefined
t.getAttribute("translation")
"0 20 0"

For boolean values and enumerations there are some differences in the behavior. Again, we analyzed and copied the behavior of similar APIs in HTML and SVG DOM elements.

Best,
Kristian

--
_______________________________________________________________________________

Kristian Sons
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI
Agenten und Simulierte Realität
Campus, Geb. D 3 2, Raum 0.77
66123 Saarbrücken, Germany
Phone: +49 681 85775-3833
Phone: +49 681 302-3833
Fax: +49 681 85775-2235
kristian.sons at dfki.de
http://www.xml3d.org

Geschäftsführung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
Amtsgericht Kaiserslautern, HRB 2313
_______________________________________________________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
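The fallback-to-default semantics described above can be mimicked in plain JavaScript. The sketch below is a stand-in only - `ViewStandIn` and `DEFAULT_FOV` are illustrative names, not xml3d.js API - showing a typed accessor backed by a generic string attribute store:

```javascript
// Stand-in sketch of the XML3D attribute semantics described above.
// Not xml3d.js itself: class and constant names are illustrative.
class ViewStandIn {
  static DEFAULT_FOV = 0.785398; // assumed default field of view (~45 degrees)

  constructor() {
    this._attrs = new Map(); // generic string attribute store
  }

  // The generic DOM-style API accepts any string value.
  setAttribute(name, value) {
    this._attrs.set(name, String(value));
  }

  getAttribute(name) {
    return this._attrs.has(name) ? this._attrs.get(name) : null;
  }

  // The typed accessor falls back to the default whenever the
  // stored string does not evaluate to a valid number.
  get fieldOfView() {
    const parsed = parseFloat(this._attrs.get("fieldOfView"));
    return Number.isNaN(parsed) ? ViewStandIn.DEFAULT_FOV : parsed;
  }

  set fieldOfView(v) {
    this._attrs.set("fieldOfView", String(v));
  }
}

const view = new ViewStandIn();
view.fieldOfView = 0.5;
console.log(view.getAttribute("fieldOfView")); // "0.5"
view.setAttribute("fieldOfView", "asd");       // not a valid number
console.log(view.fieldOfView);                 // 0.785398 (reset to default)
```

The design choice mirrors what Kristian describes: the string store always accepts the write, and validation happens lazily on the typed read.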
URL:

From kristian.sons at dfki.de Wed Oct 9 10:03:40 2013 From: kristian.sons at dfki.de (Kristian Sons) Date: Wed, 09 Oct 2013 10:03:40 +0200 Subject: [Fiware-miwi] XML3D GE review In-Reply-To: <525503A8.1030705@dfki.de> References: <20131007200302.B976B18003E@dionysos.netplaza.fi> <525503A8.1030705@dfki.de> Message-ID: <52550DDC.2050301@dfki.de>

On 09.10.2013 09:20, Torsten Spieldenner wrote:

>> Also a question for the DFKI guys about the possibility of using the same scene for multiple view areas (may be impossible due to how WebGL works, I'm not sure, hence asked).
>
> WebGL is the limitation here, indeed, as it only allows one (active) viewport per renderer instance. What is possible is to have several XML3D elements and clone the scene graph in each of the XML3D elements. In order to keep memory overhead low here, one could store shader, transformation and vertex data for meshes in external files and reference them from within the scene graph. Experiments we have done have shown that, compared to the actual scene graph (the tree of group and mesh nodes), this externalized data makes up the majority of all data. But still: group nodes would have to be duplicated for every XML3D element, and scene graph states synchronized between different XML3D elements.

To go a little more into detail: all WebGL resources are bound to one context and there is no mechanism to share them across contexts. There is an ongoing discussion about a sharing mechanism for exactly this use case, but also about being able to run e.g. passes in Worker Threads. In XML3D we can share data elements between multiple XML3D contexts. The WebGL contexts then do not share the buffers, but the TypedArrays the buffers get constructed from. All intermediate results from Xflow graphs get shared as well. As Torsten explained, it is then required to synchronize the group/mesh structure of the two scenes, but all data for meshes and shaders can be reused.
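[Editor's note] The TypedArray sharing described above can be sketched without any real WebGL: two contexts cannot share GPU buffers, but both can construct their buffers from the same source TypedArray, so the vertex data lives only once on the JavaScript heap. makeContext and uploadBuffer below are hypothetical stand-ins for a per-canvas renderer context, not XML3D or WebGL API.

```javascript
// One shared source array, e.g. mesh positions computed once by an Xflow graph.
const positions = new Float32Array([0, 0, 0,  1, 0, 0,  0, 1, 0]);

// Hypothetical stand-in for a per-canvas renderer context: "uploading" a
// buffer here just records the TypedArray it was constructed from.
function makeContext(name) {
  const buffers = [];
  return {
    name,
    uploadBuffer(typedArray) { buffers.push(typedArray); return typedArray; },
    buffers,
  };
}

const ctxA = makeContext("canvas A");
const ctxB = makeContext("canvas B");

// Each context builds its own buffer object from the same TypedArray...
const bufA = ctxA.uploadBuffer(positions);
const bufB = ctxB.uploadBuffer(positions);

// ...so the underlying storage is shared rather than duplicated, and a
// change to the source is visible through both views.
positions[0] = 42;
console.log(bufA.buffer === bufB.buffer); // true
console.log(bufA[0], bufB[0]);            // 42 42
```

In a real setup each context would still upload its own GPU-side copy; what is saved is the CPU-side duplication of vertex data and intermediate Xflow results.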
Having said that, we didn't really test this to a large extent ;) An alternative approach would be to use our new Renderpipeline mechanism to render the scene from a different view to an FBO, read back the content and store it in a canvas or image. Though this approach is better memory-wise, the readback is very expensive in terms of run-time.

Best, Kristian

-- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realität Campus, Geb. D 3 2, Raum 0.77 66123 Saarbrücken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775-2235 kristian.sons at dfki.de http://www.xml3d.org Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________

From torsten.spieldenner at dfki.de Wed Oct 9 11:32:43 2013 From: torsten.spieldenner at dfki.de (Torsten Spieldenner) Date: Wed, 09 Oct 2013 11:32:43 +0200 Subject: [Fiware-miwi] 2DUI Reviewed Message-ID: <525522BB.7060304@dfki.de>

Hello, I have just finished the review of the 2D UI GE description: https://docs.google.com/document/d/11ge4ZhyjwWtLjADP7PlGZdeIRrPAL22U64WtInMk-rg/edit?pli=1# I am not sure if all links to the template and GE OpenSpecification pages are correct here, because accessing the Architecture Description from the list given here: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/WP13_Integration just leads to a copy of the internal draft page.
Best, Torsten

From mach at zhaw.ch Wed Oct 9 13:49:47 2013 From: mach at zhaw.ch (Christof Marti) Date: Wed, 9 Oct 2013 13:49:47 +0200 Subject: [Fiware-miwi] WP13 meeting wrapup & Open Specification documents In-Reply-To: <68b92679854c4b11ba739c8a0fbe8d03@SRV-MAIL-001.zhaw.ch> References: <68b92679854c4b11ba739c8a0fbe8d03@SRV-MAIL-001.zhaw.ch> Message-ID:

Hi, I updated the OpenSpecification wiki pages as discussed in today's meeting:
- renamed XML3D to 3D-UI
- renamed 2D-UiInput to 2D-UI
- added template pages for the DisplayAsAService GE

I also added and linked the OpenAPI Details pages (FIWARE.OpenSpecification.Details.) for each GE. This page is (like the Architecture) included in the OpenSpecification page and is used for references to the API specifications. To review, please use the OpenSpecification link (first in the list for each GE), which includes all the other pages (Architecture, API Details, Common Glossary). To edit the subpages you can use the direct links. For delivery, all the yellow "REMARKS" sections should be replaced with content. For some documents I updated the structure where they did not follow the structure of the example pages. For copyright links please use references to the partner wiki page and not external direct URLs (you can add direct URLs and logos on the partner page). Example: [[CYBER | Cyberlightning Ltd]] For your company's wiki-page name, check the reference on the partner page (https://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FI-WARE_Project_Partners). Usually it is the short name from the DoW. For those who have no partner page, please go to the above overview page, copy the content from another partner page (e.g. Telefonica) to your new page and edit the description. Add your logo and a link to your website. Please also set the correct Owner (partner link, see above) and Owner contact (name of person) in the Open Specification header.
Action Points (until end of this week):
- Edit, review & fix the OpenSpecification (sub)pages
- Add the most important terms of your GE to the Glossary Page (FIWARE.Glossary.MiWi)

Thank you for your contributions. Best regards - Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office: TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti

On 09.10.2013 at 09:12, Marti Christof (mach) wrote:
> Hi
>
> I prepared the agenda/minutes for today's weekly meeting:
> https://docs.google.com/document/d/1aqOcoUr-lnQlQ42GZIjUfyP-Iurh28o67SQeZKHpyGo/edit
>
> Because we hit the Google Hangout limit of 10 participants last week, I was looking for a conferencing system supporting enough participants which also has local access numbers in Finland. We will try freeconferencecall.com (up to 96 participants).
>
> Conference access code: 345446
> Dial-in numbers:
> - Finland +358 (0) 9 74790024
> - Germany +49 (0) 30 255550300
> - Switzerland +41 (0) 44 595 90 80
> - Spain +34 911 19 67 50
> Full list of international dial-in numbers:
> - https://www.freeconferencecall.com/free-international-conference-call/internationalphonenumbers.aspx
>
> If required we can also use the Web-Conference option for screen sharing.
> > See you > - Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > P.O.Box, CH-8401 Winterthur > Office: TD O3.18, Obere Kirchgasse 2 > Phone: +41 58 934 70 63 > Mail: mach at zhaw.ch > Skype: christof-marti > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... URL:

From tharanga.wijethilake at cyberlightning.com Thu Oct 10 08:36:04 2013 From: tharanga.wijethilake at cyberlightning.com (Tharanga Wijethilake) Date: Thu, 10 Oct 2013 09:36:04 +0300 Subject: [Fiware-miwi] File Upload Fails in wiki Message-ID:

Hello Everyone, has anyone tried to upload anything to the wiki? It seems there is some kind of an error (possibly a permission error) that prevents uploading images there. This has been happening since yesterday evening. ~Tharanga -------------- next part -------------- An HTML attachment was scrubbed... URL:

From sami.jylkka at cyberlightning.com Thu Oct 10 08:45:02 2013 From: sami.jylkka at cyberlightning.com (Sami J) Date: Thu, 10 Oct 2013 09:45:02 +0300 Subject: [Fiware-miwi] File Upload Fails in wiki In-Reply-To: References: Message-ID:

Hi, I have the same problem as well; I am unable to upload wiki pictures because of this. Sami

On Thu, Oct 10, 2013 at 9:36 AM, Tharanga Wijethilake < tharanga.wijethilake at cyberlightning.com> wrote: > Hello Everyone, > has anyone tried to upload anything to Wiki. It seems there is some kind > of en error (Possibly a permission error) that stops from uploading images > there. This has been happening from yesterday evening.
> > ~Tharanga > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From mach at zhaw.ch Thu Oct 10 09:14:26 2013 From: mach at zhaw.ch (Christof Marti) Date: Thu, 10 Oct 2013 09:14:26 +0200 Subject: [Fiware-miwi] File Upload Fails in wiki In-Reply-To: References: Message-ID: <661ECB9B-96C0-4F2E-95C0-F58E2E6BFD42@zhaw.ch>

Hi all, seems to be a capacity problem with the Forge; unfortunately it also seems to affect the wiki. See the following mail from Miguel last Wednesday. I will check with Miguel. - Christof

> From: Miguel Carrillo
> Subject: [Fiware-wpa] Forge maintenance on Friday afternoon + known problems on the forge
> Date: 8. Oktober 2013 19:37:28 MESZ
> To: "fiware at lists.fi-ware.eu"
> Cc: "fiware-wpl at lists.fi-ware.eu" , "fiware-wpa at lists.fi-ware.eu"
>
> Dear all,
>
> Apparently we are experiencing some troubles on the forge due to the lack of disk space allocated to a fraction of the tools we are using.
>
> These operations will possibly fail, so please defer them until Monday:
> - Uploading files to the "Files" tools
> - Uploading files to SVN
> Fortunately, the "Docs" tool and the wiki should work ok (the info is stored elsewhere)
>
> We will fix it next Friday. The platform will be down for a while:
> Date: Friday, 11
> Start time: 16:00 CET
> End time: 20:00 CET
> Best regards,
>
> Miguel

On 10.10.2013 at 08:45, Sami J wrote: > Hi, > I have same problem also, unable to update wiki pictures due this matter. > > Sami > > > On Thu, Oct 10, 2013 at 9:36 AM, Tharanga Wijethilake wrote: > Hello Everyone, > has anyone tried to upload anything to Wiki. It seems there is some kind of en error (Possibly a permission error) that stops from uploading images there. This has been happening from yesterday evening.
> > ~Tharanga > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... URL:

From tharanga.wijethilake at cyberlightning.com Thu Oct 10 09:32:56 2013 From: tharanga.wijethilake at cyberlightning.com (Tharanga Wijethilake) Date: Thu, 10 Oct 2013 10:32:56 +0300 Subject: [Fiware-miwi] File Upload Fails in wiki In-Reply-To: <661ECB9B-96C0-4F2E-95C0-F58E2E6BFD42@zhaw.ch> References: <661ECB9B-96C0-4F2E-95C0-F58E2E6BFD42@zhaw.ch> Message-ID:

Thanks for the info, Christof. In the meantime we will just refer to the diagrams in the wiki pages as external links and will fix them once the wiki is fixed... the diagrams needed some editing :) ~Tharanga

On Thu, Oct 10, 2013 at 10:14 AM, Christof Marti wrote: > HI all > > Seems to be a capacity problem with Forge, unfortunately it seems also to > affect the wiki. > > See the following mail from Miguel last Wednesday: > > I will check with Miguel. > - Christof > > *Von: *Miguel Carrillo > *Betreff: **[Fiware-wpa] Forge maintenance on Friday afternoon + known > problems on the forge* > *Datum: *8. Oktober 2013 19:37:28 MESZ > *An: *"fiware at lists.fi-ware.eu" > *Kopie: *"fiware-wpl at lists.fi-ware.eu" , " > fiware-wpa at lists.fi-ware.eu" > > Dear all, > > Apparently we are experiencing some troubles on the forge due to the lack > of disk space allocated to a fraction of the tools we are using. > > These operations will possibly fail so please defer them until Monday: > > - Uploading files to the "Files" tools > - Uploading files to SVN > > Fortunately, the "Docs" tool and the wiki should work ok (the info is > stored elsewhere) > We will fix it next Friday.
The platform will be down for a while: > > - Date: Friday, 11 > - Start time: 16:00 CET > - End time: 20:00 CET > > Best regards, > > Miguel > > > Am 10.10.2013 um 08:45 schrieb Sami J : > > Hi, > I have same problem also, unable to update wiki pictures due this matter. > > Sami > > > On Thu, Oct 10, 2013 at 9:36 AM, Tharanga Wijethilake < > tharanga.wijethilake at cyberlightning.com> wrote: > >> Hello Everyone, >> has anyone tried to upload anything to Wiki. It seems there is some kind >> of en error (Possibly a permission error) that stops from uploading images >> there. This has been happening from yesterday evening. >> >> ~Tharanga >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tharanga.wijethilake at cyberlightning.com Fri Oct 11 09:11:19 2013 From: tharanga.wijethilake at cyberlightning.com (Tharanga Wijethilake) Date: Fri, 11 Oct 2013 10:11:19 +0300 Subject: [Fiware-miwi] 2D-3D capture review done In-Reply-To: <20131004132843.GZ62563@ee.oulu.fi> References: <20131004132843.GZ62563@ee.oulu.fi> Message-ID: Document Updated as per review. ~Tharanga On Fri, Oct 4, 2013 at 4:28 PM, Erno Kuusela wrote: > as per subject. > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From sami.jylkka at cyberlightning.com Fri Oct 11 09:42:15 2013 From: sami.jylkka at cyberlightning.com (Sami J) Date: Fri, 11 Oct 2013 10:42:15 +0300 Subject: [Fiware-miwi] GISDataProvider GE reviewed In-Reply-To: <52525D10.4030308@cie.fi> References: <52525D10.4030308@cie.fi> Message-ID:

Thanks Arto, the changes are now made: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.OpenSpecification.MiWi.GISDataProvider Br, Sami

On Mon, Oct 7, 2013 at 10:04 AM, Arto Heikkinen wrote: > Hi all, > > The review for the GISDataProvider GE can be found at: > https://docs.google.com/document/d/1FeLcsjf15SQBvgukLV8jvNoEq3NwM9Yk_SWd8Ph63x8/edit# > > Br, > Arto Heikkinen > > -- > _______________________________________________________ > Arto Heikkinen, Doctoral student, M.Sc. (Eng.) > Center for Internet Excellence (CIE) > P.O. BOX 1001, FIN-90014 University of Oulu, Finland > e-mail: arto.heikkinen at cie.fi, http://www.cie.fi > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From lasse.oorni at ludocraft.com Fri Oct 11 11:05:54 2013 From: lasse.oorni at ludocraft.com (=?iso-8859-1?Q?=22Lasse_=D6=F6rni=22?=) Date: Fri, 11 Oct 2013 12:05:54 +0300 Subject: [Fiware-miwi] FIWARE.ArchitectureDescription.MiWi.Synchronization reviewed Message-ID: <5f5a786d8e2e21576722a1d6169a4c11.squirrel@urho.ludocraft.com>

Hi, the LudoCraft GE descriptions (Synchronization & VirtualCharacters) have been checked and should be ready for external consumption. I also added the Glossary terms from them and removed all "editorial remark" banners from their OpenSpecification & Details pages. -- Lasse Öörni Game Programmer LudoCraft Ltd.
From torsten.spieldenner at dfki.de Fri Oct 11 14:54:04 2013 From: torsten.spieldenner at dfki.de (Torsten Spieldenner) Date: Fri, 11 Oct 2013 14:54:04 +0200 Subject: [Fiware-miwi] FIWARE.OpenSpecification.MiWi.DisplayAsAService ready for review Message-ID: <5257F4EC.9070406@dfki.de>

Hello, there is now content ready for review for the GE description of Display as a Service: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.OpenSpecification.MiWi.DisplayAsAService ~Torsten

From jarkko at cyberlightning.com Sat Oct 12 11:12:24 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Sat, 12 Oct 2013 12:12:24 +0300 Subject: [Fiware-miwi] FIWARE.OpenSpecification.MiWi.DisplayAsAService ready for review In-Reply-To: <5257F4EC.9070406@dfki.de> References: <5257F4EC.9070406@dfki.de> Message-ID:

Hello, the review is complete and the comments are here: https://docs.google.com/document/d/1pl4Zx7jN4-cQAypk1b8v2fj_szx9Wi9HeBAHO9mG_wc/edit# Let me know when I can re-check the paper. -j

On Fri, Oct 11, 2013 at 3:54 PM, Torsten Spieldenner < torsten.spieldenner at dfki.de> wrote: > Hello, > > there is now content ready for review for the GE description of Display as > a Service: > > http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.OpenSpecification.MiWi.DisplayAsAService > > ~Torsten > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From Philipp.Slusallek at dfki.de Wed Oct 16 10:00:46 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 16 Oct 2013 10:00:46 +0200 Subject: [Fiware-miwi] Meeting today? Message-ID: <525E47AE.3070901@dfki.de>

Hi, I have not seen an invitation yet. Are we having a meeting now? Philipp -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL:

From jarkko at cyberlightning.com Wed Oct 16 10:01:46 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Wed, 16 Oct 2013 11:01:46 +0300 Subject: [Fiware-miwi] Meeting today? In-Reply-To: <525E47AE.3070901@dfki.de> References: <525E47AE.3070901@dfki.de> Message-ID:

I was wondering exactly the same thing.. So are we? - j

2013/10/16 Philipp Slusallek > Hi, > > I have not seen an invitation yet. Are we having a meeting now? > > Philipp > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Geschäftsführung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A.
Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL:

From mach at zhaw.ch Wed Oct 16 10:01:31 2013 From: mach at zhaw.ch (Christof Marti) Date: Wed, 16 Oct 2013 10:01:31 +0200 Subject: [Fiware-miwi] WP13 weekly meeting Message-ID:

Hi, sorry, I am approx. 5 min late; starting the meeting at 10:05. I prepared the agenda/minutes for today's weekly meeting: https://docs.google.com/document/d/1xOGiilFcAqU7a5fhffOubc2IUuFB9VI2TwvQ_WoaFiI/edit

Conference access code: 345446
Dial-in numbers:
- Finland +358 (0) 9 74790024
- Germany +49 (0) 30 255550300
- Switzerland +41 (0) 44 595 90 80
- Spain +34 911 19 67 50
Full list of international dial-in numbers:
-
https://www.freeconferencecall.com/free-international-conference-call/internationalphonenumbers.aspx

See you - Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office: TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti

From mach at zhaw.ch Wed Oct 16 15:39:38 2013 From: mach at zhaw.ch (Marti Christof (mach)) Date: Wed, 16 Oct 2013 13:39:38 +0000 Subject: [Fiware-miwi] FI-WARE: Periodic Report M30 (D.1.2.5) - WP13 Message-ID:

Hi, I need your help to deliver the Periodic Report D.1.2.5 (M30) for WP13. Please complete the attached template with your contribution (1 document per partner). Please describe your main progress, main result, main deviation and main proposed corrective action for each task you are participating in:
* Task 13.1 - Web-based 3D and augmented reality browser technologies (DFKI, PLAYSIGN, LUDOCRAFT, ADMINO, CYBER, UOULU)
* Task 13.2 - Web-based 3D and augmented reality backend platform (LUDOCRAFT, ADMINO, CYBER, UOULU)
* Task 13.3 - Advanced Middleware for efficient and QoS/security-aware invocation of services and exchange of messages (ZHAW, DFKI, USAAR-CISPA, EPROS)

In the document you can find the section to be completed in each task marked with your partner number & name. Deadline: Monday, October 21st, 2013, EOB. Thank you in advance. Please see also the following excerpt about the structure of the template.
BR Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office: TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti

From: subsidies-bounces at tid.es [mailto:subsidies-bounces at tid.es] On behalf of JAVIER DE PEDRO SANCHEZ Sent: Tuesday, 15 October 2013, 15:40 To: 'fiware-wpl at lists.fi-ware.eu' Cc: subsidies at tid.es Subject: [Subsidies] FI-WARE: Periodic Report M30 (D.1.2.5) Importance: High

Dear all, as WPL, I need your contribution to deliver the Periodic Report of M30. We have updated the template due to the following remark of the Commission: [cid:a147f98d-9c63-4923-8aa0-fc36bd24ad8b at zhaw.ch] So, it is very important that the report shows the contribution of each involved partner at task level. We kindly ask you to follow the template, where the first section is about the WP as a team, and each task is evaluated by partner. I'm going to send a particularized e-mail with the template of your WP to you. Please ask each involved partner for their information. As soon as I have the consumption of PMs of each partner, I'll send it to you. Note: xxx = missing information in the document. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 45089 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: D.1.2.5 - WP13.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 224846 bytes Desc: D.1.2.5 - WP13.docx URL:

From mach at zhaw.ch Wed Oct 16 18:23:57 2013 From: mach at zhaw.ch (Marti Christof (mach)) Date: Wed, 16 Oct 2013 16:23:57 +0000 Subject: [Fiware-miwi] FI-WARE WP13 Finalizing the Architecture/OpenSpecification until Friday Message-ID: <9E77F182-3489-4DED-8253-115B6D45273F@zhaw.ch>

Dear GE owners, at today's WP13 meeting we checked again the status of the GE specifications. It looks better, but we still need to fix some points in all GEs. Find my review notes in today's meeting minutes: https://docs.google.com/document/d/1xOGiilFcAqU7a5fhffOubc2IUuFB9VI2TwvQ_WoaFiI/edit

Some generic points which have to be fixed for all GEs:

Section Detailed Specifications (Open API Specification): The already available API descriptions should be delivered at least as DRAFT. The descriptions of the APIs should then be on the separately linked wiki pages. Please check the format of the template page (or the API specification of the existing public FI-WARE GEs). If no API is available yet or the API specification is in a rough state, each GE has to provide a short, specific explanation in this section describing the plan for how to progress it.

Section Re-utilised Technologies/Specifications: References to technologies/specifications we build on top of should be referenced here. Other references can stay in the Reference section. If there are no other references, just drop the Reference section.

Section Glossary: Add all important terms of your GE to the common Glossary page (which is included in all GEs).

Some editorial fixes:
* For references to other wiki pages use wiki links [[pagename]] and NOT public links [http://...]
* Fix the content of the open spec header entries (GE name, owner, ...)
* Copyright section should contain a link to the partner description page (e.g.
[[LUDOCRAFT]]) (no logo, or direct links)
* Check that the page structure conforms to the OpenSpec page structure.
* Include all missing graphics.

Finally, incorporate all the comments from the review documents. Deadline: Friday 18.10.2013. Thanks in advance. BR - Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office: TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti -------------- next part -------------- An HTML attachment was scrubbed... URL:

From torsten.spieldenner at dfki.de Thu Oct 17 09:52:31 2013 From: torsten.spieldenner at dfki.de (Torsten Spieldenner) Date: Thu, 17 Oct 2013 09:52:31 +0200 Subject: [Fiware-miwi] Meeting on asset exporter pipeline Message-ID: <525F973F.4010100@dfki.de>

Hello, as decided yesterday, we should fix a date for the meeting to discuss the current state of the asset exporter pipeline for 3D models to be used in the 3D UI GE. I myself won't be able to attend before the week after next. If my attendance is not urgently needed, I would ask for suggestions for a date (probably before the next weekly phone conference); otherwise I'd suggest the 28th or 29th of October, preferably in the morning, e.g. 10 am. Regards, Torsten

From tomi.sarni at cyberlightning.com Thu Oct 17 14:00:42 2013 From: tomi.sarni at cyberlightning.com (Tomi Sarni) Date: Thu, 17 Oct 2013 15:00:42 +0300 Subject: [Fiware-miwi] RealVirtualIntegration Message-ID:

Fixes done based on the notes on Google Docs. I removed the editorial remarks, I hope that was ok. The API specification is starting to get into shape, but I left the "draft" mark there for now as I will likely do some minor editing every now and then. I had some problems with a few of the wiki links, as [[linking]] did not link to the other FI-WARE page but instead created a new page.
For reviewers: I did *some* rewriting of the text today and yesterday; perhaps the text is now more easily understandable and usable for application or middleware service developers while maintaining some level of technicality. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From lachsen at cg.uni-saarland.de Thu Oct 17 14:06:27 2013 From: lachsen at cg.uni-saarland.de (Felix Klein) Date: Thu, 17 Oct 2013 14:06:27 +0200 Subject: [Fiware-miwi] Meeting on asset exporter pipeline In-Reply-To: <525F973F.4010100@dfki.de> References: <525F973F.4010100@dfki.de> Message-ID:

28th and 29th October should be fine by me. Bye Felix

On Thu, Oct 17, 2013 at 9:52 AM, Torsten Spieldenner < torsten.spieldenner at dfki.de> wrote: > Hello, > > as decided yesterday, we should fix a date for the meeting to dicuss the > current state of the asset exporter pipeline for 3D models to be used in > the 3D UI GE. > I myself won't be able to attend before the week after next week. If my > attendence is not urgently needed, I would ask for suggestions for a date > (probably before next weekly phone conference), otherwise I'd suggest 28th > or 29th of October, preferably in the morning, e.g. 10 am. > > Regards, > Torsten > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From mach at zhaw.ch Thu Oct 17 14:08:17 2013 From: mach at zhaw.ch (Marti Christof (mach)) Date: Thu, 17 Oct 2013 12:08:17 +0000 Subject: [Fiware-miwi] Doodle to find dates for a WP13 F2F in Uoulu Message-ID: <347CE68C-D1EF-4FB1-B5F5-C231FE5A4741@zhaw.ch>

Hi, I have set up a Doodle to find a date for an F2F meeting in Uoulu: http://doodle.com/yfc5k2vh56mmg33g The dates until the end of the year are very limited. Possible dates for Philipp are only the 11th or 12th of November (with a preference for Monday the 11th). Please fill in the Doodle asap (today), so we see if one of these dates fits.
BR Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office: TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti

From jarkko at cyberlightning.com Thu Oct 17 14:30:22 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Thu, 17 Oct 2013 15:30:22 +0300 Subject: [Fiware-miwi] Meeting on asset exporter pipeline In-Reply-To: References: <525F973F.4010100@dfki.de> Message-ID:

This is ok for me and my team as well. I think it would be wise to peek into OpenCollada, for example, to understand it more. We can do that while preparing to discuss this topic. - j

On Thu, Oct 17, 2013 at 3:06 PM, Felix Klein wrote: > 28th and 29th October should be fine by me. > > Bye > > Felix > > > On Thu, Oct 17, 2013 at 9:52 AM, Torsten Spieldenner < > torsten.spieldenner at dfki.de> wrote: > >> Hello, >> >> as decided yesterday, we should fix a date for the meeting to dicuss the >> current state of the asset exporter pipeline for 3D models to be used in >> the 3D UI GE. >> I myself won't be able to attend before the week after next week. If my >> attendence is not urgently needed, I would ask for suggestions for a date >> (probably before next weekly phone conference), otherwise I'd suggest 28th >> or 29th of October, preferably in the morning, e.g. 10 am. >> >> Regards, >> Torsten >> > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now!
www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lasse.oorni at ludocraft.com Thu Oct 17 14:35:52 2013 From: lasse.oorni at ludocraft.com (=?iso-8859-1?Q?=22Lasse_=D6=F6rni=22?=) Date: Thu, 17 Oct 2013 15:35:52 +0300 Subject: [Fiware-miwi] Meeting on asset exporter pipeline In-Reply-To: References: <525F973F.4010100@dfki.de> Message-ID: <14dd09f4c1101fd46c683eeb5aad3415.squirrel@urho.ludocraft.com> > This is ok for me and my team as well. I think it would be wise to peek > into OpenCollada, for example, to understand it more. We can do that while > preparing to discuss this topic. > > - j > > > On Thu, Oct 17, 2013 at 3:06 PM, Felix Klein > wrote: > >> 28th and 29th October should be fine by me. Hi, those are fine for me as well. -- Lasse Öörni Game Programmer LudoCraft Ltd. From lasse.oorni at ludocraft.com Thu Oct 17 14:46:42 2013 From: lasse.oorni at ludocraft.com (=?iso-8859-1?Q?=22Lasse_=D6=F6rni=22?=) Date: Thu, 17 Oct 2013 15:46:42 +0300 Subject: [Fiware-miwi] Synchronization & VirtualCharacters Message-ID: <2b22e9f47886b2bef2289f1b95803a94.squirrel@urho.ludocraft.com> Hi, I've added DRAFT-level APIs (very initial and partial sketches) to both of those GE documents, and should have addressed the other issues we discussed. Please let me know if further action is required. -- Lasse Öörni Game Programmer LudoCraft Ltd. From sami.jylkka at cyberlightning.com Thu Oct 17 15:05:43 2013 From: sami.jylkka at cyberlightning.com (Sami J) Date: Thu, 17 Oct 2013 16:05:43 +0300 Subject: [Fiware-miwi] FIWARE.OpenSpecification.MiWi.GISDataProvider wiki updated Message-ID: Hi, wiki updated according to comments & guidance http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.OpenSpecification.MiWi.GISDataProvider Br, Sami -------------- next part -------------- An HTML attachment was scrubbed...
URL: From antti.kokko at adminotech.com Fri Oct 18 07:57:28 2013 From: antti.kokko at adminotech.com (Antti Kokko) Date: Fri, 18 Oct 2013 08:57:28 +0300 Subject: [Fiware-miwi] FIWARE.OpenSpecification.MiWi.2D-UI wiki updated Message-ID: Hello, Wiki page updated: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.OpenSpecification.MiWi.2D-UI Best, - Antti -------------- next part -------------- An HTML attachment was scrubbed... URL: From torsten.spieldenner at dfki.de Fri Oct 18 13:10:44 2013 From: torsten.spieldenner at dfki.de (Torsten Spieldenner) Date: Fri, 18 Oct 2013 13:10:44 +0200 Subject: [Fiware-miwi] FIWARE.OpenSpecification.MiWi.DisplayAsAService and FIWARE.OpenSpecification.MiWi.3D-UI updated Message-ID: <52611734.5080307@dfki.de> Hello, the Wiki-Pages for the Open Specification descriptions of Display as a Service and XML3D are now updated according to the comments: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.OpenSpecification.MiWi.DisplayAsAService http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.OpenSpecification.MiWi.3D-UI ~Torsten From ari.okkonen at cie.fi Fri Oct 18 15:16:14 2013 From: ari.okkonen at cie.fi (Ari Okkonen CIE) Date: Fri, 18 Oct 2013 16:16:14 +0300 Subject: [Fiware-miwi] FIWARE.OpenSpecification.MiWi.POIDataProvider wiki updated In-Reply-To: References: Message-ID: <5261349E.3030005@cie.fi> Hello, Wiki page updated: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.OpenSpecification.MiWi.POIDataProvider BR Ari -- Ari Okkonen CIE, University of Oulu From antti.karhu at cie.fi Fri Oct 18 15:23:26 2013 From: antti.karhu at cie.fi (Antti Karhu) Date: Fri, 18 Oct 2013 16:23:26 +0300 Subject: [Fiware-miwi] FIWARE.OpenSpecification.MiWi.AugmentedReality wiki updated Message-ID: Hey, wiki page updated 
http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.OpenSpecification.MiWi.AugmentedReality Br, Antti -------------- next part -------------- An HTML attachment was scrubbed... URL: From erno at playsign.net Fri Oct 18 16:15:54 2013 From: erno at playsign.net (Erno Kuusela) Date: Fri, 18 Oct 2013 17:15:54 +0300 Subject: [Fiware-miwi] Doodle for bi-weekly Oulu dev meetup Message-ID: <20131018141554.GA62563@ee.oulu.fi> Hello, Let's make this regular finally. I've set up a doodle with 2 timeslots for each weekday starting from next Tuesday. So vote according to what weekday/time you'd prefer for the biweekly, not just the one day. I'll be closing the poll Monday afternoon. Poll is at http://doodle.com/52pawzshe2pc7wue Erno From jarkko at cyberlightning.com Fri Oct 18 20:40:41 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Fri, 18 Oct 2013 21:40:41 +0300 Subject: [Fiware-miwi] FIWARE.OpenSpecification.MiWi.2D3DCapture wiki (almost) updated Message-ID: Hi, I have final changes for the 2D/3D capture locally here, but I cannot save them since forge.fi-ware does not respond and I cannot access the wiki. I will keep polling, and once it is awake again, I will push the changes. -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jarkko at cyberlightning.com Sat Oct 19 08:15:52 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Sat, 19 Oct 2013 09:15:52 +0300 Subject: [Fiware-miwi] FIWARE.OpenSpecification.MiWi.2D3DCapture wiki (almost) updated In-Reply-To: References: Message-ID: The forge is alive and the open spec is now finally updated: http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/FIWARE.ArchitectureDescription.MiWi.2D-3D-Capture On Fri, Oct 18, 2013 at 9:40 PM, Jarkko Vatjus-Anttila < jarkko at cyberlightning.com> wrote: > Hi, > > I have final changes for the 2D/3D capture locally here, but I cannot save > them since forge.fi-ware does not respond and I cannot access the wiki. I > will keep polling, and once it is awake again, I will push the changes. > > -- > Jarkko Vatjus-Anttila > VP, Technology > Cyberlightning Ltd. > > mobile. +358 405245142 > email. jarkko at cyberlightning.com > > Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. > Get your free evaluation version and buy it now! www.cybersli.de > > www.cyberlightning.com > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Philipp.Slusallek at dfki.de Sat Oct 19 11:43:58 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sat, 19 Oct 2013 11:43:58 +0200 Subject: [Fiware-miwi] Doodle for bi-weekly Oulu dev meetup In-Reply-To: <20131018141554.GA62563@ee.oulu.fi> References: <20131018141554.GA62563@ee.oulu.fi> Message-ID: <5262545E.2060404@dfki.de> Hi, I assume this is a Oulu-specific meetup, right? 
Best, Philipp On 18.10.2013 16:15, Erno Kuusela wrote: > Hello, > > Let's make this regular finally, I've set up a doodle with 2 timeslots > for each weekday starting from next Tuesday. So vote according to what > weekday/time you'd prefer for the biweekly, not just the one day. I'll > be closing the poll monday afternoon. > > Poll is at http://doodle.com/52pawzshe2pc7wue > > Erno > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From jarkko at cyberlightning.com Sat Oct 19 11:46:22 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Sat, 19 Oct 2013 12:46:22 +0300 Subject: [Fiware-miwi] Doodle for bi-weekly Oulu dev meetup In-Reply-To: <5262545E.2060404@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <5262545E.2060404@dfki.de> Message-ID: Yes Philipp, this is a local Oulu meetup and information sharing meeting. - jarkko On Sat, Oct 19, 2013 at 12:43 PM, Philipp Slusallek < Philipp.Slusallek at dfki.de> wrote: > Hi, > > I assume this is an Oulu-specific meetup, right?
> > Best, > > Philipp > > On 18.10.2013 16:15, Erno Kuusela wrote: > > Hello, > > > > Let's make this regular finally, I've set up a doodle with 2 timeslots > > for each weekday starting from next Tuesday. So vote according to what > > weekday/time you'd prefer for the biweekly, not just the one day. I'll > > be closing the poll monday afternoon. > > > > Poll is at http://doodle.com/52pawzshe2pc7wue > > > > Erno > > > > _______________________________________________ > > Fiware-miwi mailing list > > Fiware-miwi at lists.fi-ware.eu > > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Geschäftsführung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations! New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Philipp.Slusallek at dfki.de Mon Oct 21 14:25:58 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Mon, 21 Oct 2013 14:25:58 +0200 Subject: [Fiware-miwi] FI-C2 Open Call: Unity + Xml3D In-Reply-To: References: Message-ID: <52651D56.1010805@dfki.de> Hi, FI-CONTENT has its open call right now (http://mediafi.org/open-call/, deadline in early January) and below is a really interesting topic that would be greatly appreciated by the project (Exporter from Unity to XML3D). Maybe someone from Oulu would be interested? Of course, you can apply to others as well, but this one seems particularly interesting. Feel free to contact me about details and hints for a possible proposal. Best, Philipp -------- Original Message -------- Subject: Re: Unity + Xml3D Date: Mon, 21 Oct 2013 01:10:57 +0100 From: Mitchell, Kenny To: Philipp Slusallek CC: Chino Noris , "fi-content2_wp4 at technicolor.com" , "Vogelgesang, Christian" , XML3D mailing list Hi, I think the main value of Unity is the authoring ecosystem and optimized targeting of builds to multi-platform native execution, which is a useful thing to leverage if possible. For web deployment, Unity requires a Web Player plugin install (except the Qihoo 360 browser, which has it pre-installed for ~173m active users in China). Previously, Unity folks worked on a general HTML5/WebGL target, but I think they don't see it in their business model yet and it fell off the release schedule. In the meantime, folks have written web, etc. build targets for Unity. http://www.frost.io/html5#editor http://forum.unity3d.com/threads/121983-WEBGL-Unity3D-Exporter http://forum.unity3d.com/threads/179720-Export-HTML5-content-from-Unity-editor I think it'd be hugely valuable for adoption though to have an Xml3d target from Unity with interop for other FI GEs, where Unity scripts can run as js (compatibility issues to address), assets/animations are targeted to Xml3D, etc. outside of their web player as above.
Possibly a large effort (Phase 3/H2020?), but less costly than trying to recreate the authoring community, etc., I reckon. UDK have also looked at targeting HTML5/WebGL, http://www.unrealengine.com/html5_faq/ also, but it is not a usable platform target yet. Best, Kenny On 20 Oct 2013, at 11:19, Philipp Slusallek wrote: > Hi, > > Sounds good. Stefan has to respond on the XML3D aspects. > > I should mention that we are experimenting with importing XML3D scenes > into Unity right now. We have the geometry mostly working already (it > took Christian (in CC) a few hours only) but we need to find the proper > way for the rest. > > Two options come to mind: Emulating the XML3D runtime in Unity or > including a browser plugin in Unity (exists already) and changing the 3D > rendering in XML3D so it maps everything to Unity (including animation, > etc). Anything else should then work out of the box. > > This would allow for a much tighter integration between the two options, > but the idea definitely needs some more thought. Your input would be > highly welcome. > > BTW, for clarification, I am not talking about the short-term Hackathons > here :-). > > > Best, > > Philipp > > On 18.10.2013 11:14, Chino Noris wrote: >> I talked with Sergi about making a Hackathon in Barcelona. I think >> there are two aspects here: >> >> 1) The content preparation, in terms of negotiating Unity3D pro >> licenses, defining the challenges, and generating the building blocks is >> an effort that we do once. Using it for more than one Hackathon seems a >> nice re-use of resources. >> >> 2) Support Group. This is the main point. I think we need 3-4 technical >> support people for a hackathon, assuming ~20 participants. This should >> not be a problem for Zurich, as we have DRZ and ETHZ present. In case of >> BCN, we would need to fly some people in. Personally, I'd be happy to go >> there for a weekend and do it. But this has to be discussed. >> >> Do you think this makes sense?
>> >> Stefan, one thing to consider is the use of XML3D in the hackathons. This >> is true for both Zurich and Barcelona. DRZ and ETHZ can handle the Unity >> side of things, but we would need your help for xml3D, which would mean >> to send one person. >> >> Budget-wise, I thought of the following money distribution: >> - 2K (or maybe 1K only) for NEM Hackathon. >> - 8K for the online competition >> - 5K for Zurich Hackathon >> - 5K for Barcelona Hackathon >> >> Sergi was telling me they could probably stay cheap with the venue and >> dissemination through their contacts with the city administration. >> >> Let me know what you think, >> >> Best, >> Chino >> >> --------------------------------------------------------------- >> >> Gioacchino Noris >> Postdoc Researcher >> +41 44 632 5298 >> >> Disney Research Zurich >> Stampfenbachstrasse 48 >> CH-8006 Zurich >> http://www.disneyresearch.com/ > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Geschäftsführung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A.
Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From erno at playsign.net Tue Oct 22 05:12:02 2013 From: erno at playsign.net (Erno Kuusela) Date: Tue, 22 Oct 2013 06:12:02 +0300 Subject: [Fiware-miwi] Oulu meet today 13:00 (was Re: Doodle for bi-weekly Oulu dev meetup) In-Reply-To: <20131018141554.GA62563@ee.oulu.fi> References: <20131018141554.GA62563@ee.oulu.fi> Message-ID: <20131022031202.GB62563@ee.oulu.fi> Hello, And the winner is... Tuesday 13:00. Any volunteers for hosting? We can be at the old location (Mäkelininkatu 15) if not. Erno From toni at playsign.net Tue Oct 22 08:26:30 2013 From: toni at playsign.net (Toni Alatalo) Date: Tue, 22 Oct 2013 09:26:30 +0300 Subject: [Fiware-miwi] Meeting on asset exporter pipeline In-Reply-To: <14dd09f4c1101fd46c683eeb5aad3415.squirrel@urho.ludocraft.com> References: <525F973F.4010100@dfki.de> <14dd09f4c1101fd46c683eeb5aad3415.squirrel@urho.ludocraft.com> Message-ID: Both these are ok for our team too. After OK results with a visibility / memory management scheme (a kind of simple paging scene manager, a grid manager suitable for city blocks, adapted from a Unity plugin) which allows a theoretically indefinite scene (more info about that separately a bit later, there's a demo online already), we are currently testing how things work with the supposedly efficient CTM format from http://openctm.sourceforge.net . Seems to work well so far, and the three.js loader for it uses workers, so on-demand loading of scene parts is pretty fluent. We haven't yet gotten it to load textures from our test city block, though, so the current good result is for geometry only -- we are working on the materials part right now.
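[Editor's note] The grid-based paging scheme described above lends itself to a compact sketch. The following is only an illustration under assumed names (cellOf, wantedCells, diffCells are hypothetical, not taken from the actual Unity-derived plugin): given a viewer position, it computes which fixed-size city-block cells should be resident and which can be evicted; an asynchronous loader (such as the worker-based three.js CTM loader mentioned above) would then fetch the cells in the `load` set.

```javascript
// Minimal grid-based paging sketch. All names are illustrative; this is
// not the actual plugin's API.

function cellOf(x, z, cellSize) {
  // Integer grid coordinates of the cell containing world position (x, z).
  return [Math.floor(x / cellSize), Math.floor(z / cellSize)];
}

function wantedCells(viewerX, viewerZ, cellSize, radius) {
  // All cells within `radius` cells of the viewer's cell (a square window).
  const [cx, cz] = cellOf(viewerX, viewerZ, cellSize);
  const cells = new Set();
  for (let i = cx - radius; i <= cx + radius; i++) {
    for (let j = cz - radius; j <= cz + radius; j++) {
      cells.add(i + "," + j);
    }
  }
  return cells;
}

function diffCells(loaded, wanted) {
  // Which cells to fetch (e.g. request CTM meshes in a worker) and which
  // to evict, so memory stays bounded for an indefinite scene.
  const load = [...wanted].filter(c => !loaded.has(c));
  const unload = [...loaded].filter(c => !wanted.has(c));
  return { load, unload };
}
```

In a render loop one would recompute wantedCells whenever the viewer crosses a cell boundary and hand the resulting diff to the loader.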
~Toni On Oct 17, 2013, at 3:35 PM, Lasse Öörni wrote: >> This is ok for me and my team as well. I think it would be wise to peek >> into OpenCollada, for example, to understand it more. We can do that while >> preparing to discuss this topic. >> >> - j >> >> >> On Thu, Oct 17, 2013 at 3:06 PM, Felix Klein >> wrote: >> >>> 28th and 29th October should be fine by me. > > Hi, > those are fine for me as well. > > -- > Lasse Öörni > Game Programmer > LudoCraft Ltd. > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi From erno at playsign.net Tue Oct 22 08:57:37 2013 From: erno at playsign.net (Erno Kuusela) Date: Tue, 22 Oct 2013 09:57:37 +0300 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: <20131022031202.GB62563@ee.oulu.fi> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> Message-ID: <20131022065737.GD62563@ee.oulu.fi> Kari from CIE offered to host it this time, so see you there at 13:00. Erno From torsten.spieldenner at dfki.de Tue Oct 22 13:40:22 2013 From: torsten.spieldenner at dfki.de (Torsten Spieldenner) Date: Tue, 22 Oct 2013 13:40:22 +0200 (CEST) Subject: [Fiware-miwi] Meeting on asset exporter pipeline In-Reply-To: References: <525F973F.4010100@dfki.de> <14dd09f4c1101fd46c683eeb5aad3415.squirrel@urho.ludocraft.com> Message-ID: <692737413.30843.1382442022344.JavaMail.open-xchange@ox6.dfki.de> Hello, let's fix October 28th, 10 am for the meeting then. Torsten Toni Alatalo wrote on 22 October 2013 at 08:26: > Both these are ok for our team too.
> > After OK results with a visibility / memory management scheme (a kind of > simple paging scene manager, a grid manager suitable for city blocks, adapted > from a Unity plugin) which allows a theoretically indefinite scene (more info > about that separately a bit later, there's a demo online already), > > we are currently testing how things work with the supposedly efficient CTM > format from http://openctm.sourceforge.net . Seems to work well so far and the > three.js loader for it uses workers so on-demand loading of scene parts is > pretty fluent. We haven't yet gotten it to load textures from our test > city block, though, so the current good result is for geometry only -- we are working on > the materials part right now. > > ~Toni > > On Oct 17, 2013, at 3:35 PM, Lasse Öörni wrote: > > >> This is ok for me and my team as well. I think it would be wise to peek > >> into OpenCollada, for example, to understand it more. We can do that while > >> preparing to discuss this topic. > >> > >> - j > >> > >> > >> On Thu, Oct 17, 2013 at 3:06 PM, Felix Klein > >> wrote: > >> > >>> 28th and 29th October should be fine by me. > > > > Hi, > > those are fine for me as well. > > > > -- > > Lasse Öörni > > Game Programmer > > LudoCraft Ltd. > > > > > > _______________________________________________ > > Fiware-miwi mailing list > > Fiware-miwi at lists.fi-ware.eu > > https://lists.fi-ware.eu/listinfo/fiware-miwi > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed...
URL: From toni at playsign.net Tue Oct 22 23:03:16 2013 From: toni at playsign.net (toni at playsign.net) Date: Tue, 22 Oct 2013 21:03:16 +0000 Subject: [Fiware-miwi] =?utf-8?q?13=3A00_meeting_location=3A_CIE_=28Re=3A_?= =?utf-8?q?Oulu_meet_today=0913=3A00=29?= In-Reply-To: <20131022065737.GD62563@ee.oulu.fi> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi>, <20131022065737.GD62563@ee.oulu.fi> Message-ID: <20131022213628.6BD0218003E@dionysos.netplaza.fi> Just a brief note: we had some interesting preliminary discussion triggered by how the data schema that Ari O. presented for the POI system seemed at least partly similar to what the Real-Virtual interaction work had resulted in too -- and in fact about how the proposed POI schema was basically a version of the entity-component model which we've already been using for scenes in realXtend (it is inspired by / modeled after it, Ari told). So it can be much related to the Scene API work in the Synchronization GE too. As the action point we agreed that Ari will organize a specific work session on that. I was now thinking that it perhaps at least partly leads back to the question: how do we define (and implement) component types. I.e. what was mentioned in that entity-system post a few weeks back (with links to reX IComponent etc.). I mean: if functionality such as POIs and real-world interaction make sense as somehow resulting in custom data component types, does it mean that a key part of the framework is a way for those systems to declare their types .. so that it integrates nicely for the whole we want? I'm not sure, too tired to think it through now, but anyhow just wanted to mention that this was one topic that came up. I think Web Components is again something to check - as in XML terms reX Components are xml(3d) elements .. just ones that are usually in a group (according to the reX entity <-> xml3d group mapping).
And Web Components are about defining & implementing new elements (as Erno pointed out in a different discussion about xml-html authoring in the session). BTW thanks Kristian for the great comments in that entity system thread - it was really good to learn about the alternative attribute access syntax and the validation in XML3D(.js). ~Toni P.S. for (Christof &) the DFKI folks: I'm sure you understand the rationale of these Oulu meets -- the idea is ofc not to exclude you from the talks, but it just makes sense for us to meet live too as we are in the same city after all etc -- naturally with the DFKI team you also talk there locally. Perhaps it is a good idea that we make notes so that we can post them e.g. here then (I'm not volunteering though! :-)). Also, the now agreed bi-weekly setup on Tuesdays luckily works so that we can then summarize fresh in the global Wed meetings and continue the talks etc. From: Erno Kuusela Sent: Tuesday, October 22, 2013 9:57 AM To: Fiware-miwi Kari from CIE offered to host it this time, so see you there at 13:00. Erno _______________________________________________ Fiware-miwi mailing list Fiware-miwi at lists.fi-ware.eu https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... URL: From Philipp.Slusallek at dfki.de Wed Oct 23 07:00:36 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 23 Oct 2013 07:00:36 +0200 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: <20131022213628.6BD0218003E@dionysos.netplaza.fi> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> Message-ID: <526757F4.60006@dfki.de> Hi, First of all, it's certainly a good thing to also meet locally. I was just a bit confused whether that meeting somehow would involve us as well.
Summarizing the results briefly for the others would definitely be interesting. I did not get the idea why POIs are similar to ECA. At a very high level I see it, but I am not sure what it buys us. Can someone sketch that picture in some more detail? BTW, what is the status of the Rendering discussion (Three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided. BTW, as part of our shading work (which is shaping up nicely) Felix has been looking lately at a way to describe rendering stages (passes) essentially through Xflow. It is still very experimental, but he is using it to implement shadow maps right now. @Felix: Once this has converged into a somewhat more stable idea, it would be good to post it here to get feedback. The way we discussed it, this approach could form a nice basis for a modular design of advanced rasterization techniques (reflection maps, adv. face rendering, SSAO, lens flare, tone mapping, etc.), and (later) maybe also describe global illumination settings (similar to our work on LightingNetworks some years ago). Best, Philipp On 22.10.2013 23:03, toni at playsign.net wrote: > Just a brief note: we had some interesting preliminary discussion > triggered by how the data schema that Ari O. presented for the POI > system seemed at least partly similar to what the Real-Virtual > interaction work had resulted in too -- and in fact about how the > proposed POI schema was basically a version of the entity-component > model which we've already been using for scenes in realXtend (it is > inspired by / modeled after it, Ari told). So it can be much related to > the Scene API work in the Synchronization GE too. As the action point we > agreed that Ari will organize a specific work session on that. > I was now thinking that it perhaps at least partly leads back to the > question: how do we define (and implement) component types. I.e.
what > was mentioned in that entity-system post a few weeks back (with links > to reX IComponent etc.). I mean: if functionality such as POIs and > real-world interaction make sense as somehow resulting in custom data > component types, does it mean that a key part of the framework is a way > for those systems to declare their types .. so that it integrates nicely > for the whole we want? I'm not sure, too tired to think it through now, > but anyhow just wanted to mention that this was one topic that came up. > I think Web Components is again something to check - as in XML terms reX > Components are xml(3d) elements .. just ones that are usually in a group > (according to the reX entity <-> xml3d group mapping). And Web > Components are about defining & implementing new elements (as Erno > pointed out in a different discussion about xml-html authoring in the > session). > BTW thanks Kristian for the great comments in that entity system > thread - it was really good to learn about the alternative attribute access > syntax and the validation in XML3D(.js). > ~Toni > P.S. for (Christof &) the DFKI folks: I'm sure you understand the > rationale of these Oulu meets -- the idea is ofc not to exclude you from the > talks, but it just makes sense for us to meet live too as we are in the same > city after all etc -- naturally with the DFKI team you also talk there > locally. Perhaps it is a good idea that we make notes so that we can post them e.g. > here then (I'm not volunteering though! :-)). Also, the now agreed > bi-weekly setup on Tuesdays luckily works so that we can then summarize > fresh in the global Wed meetings and continue the talks etc. > *From:* Erno Kuusela > *Sent:* Tuesday, October 22, 2013 9:57 AM > *To:* Fiware-miwi > > Kari from CIE offered to host it this time, so see you there at 13:00.
> > Erno > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 456 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Wed Oct 23 07:56:32 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 23 Oct 2013 07:56:32 +0200 Subject: [Fiware-miwi] Meeting on asset exporter pipeline In-Reply-To: References: <525F973F.4010100@dfki.de> <14dd09f4c1101fd46c683eeb5aad3415.squirrel@urho.ludocraft.com> Message-ID: <52676510.9010304@dfki.de> Hi, On 22.10.2013 08:26, Toni Alatalo wrote: > After OK results with a visibility / memory management scheme (a kind of simple paging scene manager, a grid manager suitable for city blocks, adapted from a Unity plugin) which allows a theoretically indefinite scene (more info about that separately a bit later, there's a demo online already), > > we are currently testing how things work with the supposedly efficient CTM format from http://openctm.sourceforge.net . Seems to work well so far and the three.js loader for it uses workers so on-demand loading of scene parts is pretty fluent.
We haven't yet gotten it to load textures from our test city block, though, so the current good result is for geometry only -- we are working on the materials part right now. I wonder if we can define a common abstraction for the synchronization on the JS side. Ideally, we could use the JS API of KIARA for that (with modifications where needed). All that would be needed is to "hide" the current communication & protocol implementation behind that API. As long as we use the same component models, this should then work out nicely and out of the box for our server as well. Later we could then substitute the KIARA communication behind this once we have the full compatibility of the C/C++ implementation with the JS side (they still use different protocols right now). Since we support CTM as well with simple URLs directly from XML3D, it should be easy to load the same scene in both implementations. Maybe the scene manager from you could be made to work similarly well in the XML3D context. Best, Philipp > ~Toni > > On Oct 17, 2013, at 3:35 PM, Lasse Öörni wrote: > >>> This is ok for me and my team as well. I think it would be wise to peek >>> into OpenCollada, for example, to understand it more. We can do that while >>> preparing to discuss this topic. >>> >>> - j >>> >>> >>> On Thu, Oct 17, 2013 at 3:06 PM, Felix Klein >>> wrote: >>> >>>> 28th and 29th October should be fine by me. >> >> Hi, >> those are fine for me as well. >> >> -- >> Lasse Öörni >> Game Programmer >> LudoCraft Ltd.
>> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From tomi.sarni at cyberlightning.com Wed Oct 23 08:02:44 2013 From: tomi.sarni at cyberlightning.com (Tomi Sarni) Date: Wed, 23 Oct 2013 09:02:44 +0300 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: <526757F4.60006@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> Message-ID: ->Philipp *I did not get the idea why POIs are similar to ECA. At a very high level I see it, but I am not sure what it buys us. Can someone sketch that picture in some more detail?* Well I suppose it becomes relevant at the point when we are combining our GEs together. If the model can be applied at the level of the scene, then down to a POI in a scene, and further down at the sensor level, things can be more easily visualized.
Not just in terms of painting 3D models but in terms of handling big data as well, more specifically handling relationships/inheritance. It also makes it easier to design a RESTful API, as we have a common structure to follow, and it provides more opportunities for 3rd-party developers to make use of the data for their own purposes. ->Toni From the point of view of sensors, for instance, the entity-component pair becomes device-sensors/actuators. A device may have a unique identifier and an IP by which to access it, but it may also contain several actuators and sensors that are components of that device entity. Sensors/actuators themselves are not aware of who is interested in them. One client may use the sensor information differently from another client. The sensor/actuator service allows any other service to query it in request/response fashion, either by geo-coordinates (circle, square or complex shape queries) or perhaps through type+maxresults; the service will return entities and their components, from which the requester can form logical groups (arrays of entity uuids) and query more detailed information based on such a logical group. I guess there needs to be similar thinking done at the POI level. I guess a POI does not know which scene it belongs to. It is up to the scene server to form a logical group of POIs (e.g. the restaurants of the Oulu 3D city model). Then again, the problem is that the scene needs to wait for the POI service to query for sensors and form its logical groups before it can pass information on. This can lead to long wait times. But this sequencing problem is also something that could be thought through. Anyway, this is a common problem with everything on the web at the moment, in my opinion. Services become intertwined. When a client loads a web page there can be queries to 20 different services for advertisement and other stuff. The web page handles it by painting content for the client as responses arrive. I think this could be applied in the Scene as well.
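The request/response query described above (by geo-coordinates or by type+maxresults, with the requester then forming a logical group as an array of entity uuids) could look something like the sketch below. The query shape, field names and the equirectangular distance approximation are assumptions for illustration, not a specified FIWARE interface:

```javascript
// Illustrative sketch: filter a set of device entities the way a
// sensor/actuator service might answer a circle or type query.
function queryDevices(devices, { center, radiusKm, type, maxResults = 20 }) {
  const withinRadius = (d) => {
    if (!center) return true; // no geo filter requested
    // Equirectangular approximation; adequate for city-scale circle queries.
    const dLat = (d.lat - center.lat) * 111;
    const dLon = (d.lon - center.lon) * 111 * Math.cos((center.lat * Math.PI) / 180);
    return Math.hypot(dLat, dLon) <= radiusKm;
  };
  const matchesType = (d) => !type || d.components.some((c) => c.type === type);
  return devices.filter((d) => withinRadius(d) && matchesType(d)).slice(0, maxResults);
}

// The requester then forms a "logical group" as an array of entity uuids
// and can query more detailed information for just that group.
function logicalGroup(results) {
  return results.map((d) => d.uuid);
}
```

Usage: a circle query such as `queryDevices(devices, { center: { lat: 65.01, lon: 25.47 }, radiusKm: 1 })` returns the matching entities, and `logicalGroup(...)` reduces them to the uuid array used for follow-up detail queries.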
On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek < Philipp.Slusallek at dfki.de> wrote: > Hi, > > First of all, it's certainly a good thing to also meet locally. I was just > a bit confused whether that meeting somehow would involve us as well. > Summarizing the results briefly for the others would definitely be > interesting. > > I did not get the idea why POIs are similar to ECA. At a very high level I > see it, but I am not sure what it buys us. Can someone sketch that picture > in some more detail? > > BTW, what is the status with the Rendering discussion (Three.js vs. > xml3d.js)? I still have the feeling that we are doing parallel work here > that should probably be avoided. > > BTW, as part of our shading work (which is shaping up nicely) Felix has > been looking lately at a way to describe rendering stages (passes) > essentially through Xflow. It is still very experimental but he is using it > to implement shadow maps right now. > > @Felix: Once this has converged into a bit more stable idea, it would be > good to post this here to get feedback. The way we discussed it, this > approach could form a nice basis for a modular design of advanced > rasterization techniques (reflection maps, adv. face rendering, SSAO, lens > flare, tone mapping, etc.), and (later) maybe also describe global > illumination settings (similar to our work on LightingNetworks some years > ago). > > > Best, > > Philipp > > Am 22.10.2013 23:03, schrieb toni at playsign.net: > >> Just a brief note: we had some interesting preliminary discussion >> triggered by how the data schema that Ari O. presented for the POI >> system seemed at least partly similar to what the Real-Virtual >> interaction work had resulted in too -- and in fact about how the >> proposed POI schema was basically a version of the entity-component >> model which we've already been using for scenes in realXtend (it is >> inspired by / modeled after it, Ari told).
So it can be much related to >> the Scene API work in the Synchronization GE too. As the action point we >> agreed that Ari will organize a specific work session on that. >> I was now thinking that it perhaps at least partly leads back to the >> question: how do we define (and implement) component types. I.e. what >> was mentioned in that entity-system post a few weeks back (with links >> to reX IComponent etc.). I mean: if functionality such as POIs and >> realworld interaction make sense as somehow resulting in custom data >> component types, does it mean that a key part of the framework is a way >> for those systems to declare their types .. so that it integrates nicely >> for the whole we want? I'm not sure, too tired to think it through now, >> but anyhow just wanted to mention that this was one topic that came up. >> I think Web Components is again something to check - as in XML terms reX >> Components are xml(3d) elements .. just ones that are usually in a group >> (according to the reX entity <-> xml3d group mapping). And Web >> Components are about defining & implementing new elements (as Erno >> pointed out in a different discussion about xml-html authoring in the >> session). >> BTW Thanks Kristian for the great comments in that entity system >> thread - was really good to learn about the alternative attribute access >> syntax and the validation in XML3D(.js). >> ~Toni >> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >> rationale of these Oulu meets -- idea is ofc not to exclude you from the >> talks but just makes sense for us to meet live too as we are in the same >> city afterall etc -- naturally with the DFKI team you also talk there >> locally. Perhaps is a good idea that we make notes so that can post e.g. >> here then (I'm not volunteering though!) . Also, the now agreed >> bi-weekly setup on Tuesdays luckily works so that we can then summarize >> fresh in the global Wed meetings and continue the talks etc.
>> *From:* Erno Kuusela >> *Sent:* Tuesday, October 22, 2013 9:57 AM >> *To:* Fiware-miwi >> >> >> Kari from CIE offered to host it this time, so see you there at 13:00. >> >> Erno >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Geschäftsführung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From toni at playsign.net Wed Oct 23 09:06:09 2013 From: toni at playsign.net (Toni Alatalo) Date: Wed, 23 Oct 2013 10:06:09 +0300 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: <526757F4.60006@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> Message-ID: <4DB9CCC3-D0EC-4749-A9BC-9D17244E1651@playsign.net> quick note about the POI point: On Oct 23, 2013, at 8:00 AM, Philipp Slusallek wrote: > I did not get the idea why POIs are similar to ECA. At a very high level I see it, but I am not sure what it buys us. Can someone sketch that picture in some more detail? Another answer to this is that it may very well be that it does not buy anything at all. The conclusion was only that due to the similarities it is worth a check. ~Toni > BTW, what is the status with the Rendering discussion (Three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided. > > BTW, as part of our shading work (which is shaping up nicely) Felix has been looking lately at a way to describe rendering stages (passes) essentially through Xflow. It is still very experimental but he is using it to implement shadow maps right now. > > @Felix: Once this has converged into a bit more stable idea, it would be good to post this here to get feedback. The way we discussed it, this approach could form a nice basis for a modular design of advanced rasterization techniques (reflection maps, adv. face rendering, SSAO, lens flare, tone mapping, etc.), and (later) maybe also describe global illumination settings (similar to our work on LightingNetworks some years ago). 
> > > Best, > > Philipp > > Am 22.10.2013 23:03, schrieb toni at playsign.net: >> Just a brief note: we had some interesting preliminary discussion >> triggered by how the data schema that Ari O. presented for the POI >> system seemed at least partly similar to what the Real-Virtual >> interaction work had resulted in too -- and in fact about how the >> proposed POI schema was basically a version of the entity-component >> model which we?ve already been using for scenes in realXtend (it is >> inspired by / modeled after it, Ari told). So it can be much related to >> the Scene API work in the Synchronization GE too. As the action point we >> agreed that Ari will organize a specific work session on that. >> I was now thinking that it perhaps at least partly leads back to the >> question: how do we define (and implement) component types. I.e. what >> was mentioned in that entity-system post a few weeks back (with links >> to reX IComponent etc.). I mean: if functionality such as POIs and >> realworld interaction make sense as somehow resulting in custom data >> component types, does it mean that a key part of the framework is a way >> for those systems to declare their types .. so that it integrates nicely >> for the whole we want? I?m not sure, too tired to think it through now, >> but anyhow just wanted to mention that this was one topic that came up. >> I think Web Components is again something to check - as in XML terms reX >> Components are xml(3d) elements .. just ones that are usually in a group >> (according to the reX entity <-> xml3d group mapping). And Web >> Components are about defining & implementing new elements (as Erno >> pointed out in a different discussion about xml-html authoring in the >> session). >> BTW Thanks Kristian for the great comments in that entity system >> thread - was really good to learn about the alternative attribute access >> syntax and the validation in XML3D(.js). >> ~Toni >> P.S. 
for (Christof &) the DFKI folks: I?m sure you understand the >> rationale of these Oulu meets -- idea is ofc not to exclude you from the >> talks but just makes sense for us to meet live too as we are in the same >> city afterall etc -- naturally with the DFKI team you also talk there >> locally. Perhaps is a good idea that we make notes so that can post e.g. >> here then (I?m not volunteering though! ?) . Also, the now agreed >> bi-weekly setup on Tuesdays luckily works so that we can then summarize >> fresh in the global Wed meetings and continue the talks etc. >> *From:* Erno Kuusela >> *Sent:* ?Tuesday?, ?October? ?22?, ?2013 ?9?:?57? ?AM >> *To:* Fiware-miwi >> >> Kari from CIE offered to host it this time, so see you there at 13:00. >> >> Erno >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. 
Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > From toni at playsign.net Wed Oct 23 09:51:21 2013 From: toni at playsign.net (Toni Alatalo) Date: Wed, 23 Oct 2013 10:51:21 +0300 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: <526757F4.60006@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> Message-ID: <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> On Oct 23, 2013, at 8:00 AM, Philipp Slusallek wrote: > BTW, what is the status with the Rendering discussion (Three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided. I'm not aware of any overlapping work so far -- then again I'm not fully aware what all is up with xml3d.js. For the rendering for 3D UI, my conclusion from the discussion on this list was that it is best to use three.js now for the case of big complex fully featured scenes, i.e. typical realXtend worlds (e.g. Ludocraft's Circus demo or the Chesapeake Bay from the LVM project, a creative commons realXtend example scene). And in miwi in particular for the city model / app now. Basically because that's where we already got the basics working in the August-September work (and had earlier in realXtend web client codebases). That is why we implemented the scalability system on top of that too now -- scalability was the only thing missing. Until yesterday I thought the question was still open regarding XFlow integration. Latest information I got was that there was no hardware acceleration support for XFlow in XML3d.js either so it seemed worth a check whether it's better to implement it for xml3d.js or for three. 
Yesterday, however, we learned from Cyberlightning that work on XFlow hardware acceleration was already on-going in xml3d.js (I think mostly by DFKI so far?). And that it was decided that work within fi-ware now is limited to that (and we also understood that the functionality will be quite limited by April, or?). This obviously affects the overall situation. At least in an intermediate stage this means that we have two renderers for different purposes: three.js for some apps, without XFlow support, and xml3d.js for others, with XFlow but other things missing. This is certain because that is the case today and probably in the coming weeks at least. For a good final goal I think we can be clever and make an effective roadmap. I don't know yet what it is, though -- certainly to be discussed. The requirements doc -- perhaps by continuing work on it -- hopefully helps. > Philipp ~Toni > > Am 22.10.2013 23:03, schrieb toni at playsign.net: >> Just a brief note: we had some interesting preliminary discussion >> triggered by how the data schema that Ari O. presented for the POI >> system seemed at least partly similar to what the Real-Virtual >> interaction work had resulted in too -- and in fact about how the >> proposed POI schema was basically a version of the entity-component >> model which we?ve already been using for scenes in realXtend (it is >> inspired by / modeled after it, Ari told). So it can be much related to >> the Scene API work in the Synchronization GE too. As the action point we >> agreed that Ari will organize a specific work session on that. >> I was now thinking that it perhaps at least partly leads back to the >> question: how do we define (and implement) component types. I.e. what >> was mentioned in that entity-system post a few weeks back (with links >> to reX IComponent etc.). 
I mean: if functionality such as POIs and >> realworld interaction make sense as somehow resulting in custom data >> component types, does it mean that a key part of the framework is a way >> for those systems to declare their types .. so that it integrates nicely >> for the whole we want? I?m not sure, too tired to think it through now, >> but anyhow just wanted to mention that this was one topic that came up. >> I think Web Components is again something to check - as in XML terms reX >> Components are xml(3d) elements .. just ones that are usually in a group >> (according to the reX entity <-> xml3d group mapping). And Web >> Components are about defining & implementing new elements (as Erno >> pointed out in a different discussion about xml-html authoring in the >> session). >> BTW Thanks Kristian for the great comments in that entity system >> thread - was really good to learn about the alternative attribute access >> syntax and the validation in XML3D(.js). >> ~Toni >> P.S. for (Christof &) the DFKI folks: I?m sure you understand the >> rationale of these Oulu meets -- idea is ofc not to exclude you from the >> talks but just makes sense for us to meet live too as we are in the same >> city afterall etc -- naturally with the DFKI team you also talk there >> locally. Perhaps is a good idea that we make notes so that can post e.g. >> here then (I?m not volunteering though! ?) . Also, the now agreed >> bi-weekly setup on Tuesdays luckily works so that we can then summarize >> fresh in the global Wed meetings and continue the talks etc. >> *From:* Erno Kuusela >> *Sent:* ?Tuesday?, ?October? ?22?, ?2013 ?9?:?57? ?AM >> *To:* Fiware-miwi >> >> Kari from CIE offered to host it this time, so see you there at 13:00. 
>> >> Erno >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Geschäftsführung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > From mach at zhaw.ch Wed Oct 23 09:57:56 2013 From: mach at zhaw.ch (Marti Christof (mach)) Date: Wed, 23 Oct 2013 07:57:56 +0000 Subject: [Fiware-miwi] WP13 weekly meeting Message-ID: <4283C74E-7DE8-499E-8217-AF7B03121D0E@zhaw.ch> Hi I prepared the agenda/minutes for today's weekly meeting: https://docs.google.com/document/d/14M73l4IaF0wdY8Tj0bjHuRisUv1qiImG2WKtt2FhXV8/edit# Conference access code: 345446 Dial In numbers: - Finland +358 (0) 9 74790024 - Germany +49 (0) 30 255550300 - Switzerland +41 (0) 44 595 90 80 - Spain +34 911 19 67 50 Full list of international dial in numbers: -
https://www.freeconferencecall.com/free-international-conference-call/internationalphonenumbers.aspx See you - Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office:TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti From Philipp.Slusallek at dfki.de Wed Oct 23 10:03:51 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 23 Oct 2013 10:03:51 +0200 Subject: [Fiware-miwi] WP13 weekly meeting In-Reply-To: <4283C74E-7DE8-499E-8217-AF7B03121D0E@zhaw.ch> References: <4283C74E-7DE8-499E-8217-AF7B03121D0E@zhaw.ch> Message-ID: <526782E7.9030206@dfki.de> Hi, I am at a conference this week and will not be able to join. Best, Philipp Am 23.10.2013 09:57, schrieb Marti Christof (mach): > Hi > > I prepared the agenda/minutes for todays weekly meeting: > https://docs.google.com/document/d/14M73l4IaF0wdY8Tj0bjHuRisUv1qiImG2WKtt2FhXV8/edit# > > Conference access code: 345446 > Dial In numbers: > ? Finland +358 (0) 9 74790024 > ? Germany +49 (0) 30 255550300 > ? Switzerland +41 (0) 44 595 90 80 > ? Spain +34 911 19 67 50 > Full list of international dial in numbers: > ? 
https://www.freeconferencecall.com/free-international-conference-call/internationalphonenumbers.aspx > > > See you > - Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > P.O.Box, CH-8401 Winterthur > Office:TD O3.18, Obere Kirchgasse 2 > Phone: +41 58 934 70 63 > Mail: mach at zhaw.ch > Skype: christof-marti > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From mach at zhaw.ch Wed Oct 23 10:09:15 2013 From: mach at zhaw.ch (Marti Christof (mach)) Date: Wed, 23 Oct 2013 10:09:15 +0200 Subject: [Fiware-miwi] WP13 weekly meeting In-Reply-To: <3fa9c2735fcf4c27b62c212bbe7f5da3@SRV-MAIL-001.zhaw.ch> References: <3fa9c2735fcf4c27b62c212bbe7f5da3@SRV-MAIL-001.zhaw.ch> Message-ID: <6B7CA2FA-8C31-416A-885E-497F747E0F82@zhaw.ch> Because we are less than 10 today we can switch to this hangout.
https://plus.google.com/hangouts/_/8cd83906e336aee6df3782ac8d13b8d0326b5d81 Am 23.10.2013 um 09:57 schrieb Marti Christof (mach) : > Hi > > I prepared the agenda/minutes for today's weekly meeting: > https://docs.google.com/document/d/14M73l4IaF0wdY8Tj0bjHuRisUv1qiImG2WKtt2FhXV8/edit# > > Conference access code: 345446 > Dial In numbers: > - Finland +358 (0) 9 74790024 > - Germany +49 (0) 30 255550300 > - Switzerland +41 (0) 44 595 90 80 > - Spain +34 911 19 67 50 > Full list of international dial in numbers: > - https://www.freeconferencecall.com/free-international-conference-call/internationalphonenumbers.aspx > > > See you > - Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > P.O.Box, CH-8401 Winterthur > Office:TD O3.18, Obere Kirchgasse 2 > Phone: +41 58 934 70 63 > Mail: mach at zhaw.ch > Skype: christof-marti > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi From lasse.oorni at ludocraft.com Wed Oct 23 12:44:11 2013 From: lasse.oorni at ludocraft.com (=?iso-8859-1?Q?=22Lasse_=D6=F6rni=22?=) Date: Wed, 23 Oct 2013 13:44:11 +0300 Subject: [Fiware-miwi] FMC source file for the diagram Fiware Architecture.png Message-ID: <714a1731c0f58000fbae1c31fedc4447.squirrel@urho.ludocraft.com> Hi, at today's weekly meeting we discussed the WP13 architecture overview page, and came to the conclusion that the architecture diagram (Fiware Architecture.png) on the page https://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/Advanced_Middleware_and_Web_UI_Architecture would need some clarifications / edits. Is there a FMC source available somewhere or can it be uploaded? Thanks. -- Lasse Öörni Game Programmer LudoCraft Ltd.
From mach at zhaw.ch Wed Oct 23 13:32:36 2013 From: mach at zhaw.ch (Marti Christof (mach)) Date: Wed, 23 Oct 2013 11:32:36 +0000 Subject: [Fiware-miwi] WP13 minutes and action points In-Reply-To: <3fa9c2735fcf4c27b62c212bbe7f5da3@SRV-MAIL-001.zhaw.ch> References: <3fa9c2735fcf4c27b62c212bbe7f5da3@SRV-MAIL-001.zhaw.ch> Message-ID: <52E04FCF-16CC-4135-831D-516A05C511EE@zhaw.ch> Hi everybody A short summary containing the most important/urgent parts of today's meeting. (Please see the minutes for more details) WP13 M30 progress reports Still some reports missing! Please send today. Otherwise I have no chance to integrate until Friday. OpenSpecification Most of the issues are fixed. But there are still some parts which are open and have to be fixed asap.
Therefore we defined two important urgent action points: AP (today EOB for ALL GE owners): fix open points * Add missing content * Fix open points from last reviews, in the minutes and check * if structure is correct * complete and correct header section * no links to private wiki pages * all images available * Add relevant terms to the Glossary page * Complete section Open API Specification / or add a statement about how to progress in the Detailed Specification document * for DRAFT API-specifications please also add a comment at the beginning of the API document, that this is a draft or even early draft, work in progress and subject to change. * Complete/fix section Re-utilised Technologies/Specification available and OK * Remove the remarks (orange boxes) if the requested section is complete AP (Thursday EOB for ALL reviewers): check the above points and give feedback for errors to fix or if everything is ok Architecture Description Most problematic seems to be the Architecture Description: Text is still missing completely @Philipp/Torsten any proposals how to go on here? We discussed the Architecture picture. The basic structure is welcomed but there are some points we might clarify: * Are these all actors? Can they not also access the client core directly? * How to show all the APIs defined in the GEs? * Description and mentioning of protocol (e.g. tundra protocol over websocket) * Add asset pipeline to the pic: e.g. a way from modelling app to scene description We have to add the Middleware GE to the picture Lasse would like to extend the server core of the picture from a synchronization server view Jonne also has some ideas for clarifications Other contributions to enhance the picture are welcome @Torsten, Can we get the source of the picture?
F2F Workshop in Oulu The next F2F meeting will be in Oulu on Monday November 11th (9-18) at University of Oulu / CIE (Japanese Garden Auditorium) Please reserve the date and add your name to the attendees list in the following document: https://docs.google.com/document/d/1UnrOgC5Btyn6AOEGdM6xu4M8itEvssJ4lXHNkrcLzG4/edit We will also add the exact location and some accommodation info there. Please also add topics you would like to discuss to the "Proposed topics to discuss" section. Thanks and Best regards - Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office:TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti Am 23.10.2013 um 09:57 schrieb Marti Christof (mach) >: Hi I prepared the agenda/minutes for today's weekly meeting: https://docs.google.com/document/d/14M73l4IaF0wdY8Tj0bjHuRisUv1qiImG2WKtt2FhXV8/edit# Conference access code: 345446 Dial In numbers: - Finland +358 (0) 9 74790024 - Germany +49 (0) 30 255550300 - Switzerland +41 (0) 44 595 90 80 - Spain +34 911 19 67 50 Full list of international dial in numbers: - https://www.freeconferencecall.com/free-international-conference-call/internationalphonenumbers.aspx See you - Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institut of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office:TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti _______________________________________________ Fiware-miwi mailing list Fiware-miwi at lists.fi-ware.eu https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed...
URL: From torsten.spieldenner at dfki.de Wed Oct 23 15:25:07 2013 From: torsten.spieldenner at dfki.de (Torsten Spieldenner) Date: Wed, 23 Oct 2013 15:25:07 +0200 (CEST) Subject: [Fiware-miwi] WP13 minutes and action points In-Reply-To: <52E04FCF-16CC-4135-831D-516A05C511EE@zhaw.ch> References: <3fa9c2735fcf4c27b62c212bbe7f5da3@SRV-MAIL-001.zhaw.ch> <52E04FCF-16CC-4135-831D-516A05C511EE@zhaw.ch> Message-ID: <1057310724.33178.1382534707436.JavaMail.open-xchange@ox6.dfki.de> Hello, I adapted the actors from the Winterthur meeting presentation. It seems that uploading the diagram source together with the image did not work, sorry for that. I'll attach it to the mail. Torsten "Marti Christof (mach)" hat am 23. Oktober 2013 um 13:32 geschrieben: > Hi everybody > > A short summary containing the most important/urgent parts of todays meeting. > (Please see the minutes for more details) > > WP13 M30 progress reports > Still some reports missing! Please send today. Otherwise I have not chance to > integrate until Friday. > > OpenSpecification > Most of the issues are fixed. But still there are some parts which are open > and have to be fixed asap. Therefore we defined two important urgent action > points: > > AP (today EOB for ALL GE owners): fix open points > > * Add missing content > * Fix open points from last reviews, in the minutes and check > o if structure is correct > o complete and correct header section > o no links to private wiki pages > o all images available > * Add relevant terms to the Glossary page > * Complete section Open API Specification / or add a statement about how > to progress in the Detailed Specification document > * for DRAFT API-specifications please also add a comment at the beginning > of the API document, that this is a draft or even early draft, work in > progress and subject to change. 
> * Complete/fix section Re-utilised Technologies/Specification available > and OK > * Remove the remarks (orange boxes) if the requested section is complete > > AP (Thursday EOB for ALL reviewers): check the above points and give feedback > on errors to fix, or confirm that everything is OK > > Architecture Description > Most problematic seems to be the Architecture Description: > the text is still missing completely. @Philipp/Torsten any proposals how to go on > here? > We discussed the Architecture picture. The basic structure is welcomed but > there are some points we might clarify: > * Are these all actors? Can they not also access the client > core directly? > * How to show all the APIs defined in the GEs? > * Description and mention of protocols (e.g. Tundra protocol over > WebSocket) > * Add the asset pipeline to the picture: e.g. a way from the modelling app to the scene > description > We have to add the Middleware GE to the picture > Lasse would like to extend the server core part of the picture from a synchronization > server point of view > Jonne also has some ideas for clarifications > Other contributions to enhance the picture are welcome > @Torsten, can we get the source of the picture? > > > F2F Workshop in Oulu > The next F2F meeting will be in Oulu on Monday, November 11th (9-18) at > University of Oulu / CIE (Japanese Garden Auditorium) > Please reserve the date and add your name to the attendees list in the > following document: > > https://docs.google.com/document/d/1UnrOgC5Btyn6AOEGdM6xu4M8itEvssJ4lXHNkrcLzG4/edit > > > We will also add the exact location and some accommodation info there. > Please also add topics you would like to discuss to the "Proposed topics to > discuss" section. > > Thanks and best regards > -
Christof > ---- > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > Institut of Applied Information Technology - InIT > Zurich University of Applied Sciences - ZHAW > School of Engineering > P.O.Box, CH-8401 Winterthur > Office: TD O3.18, Obere Kirchgasse 2 > Phone: +41 58 934 70 63 > Mail: mach at zhaw.ch > Skype: christof-marti > > Am 23.10.2013 um 09:57 schrieb Marti Christof (mach) < mach at zhaw.ch > >: > > > > > Hi > > > > I prepared the agenda/minutes for today's weekly meeting: > > > > https://docs.google.com/document/d/14M73l4IaF0wdY8Tj0bjHuRisUv1qiImG2WKtt2FhXV8/edit# > > > > > > Conference access code: 345446 > > Dial-in numbers: > > - Finland +358 (0) 9 74790024 > > - Germany +49 (0) 30 255550300 > > - Switzerland +41 (0) 44 595 90 80 > > - Spain +34 911 19 67 50 > > Full list of international dial-in numbers: > > - > > https://www.freeconferencecall.com/free-international-conference-call/internationalphonenumbers.aspx > > > > > > See you > > - Christof > > ---- > > InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch > > Institut of Applied Information Technology - InIT > > Zurich University of Applied Sciences - ZHAW > > School of Engineering > > P.O.Box, CH-8401 Winterthur > > Office: TD O3.18, Obere Kirchgasse 2 > > Phone: +41 58 934 70 63 > > Mail: mach at zhaw.ch > > Skype: christof-marti > > _______________________________________________ > > Fiware-miwi mailing list > > Fiware-miwi at lists.fi-ware.eu > > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > > _______________________________________________ Fiware-miwi mailing list Fiware-miwi at lists.fi-ware.eu https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: architecture.graphml Type: application/octet-stream Size: 35948 bytes Desc: not available URL: From lasse.oorni at ludocraft.com Thu Oct 24 10:32:07 2013 From: lasse.oorni at ludocraft.com (=?iso-8859-1?Q?=22Lasse_=D6=F6rni=22?=) Date: Thu, 24 Oct 2013 11:32:07 +0300 Subject: [Fiware-miwi] WP13 minutes and action points In-Reply-To: <1057310724.33178.1382534707436.JavaMail.open-xchange@ox6.dfki.de> References: <3fa9c2735fcf4c27b62c212bbe7f5da3@SRV-MAIL-001.zhaw.ch> <52E04FCF-16CC-4135-831D-516A05C511EE@zhaw.ch> <1057310724.33178.1382534707436.JavaMail.open-xchange@ox6.dfki.de> Message-ID: > Hello, > > I adapted the actors from the Winterthur meeting presentation. > It seems that uploading the diagram source together with the image did not > work, > sorry for that. > I'll attach it to the mail. > > Torsten Thanks for the diagram! I made an initial edit for the Synchronization part and uploaded it to the wiki. Attached is the modified graphml file. In the edited image the supporting server components are imagined to be Tundra plugins interfacing with the scene in C++ / JavaScript. This is naturally not true for all of them, but an out-facing REST API connector remains as well. Everyone: feel free to do further edits and correct the picture. -- Lasse Öörni Game Programmer LudoCraft Ltd. -------------- next part -------------- A non-text attachment was scrubbed...
Name: architecture_edited.graphml Type: application/octet-stream Size: 38378 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Thu Oct 24 15:28:12 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Thu, 24 Oct 2013 15:28:12 +0200 Subject: [Fiware-miwi] XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00)) In-Reply-To: <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> Message-ID: <5269206C.1030207@dfki.de> Hi, Well, I think you identified the overlapping quite well :-). The goal of Miwi has always been to provide the tools for declarative 3D in the Web. While we agreed that there might be value (to be evaluated) in adding three.js to XML3D, I am not too happy that we are investing the FI-WARE resources into circumventing the declarative layer completely. When you say that there are limitations in XML3D, it would be good to know explicitly what they are and to jointly work on removing them. Only if that should fail should we be looking at alternatives. My suggestion of adding a wrapper around the communication is exactly such that we can evaluate XML3D against any three.js version that might be there. There is a lot of novel stuff coming from our side that we will not be able to integrate across this "fork" in our code base, which is a pity. And again, we would like to know where the limitations in XML3D are -- please tell us straight away. I suggest that we start to work on the shared communication layer using the KIARA API (part of a FI-WARE GE) and add the code to make the relevant components work in XML3D. Can someone put together a plan for this? We are happy to help where necessary -- but from my point of view we need to do this as part of the Open Call.
Best, Philipp Am 23.10.2013 09:51, schrieb Toni Alatalo: > On Oct 23, 2013, at 8:00 AM, Philipp Slusallek wrote: > >> BTW, what is the status with the Rendering discussion (Three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided. > > I'm not aware of any overlapping work so far -- then again I'm not fully aware of all that is up with xml3d.js. > > For the rendering for 3D UI, my conclusion from the discussion on this list was that it is best to use three.js now for the case of big, complex, fully featured scenes, i.e. typical realXtend worlds (e.g. LudoCraft's Circus demo or the Chesapeake Bay from the LVM project, a Creative Commons realXtend example scene). And in Miwi in particular for the city model / app now. Basically because that's where we already got the basics working in the August-September work (and had earlier in realXtend web client codebases). That is why we implemented the scalability system on top of that too now -- scalability was the only thing missing. > > Until yesterday I thought the question was still open regarding XFlow integration. The latest information I got was that there was no hardware acceleration support for XFlow in xml3d.js either, so it seemed worth a check whether it's better to implement it for xml3d.js or for three.js. > > Yesterday, however, we learned from Cyberlightning that work on XFlow hardware acceleration was already ongoing in xml3d.js (I think mostly by DFKI so far?). And that it was decided that work within FI-WARE is now limited to that (and we also understood that the functionality will be quite limited by April, or?). > > This obviously affects the overall situation. > > At least in an intermediate stage this means that we have two renderers for different purposes: three.js for some apps, without XFlow support, and xml3d.js for others, with XFlow but other things missing. This is certain because that is the case today and probably in the coming weeks at least.
> > For a good final goal I think we can be clever and make an effective roadmap. I don't know yet what it is, though -- certainly to be discussed. The requirements doc -- perhaps by continuing work on it -- hopefully helps. > >> Philipp > > ~Toni > >> >> Am 22.10.2013 23:03, schrieb toni at playsign.net: >>> Just a brief note: we had some interesting preliminary discussion >>> triggered by how the data schema that Ari O. presented for the POI >>> system seemed at least partly similar to what the Real-Virtual >>> interaction work had resulted in too -- and in fact about how the >>> proposed POI schema was basically a version of the entity-component >>> model which we've already been using for scenes in realXtend (it is >>> inspired by / modeled after it, Ari told). So it can be much related to >>> the Scene API work in the Synchronization GE too. As the action point we >>> agreed that Ari will organize a specific work session on that. >>> I was now thinking that it perhaps at least partly leads back to the >>> question: how do we define (and implement) component types. I.e. what >>> was mentioned in that entity-system post a few weeks back (with links >>> to reX IComponent etc.). I mean: if functionality such as POIs and >>> real-world interaction makes sense as somehow resulting in custom data >>> component types, does it mean that a key part of the framework is a way >>> for those systems to declare their types .. so that it integrates nicely >>> for the whole we want? I'm not sure, too tired to think it through now, >>> but anyhow just wanted to mention that this was one topic that came up. >>> I think Web Components is again something to check - as in XML terms reX >>> Components are xml(3d) elements .. just ones that are usually in a group >>> (according to the reX entity <-> xml3d group mapping). And Web >>> Components are about defining & implementing new elements (as Erno >>> pointed out in a different discussion about xml-html authoring in the >>> session).
>>> BTW thanks Kristian for the great comments in that entity system >>> thread - it was really good to learn about the alternative attribute access >>> syntax and the validation in XML3D(.js). >>> ~Toni >>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >>> rationale of these Oulu meets -- the idea is ofc not to exclude you from the >>> talks, it just makes sense for us to meet live too as we are in the same >>> city after all etc -- naturally with the DFKI team you also talk there >>> locally. Perhaps it is a good idea that we make notes so that we can post them e.g. >>> here then (I'm not volunteering though!). Also, the now agreed >>> bi-weekly setup on Tuesdays luckily works so that we can then summarize >>> fresh in the global Wed meetings and continue the talks etc. >>> *From:* Erno Kuusela >>> *Sent:* Tuesday, October 22, 2013 9:57 AM >>> *To:* Fiware-miwi >>> >>> Kari from CIE offered to host it this time, so see you there at 13:00. >>> >>> Erno >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >> >> >> -- >> >> ------------------------------------------------------------------------- >> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >> Trippstadter Strasse 122, D-67663 Kaiserslautern >> >> Geschäftsführung: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> Dr. Walter Olthoff >> Vorsitzender des Aufsichtsrats: >> Prof. Dr. h.c. Hans A.
Aukes >> >> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >> --------------------------------------------------------------------------- >> > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 456 bytes Desc: not available URL: From kristian.sons at dfki.de Thu Oct 24 15:46:03 2013 From: kristian.sons at dfki.de (Kristian Sons) Date: Thu, 24 Oct 2013 15:46:03 +0200 Subject: [Fiware-miwi] Java lib for writing XML3D content? In-Reply-To: References: <52441772.6040803@dfki.de> Message-ID: <5269249B.5050908@dfki.de> Hi, sorry for the delay. I now published the EMF data structure for XML3D on GitHub: https://github.com/xml3d/xml3d.ecore Besides a master branch, there is a branch called "generated" that (surprisingly) contains the files generated from the model. Don't hesitate to contact me if you run into issues. Best, Kristian Am 27.09.2013 08:03, schrieb Sami J: > Hi Kristian, > and thank you for the prompt reply. > I would be glad to check your Java EMF structure, so could you deliver > it to me? In the implementation we have two approaches: 1. reading whole > XML3D content information directly from the database and 2.
reading > references to XML3D content stored in JSON format. Reading data from > the database is no problem, it is already working, and now we are creating > XML3D content creation functionality where a ready Java lib would be handy. > > Best regards, > Sami > > > On Thu, Sep 26, 2013 at 2:16 PM, Kristian Sons > wrote: > > Dear Sami, > > we have a Java EMF data structure for XML3D that we use for > preprocessing and converting of scenes. It also has good support > for external references. However, it's an in-memory data structure > and we never used it for performance-critical stuff, so we don't > know how the performance is. It might be interesting because it's > convenient and failure-proof. One could switch to a streaming API > (SAX or StAX like) later. > > Also, I recommend working with external references and encoding > the geometry e.g. in JSON. There is Jackson [1], which is very > convenient for JSON serialization. I think it also has a streaming > API. > > Just contact me if you are interested in the EMF data structure. > > Best, > Kristian > > > [1] http://jackson.codehaus.org/ > > > > Hi, > Do you know if there is a Java lib for writing XML3D content? > We'd like to use it in the GeoServer W3DS module to create XML3D > content based on data read from the database. > > Best regards, > Sami > > > > -- > _______________________________________________________________________________ > > Kristian Sons > Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI > Agenten und Simulierte Realität > Campus, Geb. D 3 2, Raum 0.77 > 66123 Saarbrücken, Germany > > Phone: +49 681 85775-3833 > Phone: +49 681 302-3833 > Fax: +49 681 85775-2235 > kristian.sons at dfki.de > http://www.xml3d.org > > Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster > (Vorsitzender) > Dr. Walter Olthoff > > Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A.
Aukes > Amtsgericht Kaiserslautern, HRB 2313 > _______________________________________________________________________________ > > -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realität Campus, Geb. D 3 2, Raum 0.77 66123 Saarbrücken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775-2235 kristian.sons at dfki.de http://www.xml3d.org Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Philipp.Slusallek at dfki.de Thu Oct 24 16:16:12 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Thu, 24 Oct 2013 16:16:12 +0200 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> Message-ID: <52692BAC.8050600@dfki.de> Hi, I am not sure I fully understand what you are saying. But then I might not understand the structure you have in mind from the RealXtend side. Can you elaborate a bit? From my point of view POIs are not necessarily scene elements themselves but rather data sources that get used in the scene. Maybe that is the main difference between us here. From an XML3D POV things could actually be quite "easy". It should be rather simple to directly interface to the IoT GEs of FI-WARE through REST via a new Xflow element. This would then make the data available through elements. Then you can use all the features of Xflow to manipulate the scene based on the data.
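As a rough illustration of that idea (this is not the actual xml3d.js/Xflow API; the function and field names below are hypothetical), the core of such an element would be a pure mapping step that flattens REST-delivered sensor readings into typed arrays a declarative scene description could bind to:

```javascript
// Sketch only: not xml3d.js/Xflow API. Shows the kind of data mapping a
// REST-fed Xflow operator could perform: turn a list of sensor readings
// into flat typed arrays for positions and per-point intensities.
function mapReadingsToScene(readings) {
  const position = new Float32Array(readings.length * 3);
  const intensity = new Float32Array(readings.length);
  readings.forEach((r, i) => {
    position[i * 3] = r.lon;      // x from longitude
    position[i * 3 + 1] = 0;      // y: ground plane
    position[i * 3 + 2] = r.lat;  // z from latitude
    intensity[i] = r.value / 100; // normalize sensor value to [0, 1]
  });
  return { position, intensity };
}

// Example: two hypothetical temperature sensors in Oulu
const scene = mapReadingsToScene([
  { lat: 65.01, lon: 25.47, value: 50 },
  { lat: 65.02, lon: 25.49, value: 75 },
]);
console.log(scene.intensity); // values 0.5 and 0.75
```

In a real Xflow setup the output arrays would feed data elements in the document rather than being returned to the caller; the sketch only isolates the mapping itself.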
For example, we are discussing building a set of visualization nodes that implement common visualization metaphors, such as scatter plots, animations, you name it. A new member of the lab starting soon wants to look into this area. For acting on objects we have always used Web services attached to the XML3D objects via DOM events. Eventually, I believe we want a higher-level input handling and processing framework, but no one knows so far how this should look (we have some ideas but they are not well baked; any input is highly welcome here). This might or might not reuse some of the Xflow mechanisms. But how to implement Real-Virtual Interaction is indeed an interesting discussion. Getting us all on the same page and sharing ideas and implementations is very helpful. Doing this on the same SW platform (without the fork that we currently have) would facilitate a powerful implementation even more. Thanks Philipp Am 23.10.2013 08:02, schrieb Tomi Sarni: > ->Philipp > /I did not get the idea why POIs are similar to ECA. At a very high > level I see it, but I am not sure what it buys us. Can someone sketch > that picture in some more detail?/ > > Well I suppose it becomes relevant at the point when we are combining our > GEs together. If the model can be applied at the level of the scene, then down to > a POI in a scene and further down at the sensor level, things can be > more easily visualized. Not just in terms of painting 3D models but in > terms of handling big data as well, more specifically handling > relationships/inheritance. It also makes it easier > to design a RESTful API, as we have a common structure to follow, > and also provides more opportunities for 3rd party developers to make > use of the data for their own purposes. > > For instance > > ->Toni > > From the point of view of sensors, the entity-component becomes > device-sensors/actuators.
A device may have a unique identifier and IP > by which to access it, but it may also contain several actuators and > sensors > that are components of that device entity. Sensors/actuators themselves > are not aware of whom they are interesting to. One client may use the > sensor information differently from another client. The sensor/actuator service > allows any other service to query using a request/response method, either > by geo-coordinates (circle, square or complex shape queries) or perhaps > through type+maxresults, and the service will return entities and their > components, > from which the requester can form logical groups (arrays of entity UUIDs) > and query more detailed information based on that logical group. > > I guess there needs to be similar thinking done at the POI level. I guess a > POI does not know which scene it belongs to. It is up to the scene server to > form a logical group of POIs (e.g. restaurants of the Oulu 3D city model). Then > again, the problem is that the scene needs to wait for the POI to query for > sensors and form its logical groups before it can pass information to the > scene. This can lead to long wait times. But this sequencing problem is > also something > that could be thought about. Anyway, this is a common problem with everything > on the web at the moment, in my opinion. Services become intertwined. When a > client loads a web page there can be queries to 20 different services > for advertisement and other stuff. The web page handles it by painting stuff > to the client on a receive basis. I think this could be applied in the Scene > as well. > > > > > > On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek > > wrote: > > Hi, > > First of all, it's certainly a good thing to also meet locally. I was > just a bit confused whether that meeting somehow would involve us as > well. Summarizing the results briefly for the others would > definitely be interesting. > > I did not get the idea why POIs are similar to ECA. At a very high > level I see it, but I am not sure what it buys us.
Can someone > sketch that picture in some more detail? > > BTW, what is the status with the Rendering discussion (Three.js vs. > xml3d.js)? I still have the feeling that we are doing parallel work > here that should probably be avoided. > > BTW, as part of our shading work (which is shaping up nicely) Felix > has lately been looking at a way to describe rendering stages > (passes) essentially through Xflow. It is still very experimental > but he is using it to implement shadow maps right now. > > @Felix: Once this has converged into a bit more stable idea, it > would be good to post it here to get feedback. The way we > discussed it, this approach could form a nice basis for a modular > design of advanced rasterization techniques (reflection maps, adv. > face rendering, SSAO, lens flare, tone mapping, etc.), and (later) > maybe also describe global illumination settings (similar to our > work on LightingNetworks some years ago). > > > Best, > > Philipp > > Am 22.10.2013 23:03, schrieb toni at playsign.net > : > > Just a brief note: we had some interesting preliminary discussion > triggered by how the data schema that Ari O. presented for the POI > system seemed at least partly similar to what the Real-Virtual > interaction work had resulted in too -- and in fact about how the > proposed POI schema was basically a version of the entity-component > model which we've already been using for scenes in realXtend (it is > inspired by / modeled after it, Ari told). So it can be much > related to > the Scene API work in the Synchronization GE too. As the action > point we > agreed that Ari will organize a specific work session on that. > I was now thinking that it perhaps at least partly leads back to the > question: how do we define (and implement) component types. I.e. > what > was mentioned in that entity-system post a few weeks back (with > links > to reX IComponent etc.).
I mean: if functionality such as POIs and > real-world interaction makes sense as somehow resulting in custom data > component types, does it mean that a key part of the framework > is a way > for those systems to declare their types .. so that it > integrates nicely > for the whole we want? I'm not sure, too tired to think it > through now, > but anyhow just wanted to mention that this was one topic that > came up. > I think Web Components is again something to check - as in XML > terms reX > Components are xml(3d) elements .. just ones that are usually in > a group > (according to the reX entity <-> xml3d group mapping). And Web > Components are about defining & implementing new elements (as Erno > pointed out in a different discussion about xml-html authoring > in the > session). > BTW thanks Kristian for the great comments in that entity system > thread - it was really good to learn about the alternative > attribute access > syntax and the validation in XML3D(.js). > ~Toni > P.S. for (Christof &) the DFKI folks: I'm sure you understand the > rationale of these Oulu meets -- the idea is ofc not to exclude you > from the > talks, it just makes sense for us to meet live too as we are in > the same > city after all etc -- naturally with the DFKI team you also talk > there > locally. Perhaps it is a good idea that we make notes so that we can > post them e.g. > here then (I'm not volunteering though!). Also, the now agreed > bi-weekly setup on Tuesdays luckily works so that we can then > summarize > fresh in the global Wed meetings and continue the talks etc. > *From:* Erno Kuusela > *Sent:* Tuesday, October 22, 2013 9:57 AM > *To:* Fiware-miwi > > > Kari from CIE offered to host it this time, so see you there at > 13:00.
> > Erno > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Geschäftsführung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed...
Name: slusallek.vcf Type: text/x-vcard Size: 456 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Thu Oct 24 18:24:42 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Thu, 24 Oct 2013 18:24:42 +0200 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> Message-ID: <526949CA.5080402@dfki.de> Hi, Good discussion! Am 24.10.2013 17:37, schrieb Toni Alatalo: > It was just an observation that the proposed format for the POI data > provider is basically identical to what we've been using in the scene > server too - just with new component types for the POI data. So it > seemed worthwhile to think about the situation. Absolutely. > I do find now that it's also an interesting question how the POI data > integrates into the scene system too - for example if a scene server > queries POI services, does it then only use the data to manipulate > the scene using other non-POI components, or does it often make sense > also to include POI components in the scene so that the clients get > them too automatically with the scene sync and can for example provide > POI-specific GUI tools. Ofc clients can query POI services directly > too but this server-centric setup is also one scenario and there the > scene integration might make sense. But I would say that there is a clear distinction between the POI data (which you query from some service) and the visualization or representation of the POI data. Maybe you are more talking about the latter here. However, there really is an application-dependent mapping from the POI data to its representation. Each application may choose to present the same POI data in a very different way, and it's only this resulting representation that becomes part of the scene.
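To make the data/representation split concrete, here is a small sketch. The component and field names are hypothetical, not taken from the POI or Synchronization GE specs: the same POI record goes through an application-chosen mapping stage, and only the mapping's output would become part of the scene.

```javascript
// Hypothetical sketch of the distinction discussed above: a POI as plain
// entity-component data (queried from some service), and one application's
// mapping that turns it into a scene representation. Another application
// could map the same record to a 2D map pin instead.
const poi = {
  id: "poi-restaurant-42",
  components: {
    location: { lat: 65.012, lon: 25.465 },
    poiInfo: { category: "restaurant", name: "Example Bistro" },
  },
};

// Mapping stage: POI data in, scene entity out. Only this result is synced
// into the scene; the raw POI record stays with the POI service.
function mapPoiToScene(p) {
  return {
    entity: p.id,
    components: {
      placeable: { x: p.components.location.lon, y: 0, z: p.components.location.lat },
      mesh: { ref: p.components.poiInfo.category + ".xml" }, // per-category marker model
    },
  };
}

console.log(mapPoiToScene(poi).components.mesh.ref); // "restaurant.xml"
```

The point of keeping the mapping as a separate function is exactly the one made above: the POI data itself never dictates its visualization, so two applications can share the data source and still render it differently.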
This is essentially the Mapping stage of the well-known Visualization pipeline (http://www.infovis-wiki.net/index.php/Visualization_Pipeline), except that here we also map interaction aspects to an abstract scene description (XML3D) first, which then performs the rendering and interaction. So you can think of this as an additional "Scene" stage between "Mapping" and "Rendering". > I think this is a different topic, but also with real-virtual > interaction for example how to facilitate nice simple authoring of > the e.g. real-virtual object mappings seems a fruitful enough angle > to think a bit, perhaps as a case to help in understanding the entity > system & the different servers etc. For example if there's a > component type 'real world link', the Interface Designer GUI shows it > automatically in the list of components, ppl can just add them to > their scenes and somehow then the system just works.. I am not sure what you are getting at. But it would be great if the Interface Designer would allow choosing such POI mappings from a predefined catalog. It seems that Xflow can be used nicely for generating the mapped scene elements from some input data, e.g. using the same approach we use to provide basic primitives like cubes or spheres in XML3D. Here they are not fixed, built-in tags as in X3D but can actually be added by the developer as it best fits. For generating more complex subgraphs we may have to extend the current Xflow implementation. But it's at least a great starting point to experiment with it. Experiments and feedback would be very welcome here. > I don't think these discussions are now hurt by us (currently) having > alternative renderers - the entity system, formats, sync and the > overall architecture is the same anyway. Well, some things only work in one and others only in the other branch.
So the above mechanism could not be used to visualize POIs in the three.js branch, but we do not have all the features to visualize Oulu (or whatever city) in the xml3d.js branch. This definitely IS greatly limiting how we can combine the GEs into more complex applications -- the ultimate goal of the orthogonal design of this chapter. And it does not even work within the same chapter. It will be hard to explain to Juanjo and others from FI-WARE (or the commission for that matter). BTW, I just learned today that there is a smaller FI-WARE review coming up soon. Let's see if we already have to present things there. So far they have not explicitly asked us. Best, Philipp > -Toni > > >> From an XML3D POV things could actually be quite "easy". It should be rather simple to directly interface to the IoT GEs of FI-WARE through REST via a new Xflow element. This would then make the data available through elements. Then you can use all the features of Xflow to manipulate the scene based on the data. For example, we are discussing building a set of visualization nodes that implement common visualization metaphors, such as scatter plots, animations, you name it. A new member of the lab starting soon wants to look into this area. >> >> For acting on objects we have always used Web services attached to the XML3D objects via DOM events. Eventually, I believe we want a higher-level input handling and processing framework, but no one knows so far how this should look (we have some ideas but they are not well baked; any input is highly welcome here). This might or might not reuse some of the Xflow mechanisms. >> >> But how to implement Real-Virtual Interaction is indeed an interesting discussion. Getting us all on the same page and sharing ideas and implementations is very helpful. Doing this on the same SW platform (without the fork that we currently have) would facilitate a powerful implementation even more.
>> >> >> Thanks >> >> Philipp >> >> Am 23.10.2013 08:02, schrieb Tomi Sarni: >>> ->Philipp >>> /I did not get the idea why POIs are similar to ECA. At a very high >>> level I see it, but I am not sure what it buys us. Can someone sketch >>> that picture in some more detail?/ >>> >>> Well I suppose it becomes relevant at the point when we are combining our >>> GEs together. If the model can be applied at the level of the scene, then down to >>> a POI in a scene, and further down at the sensor level, things can be >>> more easily visualized. Not just in terms of painting 3D models but in >>> terms of handling big data as well, more specifically handling >>> relationships/inheritance. It also makes it easier >>> to design a RESTful API, as we have a common structure to follow, >>> and also provides more opportunities for 3rd-party developers to make >>> use of the data for their own purposes. >>> >>> For instance >>> >>> ->Toni >>> >>> From the point of sensors, the entity-component becomes >>> device-sensors/actuators. A device may have a unique identifier and IP >>> by which to access it, but it may also contain several actuators and >>> sensors >>> that are components of that device entity. Sensors/actuators themselves >>> are not aware of whom they are interesting to. One client may use the >>> sensor information differently from another client. The sensor/actuator service >>> allows any other service to query using a request/response method, either >>> by geo-coordinates (circle, square or complex shape queries) or perhaps >>> through type+maxresults, and the service will return entities and their >>> components, >>> from which the requester can form logical groups (array of entity UUIDs) >>> and query more detailed information based on that logical group. >>> >>> I guess there needs to be similar thinking done on the POI level. I guess >>> a POI does not know which scene it belongs to. It is up to the scene server to >>> form a logical group of POIs (e.g. restaurants of the Oulu 3D city model). 
Then >>> again the problem is that the scene needs to wait for the POI to query for >>> sensors and form its logical groups before it can pass information to >>> the scene. This can lead to long wait times. But this sequencing problem is >>> also something >>> that could be thought about. Anyway, this is a common problem with everything >>> on the web at the moment in my opinion. Services become intertwined. When a >>> client loads a web page there can be queries to 20 different services >>> for advertisement and other stuff. The web page handles it by painting stuff >>> to the client on a receive basis. I think this could be applied in the Scene >>> as well. >>> >>> >>> >>> >>> >>> On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek >>> > wrote: >>> >>> Hi, >>> >>> First of all, it's certainly a good thing to also meet locally. I was >>> just a bit confused whether that meeting somehow would involve us as >>> well. Summarizing the results briefly for the others would >>> definitely be interesting. >>> >>> I did not get the idea why POIs are similar to ECA. At a very high >>> level I see it, but I am not sure what it buys us. Can someone >>> sketch that picture in some more detail? >>> >>> BTW, what is the status with the Rendering discussion (Three.js vs. >>> xml3d.js)? I still have the feeling that we are doing parallel work >>> here that should probably be avoided. >>> >>> BTW, as part of our shading work (which is shaping up nicely) Felix >>> has been looking lately at a way to describe rendering stages >>> (passes) essentially through Xflow. It is still very experimental >>> but he is using it to implement shadow maps right now. >>> >>> @Felix: Once this has converged into a bit more stable idea, it >>> would be good to post this here to get feedback. The way we >>> discussed it, this approach could form a nice basis for a modular >>> design of advanced rasterization techniques (reflection maps, adv. 
>>> face rendering, SSAO, lens flare, tone mapping, etc.), and (later) >>> maybe also describe global illumination settings (similar to our >>> work on LightingNetworks some years ago). >>> >>> >>> Best, >>> >>> Philipp >>> >>> Am 22.10.2013 23:03, schrieb toni at playsign.net >>> : >>> >>> Just a brief note: we had some interesting preliminary discussion >>> triggered by how the data schema that Ari O. presented for the POI >>> system seemed at least partly similar to what the Real-Virtual >>> interaction work had resulted in too -- and in fact about how the >>> proposed POI schema was basically a version of the entity-component >>> model which we've already been using for scenes in realXtend (it is >>> inspired by / modeled after it, Ari told). So it can be much >>> related to >>> the Scene API work in the Synchronization GE too. As the action >>> point we >>> agreed that Ari will organize a specific work session on that. >>> I was now thinking that it perhaps at least partly leads back to the >>> question: how do we define (and implement) component types. I.e. >>> what >>> was mentioned in that entity-system post a few weeks back (with >>> links >>> to reX IComponent etc.). I mean: if functionality such as POIs and >>> realworld interaction make sense as somehow resulting in custom data >>> component types, does it mean that a key part of the framework >>> is a way >>> for those systems to declare their types .. so that it >>> integrates nicely >>> for the whole we want? I'm not sure, too tired to think it >>> through now, >>> but anyhow just wanted to mention that this was one topic that >>> came up. >>> I think Web Components is again something to check - as in XML >>> terms reX >>> Components are xml(3d) elements .. just ones that are usually in >>> a group >>> (according to the reX entity <-> xml3d group mapping). 
And Web >>> Components are about defining & implementing new elements (as Erno >>> pointed out in a different discussion about xml-html authoring >>> in the >>> session). >>> BTW Thanks Kristian for the great comments in that entity system >>> thread - was really good to learn about the alternative >>> attribute access >>> syntax and the validation in XML3D(.js). >>> ~Toni >>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >>> rationale of these Oulu meets -- idea is ofc not to exclude you >>> from the >>> talks but just makes sense for us to meet live too as we are in >>> the same >>> city after all etc -- naturally with the DFKI team you also talk >>> there >>> locally. Perhaps it is a good idea that we make notes so that we can >>> post e.g. >>> here then (I'm not volunteering though!). Also, the now agreed >>> bi-weekly setup on Tuesdays luckily works so that we can then >>> summarize >>> fresh in the global Wed meetings and continue the talks etc. >>> *From:* Erno Kuusela >>> *Sent:* Tuesday, October 22, 2013 9:57 AM >>> *To:* Fiware-miwi >>> >>> >>> Kari from CIE offered to host it this time, so see you there at >>> 13:00. >>> >>> Erno >>> _________________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> >>> >>> _________________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> >>> >>> >>> -- >>> >>> ------------------------------------------------------------------------- >>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>> >>> Geschäftsführung: >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>> Dr. Walter Olthoff >>> Vorsitzender des Aufsichtsrats: >>> Prof. Dr. h.c. Hans A. 
Aukes >>> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>> --------------------------------------------------------------------------- >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> >> >> >> -- >> >> ------------------------------------------------------------------------- >> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >> Trippstadter Strasse 122, D-67663 Kaiserslautern >> >> Geschäftsführung: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> Dr. Walter Olthoff >> Vorsitzender des Aufsichtsrats: >> Prof. Dr. h.c. Hans A. Aukes >> >> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >> --------------------------------------------------------------------------- >> -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... 
Name: slusallek.vcf Type: text/x-vcard Size: 456 bytes Desc: not available URL: From toni.alatalo at gmail.com Thu Oct 24 17:37:16 2013 From: toni.alatalo at gmail.com (Toni Alatalo) Date: Thu, 24 Oct 2013 18:37:16 +0300 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: <52692BAC.8050600@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> Message-ID: Just a quick note .. Waiting for folks in a car in a parking place On 24.10.2013, at 17.16, Philipp Slusallek wrote: > I am not sure I fully understand what you are saying. But then I might not understand the structure you have in mind from the RealXtend side. Can you elaborate a bit? > > From my point of view POIs are not necessarily scene elements themselves but rather data sources that get used in the scene. Maybe that is the main difference between us here. No, there is no difference - that is exactly the same. It was just an observation that the proposed format for the POI data provider is basically identical to what we've been using in the scene server too - just with new component types for the POI data. So it seemed worthwhile to think about the situation. I do find now that it's also an interesting question how the POI data integrates to the scene system too - for example if a scene server queries POI services, does it then only use the data to manipulate the scene using other non-POI components, or does it often make sense also to include POI components in the scene so that the clients get it too automatically with the scene sync and can for example provide POI specific GUI tools. Ofc clients can query POI services directly too but this server centric setup is also one scenario and there the scene integration might make sense. 
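[Editor's sketch] The server-centric scenario just described -- a scene server including POI components in the scene so clients receive them automatically via scene sync -- can be illustrated with a toy entity-component model. Names like Scene, add_component and flush_sync are purely illustrative, not the actual realXtend/Synchronization GE API:

```python
# Toy "scene as collection of all entities" with a naive sync queue.
# A POI component added server-side ends up in the change set that a
# synchronization server would push to connected clients.

class Entity:
    def __init__(self, eid):
        self.id = eid
        self.components = {}  # component type name -> attribute dict

class Scene:
    """'Scene' here means the collection of all entities, visual or not."""

    def __init__(self):
        self.entities = {}
        self.pending = []  # changes queued for synchronization

    def create_entity(self, eid):
        ent = Entity(eid)
        self.entities[eid] = ent
        return ent

    def add_component(self, eid, ctype, attrs):
        self.entities[eid].components[ctype] = attrs
        self.pending.append(("add_component", eid, ctype, attrs))

    def flush_sync(self):
        """What the sync server would deliver to clients, then reset."""
        out, self.pending = self.pending, []
        return out

scene = Scene()
scene.create_entity("e1")
scene.add_component("e1", "Mesh", {"ref": "restaurant-model"})
scene.add_component("e1", "POI", {"name": "Restaurant", "lat": 65.01, "lon": 25.47})
changes = scene.flush_sync()
```

Note that the POI component travels through the same change queue as the Mesh component: that is the point of the mail, i.e. POI data need not end up in the visual scene to be part of the synchronized application state.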
I think this is a different topic, but also with real-virtual interaction, for example how to facilitate nice, simple authoring of e.g. the real-virtual object mappings seems a fruitful enough angle to think about a bit, perhaps as a case to help in understanding the entity system & the different servers etc. For example, if there's a component type 'real world link', the Interface Designer GUI shows it automatically in the list of components, people can just add them to their scenes and somehow then the system just works.. I don't think these discussions are now hurt by us (currently) having alternative renderers - the entity system, formats, sync and the overall architecture is the same anyway. -Toni > From an XML3D POV things could actually be quite "easy". It should be rather simple to directly interface to the IoT GEs of FI-WARE through REST via a new Xflow element. This would then make the data available through elements. Then you can use all the features of Xflow to manipulate the scene based on the data. For example, we are discussing building a set of visualization nodes that implement common visualization metaphors, such as scatter plots, animations, you name it. A new member of the lab starting soon wants to look into this area. > > For acting on objects we have always used Web services attached to the XML3D objects via DOM events. Eventually, I believe we want a higher-level input handling and processing framework, but no one knows so far how this should look (we have some ideas but they are not well baked; any input is highly welcome here). This might or might not reuse some of the Xflow mechanisms. > > But how to implement Real-Virtual Interaction is indeed an interesting discussion. Getting us all on the same page and sharing ideas and implementations is very helpful. Doing this on the same SW platform (without the fork that we currently have) would facilitate a powerful implementation even more. 
> > > Thanks > > Philipp > > Am 23.10.2013 08:02, schrieb Tomi Sarni: >> ->Philipp >> /I did not get the idea why POIs are similar to ECA. At a very high >> level I see it, but I am not sure what it buys us. Can someone sketch >> that picture in some more detail?/ >> >> Well I suppose it becomes relevant at the point when we are combining our >> GEs together. If the model can be applied at the level of the scene, then down to >> a POI in a scene, and further down at the sensor level, things can be >> more easily visualized. Not just in terms of painting 3D models but in >> terms of handling big data as well, more specifically handling >> relationships/inheritance. It also makes it easier >> to design a RESTful API, as we have a common structure to follow, >> and also provides more opportunities for 3rd-party developers to make >> use of the data for their own purposes. >> >> For instance >> >> ->Toni >> >> From the point of sensors, the entity-component becomes >> device-sensors/actuators. A device may have a unique identifier and IP >> by which to access it, but it may also contain several actuators and >> sensors >> that are components of that device entity. Sensors/actuators themselves >> are not aware of whom they are interesting to. One client may use the >> sensor information differently from another client. The sensor/actuator service >> allows any other service to query using a request/response method, either >> by geo-coordinates (circle, square or complex shape queries) or perhaps >> through type+maxresults, and the service will return entities and their >> components, >> from which the requester can form logical groups (array of entity UUIDs) >> and query more detailed information based on that logical group. >> >> I guess there needs to be similar thinking done on the POI level. I guess >> a POI does not know which scene it belongs to. It is up to the scene server to >> form a logical group of POIs (e.g. restaurants of the Oulu 3D city model). 
Then >> again the problem is that the scene needs to wait for the POI to query for >> sensors and form its logical groups before it can pass information to >> the scene. This can lead to long wait times. But this sequencing problem is >> also something >> that could be thought about. Anyway, this is a common problem with everything >> on the web at the moment in my opinion. Services become intertwined. When a >> client loads a web page there can be queries to 20 different services >> for advertisement and other stuff. The web page handles it by painting stuff >> to the client on a receive basis. I think this could be applied in the Scene >> as well. >> >> >> >> >> >> On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek >> > wrote: >> >> Hi, >> >> First of all, it's certainly a good thing to also meet locally. I was >> just a bit confused whether that meeting somehow would involve us as >> well. Summarizing the results briefly for the others would >> definitely be interesting. >> >> I did not get the idea why POIs are similar to ECA. At a very high >> level I see it, but I am not sure what it buys us. Can someone >> sketch that picture in some more detail? >> >> BTW, what is the status with the Rendering discussion (Three.js vs. >> xml3d.js)? I still have the feeling that we are doing parallel work >> here that should probably be avoided. >> >> BTW, as part of our shading work (which is shaping up nicely) Felix >> has been looking lately at a way to describe rendering stages >> (passes) essentially through Xflow. It is still very experimental >> but he is using it to implement shadow maps right now. >> >> @Felix: Once this has converged into a bit more stable idea, it >> would be good to post this here to get feedback. The way we >> discussed it, this approach could form a nice basis for a modular >> design of advanced rasterization techniques (reflection maps, adv. 
>> face rendering, SSAO, lens flare, tone mapping, etc.), and (later) >> maybe also describe global illumination settings (similar to our >> work on LightingNetworks some years ago). >> >> >> Best, >> >> Philipp >> >> Am 22.10.2013 23:03, schrieb toni at playsign.net >> : >> >> Just a brief note: we had some interesting preliminary discussion >> triggered by how the data schema that Ari O. presented for the POI >> system seemed at least partly similar to what the Real-Virtual >> interaction work had resulted in too -- and in fact about how the >> proposed POI schema was basically a version of the entity-component >> model which we've already been using for scenes in realXtend (it is >> inspired by / modeled after it, Ari told). So it can be much >> related to >> the Scene API work in the Synchronization GE too. As the action >> point we >> agreed that Ari will organize a specific work session on that. >> I was now thinking that it perhaps at least partly leads back to the >> question: how do we define (and implement) component types. I.e. >> what >> was mentioned in that entity-system post a few weeks back (with >> links >> to reX IComponent etc.). I mean: if functionality such as POIs and >> realworld interaction make sense as somehow resulting in custom data >> component types, does it mean that a key part of the framework >> is a way >> for those systems to declare their types .. so that it >> integrates nicely >> for the whole we want? I'm not sure, too tired to think it >> through now, >> but anyhow just wanted to mention that this was one topic that >> came up. >> I think Web Components is again something to check - as in XML >> terms reX >> Components are xml(3d) elements .. just ones that are usually in >> a group >> (according to the reX entity <-> xml3d group mapping). And Web >> Components are about defining & implementing new elements (as Erno >> pointed out in a different discussion about xml-html authoring >> in the >> session). 
>> BTW Thanks Kristian for the great comments in that entity system >> thread - was really good to learn about the alternative >> attribute access >> syntax and the validation in XML3D(.js). >> ~Toni >> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >> rationale of these Oulu meets -- idea is ofc not to exclude you >> from the >> talks but just makes sense for us to meet live too as we are in >> the same >> city after all etc -- naturally with the DFKI team you also talk >> there >> locally. Perhaps it is a good idea that we make notes so that we can >> post e.g. >> here then (I'm not volunteering though!). Also, the now agreed >> bi-weekly setup on Tuesdays luckily works so that we can then >> summarize >> fresh in the global Wed meetings and continue the talks etc. >> *From:* Erno Kuusela >> *Sent:* Tuesday, October 22, 2013 9:57 AM >> *To:* Fiware-miwi >> >> >> Kari from CIE offered to host it this time, so see you there at >> 13:00. >> >> Erno >> _________________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> >> >> _________________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> >> >> >> -- >> >> ------------------------------------------------------------------------- >> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >> Trippstadter Strasse 122, D-67663 Kaiserslautern >> >> Geschäftsführung: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> Dr. Walter Olthoff >> Vorsitzender des Aufsichtsrats: >> Prof. Dr. h.c. Hans A. 
Aukes >> >> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >> --------------------------------------------------------------------------- >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Geschäftsführung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > From toni at playsign.net Thu Oct 24 19:24:08 2013 From: toni at playsign.net (Toni Alatalo) Date: Thu, 24 Oct 2013 20:24:08 +0300 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: <526949CA.5080402@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> Message-ID: On 24 Oct 2013, at 19:24, Philipp Slusallek wrote: > Good discussion! I find so too -- thanks for the questions and comments and all! 
Now briefly about just one point: > Am 24.10.2013 17:37, schrieb Toni Alatalo: >> integrates to the scene system too - for example if a scene server >> queries POI services, does it then only use the data to manipulate >> the scene using other non-POI components, or does it often make sense >> also to include POI components in the scene so that the clients get >> it too automatically with the scene sync and can for example provide >> POI specific GUI tools. Ofc clients can query POI services directly >> too but this server centric setup is also one scenario and there the >> scene integration might make sense. > But I would say that there is a clear distinction between the POI data (which you query from some service) and the visualization or representation of the POI data. Maybe you are more talking about the latter here. However, there really is an application-dependent mapping from the POI data to its representation. Each application may choose to present the same POI data in a very different way, and it's only this resulting representation that becomes part of the scene. No, I was not talking about visualization or representations here but the POI data. "non-POI" in the above tried to refer to the whole which covers visualisations etc :) Your last sentence may help to understand the confusion: in these posts I've been using the reX entity system terminology only -- hoping that it is clear to discuss that way and not mix terms (like I've tried to do in some other threads). There "scene" does not refer to a visual / graphical or any other type of scene. It does not refer to e.g. something like what xml3d.js and three.js, or Ogre, have as their Scene objects. It simply means the collection of all entities. There it is perfectly valid to have any kind of data which does not end up in e.g. the visual scene -- many components are like that. So in the above "only use the data to manipulate the scene using other non-POI components" 
was referring to, for example, creation of Mesh components if some POI is to be visualised that way. The mapping that you were discussing. But my point was not about that but about the POI data itself -- and the example about some end-user GUI with a widget that manipulates it. So it then gets automatically synchronised along with all the other data in the application in a collaborative setting etc. Stepping out of the previous terminology, we could perhaps translate: "scene" -> "application state" and "scene server" -> "synchronization server". I hope this clarifies something -- my apologies if not.. Cheers, ~Toni P.S. I sent the previous post from a foreign device and accidentally with my gmail address as sender so it didn't make it to the list -- so thank you for quoting it in full so I don't think we need to repost that :) > This is essentially the Mapping stage of the well-known Visualization pipeline (http://www.infovis-wiki.net/index.php/Visualization_Pipeline), except that here we also map interaction aspects to an abstract scene description (XML3D) first, which then performs the rendering and interaction. So you can think of this as an additional "Scene" stage between "Mapping" and "Rendering". > >> I think this is a different topic, but also with real-virtual >> interaction for example how to facilitate nice simple authoring of >> the e.g. real-virtual object mappings seems a fruitful enough angle >> to think a bit, perhaps as a case to help in understanding the entity >> system & the different servers etc. For example if there's a >> component type 'real world link', the Interface Designer GUI shows it >> automatically in the list of components, people can just add them to >> their scenes and somehow then the system just works.. > > I am not sure what you are getting at. But it would be great if the Interface Designer would allow choosing such POI mappings from a predefined catalog. 
It seems that Xflow can be used nicely for generating the mapped scene elements from some input data, e.g. using the same approach we use to provide basic primitives like cubes or spheres in XML3D. Here they are not fixed, built-in tags as in X3D but can actually be added by the developer as best fits. > > For generating more complex subgraphs we may have to extend the current Xflow implementation. But it's at least a great starting point to experiment with it. Experiments and feedback would be very welcome here. > >> I don't think these discussions are now hurt by us (currently) having >> alternative renderers - the entity system, formats, sync and the >> overall architecture is the same anyway. > > Well, some things only work in one and others only in the other branch. So the above mechanism could not be used to visualize POIs in the three.js branch but we do not have all the features to visualize Oulu (or whatever city) in the XML3D.js branch. This definitely IS greatly limiting how we can combine the GEs into more complex applications -- the ultimate goal of the orthogonal design of this chapter. > > And it does not even work within the same chapter. It will be hard to explain to Juanjo and others from FI-WARE (or the commission for that matter). > > BTW, I just learned today that there is a smaller FI-WARE review coming up soon. Let's see if we already have to present things there. So far they have not explicitly asked us. > > > Best, > > Philipp > >> -Toni >> >> >>> From an XML3D POV things could actually be quite "easy". It should be rather simple to directly interface to the IoT GEs of FI-WARE through REST via a new Xflow element. This would then make the data available through elements. Then you can use all the features of Xflow to manipulate the scene based on the data. For example, we are discussing building a set of visualization nodes that implement common visualization metaphors, such as scatter plots, animations, you name it. 
A new member of the lab starting soon wants to look into this area. >>> >>> For acting on objects we have always used Web services attached to the XML3D objects via DOM events. Eventually, I believe we want a higher-level input handling and processing framework, but no one knows so far how this should look (we have some ideas but they are not well baked; any input is highly welcome here). This might or might not reuse some of the Xflow mechanisms. >>> >>> But how to implement Real-Virtual Interaction is indeed an interesting discussion. Getting us all on the same page and sharing ideas and implementations is very helpful. Doing this on the same SW platform (without the fork that we currently have) would facilitate a powerful implementation even more. >>> >>> >>> Thanks >>> >>> Philipp >>> >>> Am 23.10.2013 08:02, schrieb Tomi Sarni: >>>> ->Philipp >>>> /I did not get the idea why POIs are similar to ECA. At a very high >>>> level I see it, but I am not sure what it buys us. Can someone sketch >>>> that picture in some more detail?/ >>>> >>>> Well I suppose it becomes relevant at the point when we are combining our >>>> GEs together. If the model can be applied at the level of the scene, then down to >>>> a POI in a scene, and further down at the sensor level, things can be >>>> more easily visualized. Not just in terms of painting 3D models but in >>>> terms of handling big data as well, more specifically handling >>>> relationships/inheritance. It also makes it easier >>>> to design a RESTful API, as we have a common structure to follow, >>>> and also provides more opportunities for 3rd-party developers to make >>>> use of the data for their own purposes. >>>> >>>> For instance >>>> >>>> ->Toni >>>> >>>> From the point of sensors, the entity-component becomes >>>> device-sensors/actuators. A device may have a unique identifier and IP >>>> by which to access it, but it may also contain several actuators and >>>> sensors >>>> that are components of that device entity. 
Sensors/actuators themselves >>>> are not aware of whom they are interesting to. One client may use the >>>> sensor information differently from another client. The sensor/actuator service >>>> allows any other service to query using a request/response method, either >>>> by geo-coordinates (circle, square or complex shape queries) or perhaps >>>> through type+maxresults, and the service will return entities and their >>>> components, >>>> from which the requester can form logical groups (array of entity UUIDs) >>>> and query more detailed information based on that logical group. >>>> >>>> I guess there needs to be similar thinking done on the POI level. I guess >>>> a POI does not know which scene it belongs to. It is up to the scene server to >>>> form a logical group of POIs (e.g. restaurants of the Oulu 3D city model). Then >>>> again the problem is that the scene needs to wait for the POI to query for >>>> sensors and form its logical groups before it can pass information to >>>> the scene. This can lead to long wait times. But this sequencing problem is >>>> also something >>>> that could be thought about. Anyway, this is a common problem with everything >>>> on the web at the moment in my opinion. Services become intertwined. When a >>>> client loads a web page there can be queries to 20 different services >>>> for advertisement and other stuff. The web page handles it by painting stuff >>>> to the client on a receive basis. I think this could be applied in the Scene >>>> as well. >>>> >>>> >>>> >>>> >>>> >>>> On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek >>>> > wrote: >>>> >>>> Hi, >>>> >>>> First of all, it's certainly a good thing to also meet locally. I was >>>> just a bit confused whether that meeting somehow would involve us as >>>> well. Summarizing the results briefly for the others would >>>> definitely be interesting. >>>> >>>> I did not get the idea why POIs are similar to ECA. At a very high >>>> level I see it, but I am not sure what it buys us. Can someone >>>> sketch that picture in some more detail? 
>>>> >>>> BTW, what is the status with the Rendering discussion (Three.js vs. >>>> xml3d.js)? I still have the feeling that we are doing parallel work >>>> here that should probably be avoided. >>>> >>>> BTW, as part of our shading work (which is shaping up nicely) Felix >>>> has been looking lately at a way to describe rendering stages >>>> (passes) essentially through Xflow. It is still very experimental >>>> but he is using it to implement shadow maps right now. >>>> >>>> @Felix: Once this has converged into a bit more stable idea, it >>>> would be good to post this here to get feedback. The way we >>>> discussed it, this approach could form a nice basis for a modular >>>> design of advanced rasterization techniques (reflection maps, adv. >>>> face rendering, SSAO, lens flare, tone mapping, etc.), and (later) >>>> maybe also describe global illumination settings (similar to our >>>> work on LightingNetworks some years ago). >>>> >>>> >>>> Best, >>>> >>>> Philipp >>>> >>>> Am 22.10.2013 23:03, schrieb toni at playsign.net >>>> : >>>> >>>> Just a brief note: we had some interesting preliminary discussion >>>> triggered by how the data schema that Ari O. presented for the POI >>>> system seemed at least partly similar to what the Real-Virtual >>>> interaction work had resulted in too -- and in fact about how the >>>> proposed POI schema was basically a version of the entity-component >>>> model which we've already been using for scenes in realXtend (it is >>>> inspired by / modeled after it, Ari told). So it can be much >>>> related to >>>> the Scene API work in the Synchronization GE too. As the action >>>> point we >>>> agreed that Ari will organize a specific work session on that. >>>> I was now thinking that it perhaps at least partly leads back to the >>>> question: how do we define (and implement) component types. I.e. >>>> what >>>> was mentioned in that entity-system post a few weeks back (with >>>> links >>>> to reX IComponent etc.).
I mean: if functionality such as POIs and >>>> realworld interaction make sense as somehow resulting in custom data >>>> component types, does it mean that a key part of the framework >>>> is a way >>>> for those systems to declare their types .. so that it >>>> integrates nicely >>>> for the whole we want? I'm not sure, too tired to think it >>>> through now, >>>> but anyhow just wanted to mention that this was one topic that >>>> came up. >>>> I think Web Components is again something to check - as in XML >>>> terms reX >>>> Components are xml(3d) elements .. just ones that are usually in >>>> a group >>>> (according to the reX entity <-> xml3d group mapping). And Web >>>> Components are about defining & implementing new elements (as Erno >>>> pointed out in a different discussion about xml-html authoring >>>> in the >>>> session). >>>> BTW Thanks Kristian for the great comments in that entity system >>>> thread - was really good to learn about the alternative >>>> attribute access >>>> syntax and the validation in XML3D(.js). >>>> ~Toni >>>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >>>> rationale of these Oulu meets -- idea is ofc not to exclude you >>>> from the >>>> talks but just makes sense for us to meet live too as we are in >>>> the same >>>> city after all etc -- naturally with the DFKI team you also talk >>>> there >>>> locally. Perhaps it is a good idea that we make notes so that we can >>>> post them e.g. >>>> here then (I'm not volunteering though! :) . Also, the now agreed >>>> bi-weekly setup on Tuesdays luckily works so that we can then >>>> summarize >>>> fresh in the global Wed meetings and continue the talks etc. >>>> *From:* Erno Kuusela >>>> *Sent:* Tuesday, October 22, 2013 9:57 AM >>>> *To:* Fiware-miwi >>>> >>>> >>>> Kari from CIE offered to host it this time, so see you there at >>>> 13:00.
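Toni's question above -- systems such as POIs declaring their own component types so the framework can integrate them -- can be sketched as a plain component-type registry. In the DOM this role would be played by Web Components defining new (XML3D-ish) elements, as the mail suggests; the registry below is a deliberately framework-free stand-in with invented names:

```javascript
// Minimal sketch: systems (POI, real-virtual interaction, ...) declare
// their own component types; entities are then groups of typed
// components, mirroring the reX entity <-> xml3d group mapping.
const componentTypes = new Map();

function registerComponentType(name, attributes) {
  // Analogous to defining a new element type via Web Components.
  componentTypes.set(name, { name, attributes });
}

function createComponent(name, values = {}) {
  const type = componentTypes.get(name);
  if (!type) throw new Error(`unknown component type: ${name}`);
  // Only declared attributes are accepted -- a crude form of the
  // attribute validation mentioned for XML3D(.js) in the thread.
  const data = {};
  for (const attr of type.attributes) data[attr] = values[attr];
  return { type: name, data };
}

// A POI system declaring its own type, as speculated in the mail:
registerComponentType("poi", ["uuid", "label", "location"]);
const poi = createComponent("poi", {
  uuid: "poi-1", label: "restaurant", location: [65.0, 25.47],
  bogus: 1, // not declared, so silently dropped by the sketch
});
```

The value of such a registry (or of Web Components as its DOM incarnation) is exactly the integration Toni asks about: tools like an Interface Designer can enumerate the declared types without hard-coding them.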
>>>> >>>> Erno >>>> _________________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>>> >>>> _________________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>>> >>>> >>>> -- >>>> >>>> ------------------------------------------------------------------------- >>>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>> >>>> Geschäftsführung: >>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>> Dr. Walter Olthoff >>>> Vorsitzender des Aufsichtsrats: >>>> Prof. Dr. h.c. Hans A. Aukes >>>> >>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>> --------------------------------------------------------------------------- >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>> >>> >>> -- >>> >>> ------------------------------------------------------------------------- >>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>> >>> Geschäftsführung: >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>> Dr. Walter Olthoff >>> Vorsitzender des Aufsichtsrats: >>> Prof. Dr. h.c. Hans A.
Aukes >>> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>> --------------------------------------------------------------------------- >>> > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Geschäftsführung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > From Philipp.Slusallek at dfki.de Thu Oct 24 20:49:57 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Thu, 24 Oct 2013 20:49:57 +0200 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> Message-ID: <52696BD5.8070007@dfki.de> Hi, OK, now I get it. This does make sense -- at least in a local scenario, where the POI data (in this example) needs to be stored somewhere anyway, and storing it in a component and then generating the appropriate visual component does make sense. Using web components or a similar mechanism we could actually do the same via the DOM (as discussed for the general ECA sync before). But even then you might actually not want to store all the POI data but only the part that really matters to the application (there may be much more data -- maybe not for POIs but potentially for other things).
Also in a distributed scenario, I am not so sure. In that case you might want to do that mapping on the server and only sync the resulting data, maybe with a reference back so you can still interact with the original data through a service call. That is the main reason why I in general think of POI data and POI representation as separate entities. Regarding terminology, I think it does make sense to differentiate between the 3D scene and the application state (that is not directly influencing the 3D rendering and interaction). While you store them within the same data entity (but in different components), they still refer to quite different things and are operated on by different parts of your program (e.g. the renderer only ever touches the "scene" data). We do the same within the XML3D core, where we attach renderer-specific data to DOM nodes, and I believe three.js also does something similar within its data structures. In the end, you have to store these things somewhere and there are only so many ways to implement it. The differences are not really that big. Best, Philipp Am 24.10.2013 19:24, schrieb Toni Alatalo: > On 24 Oct 2013, at 19:24, Philipp Slusallek > wrote: >> Good discussion! > > I find so too -- thanks for the questions and comments and all! Now > briefly about just one point: > >> Am 24.10.2013 17:37, schrieb Toni Alatalo: >>> integrates to the scene system too - for example if a scene server >>> queries POI services, does it then only use the data to manipulate >>> the scene using other non-POI components, or does it often make sense >>> also to include POI components in the scene so that the clients get >>> it too automatically with the scene sync and can for example provide >>> POI specific GUI tools. Ofc clients can query POI services directly >>> too but this server centric setup is also one scenario and there the >>> scene integration might make sense.
>> But I would say that there is a clear distinction between the POI data >> (which you query from some service) and the visualization or >> representation of the POI data. Maybe you are more talking about the >> latter here. However, there really is an application dependent mapping >> from the POI data to its representation. Each application may choose >> to present the same POI data in a very different way and it's only this >> resulting representation that becomes part of the scene. > > No, I was not talking about visualization or representations here but the > POI data. > > non-POI in the above tried to refer to the whole which covers > visualisations etc :) > > Your last sentence may help to understand the confusion: in these posts > I've been using the reX entity system terminology only -- hoping that it > is clear to discuss that way and not mix terms (like I've tried to do in > some other threads). > > There "scene" does not refer to a visual / graphical or any other type > of scene. It does not refer to e.g. something like what xml3d.js and > three.js, or ogre, have as their Scene objects. > > It simply means the collection of all entities. There it is perfectly > valid to have any kind of data which does not end up in e.g. the visual scene > -- many components are like that. > > So in the above "only use the data to manipulate the scene using other > non-POI components" was referring to for example the creation of Mesh > components if some POI is to be visualised that way. The mapping that > you were discussing. > > But my point was not about that but about the POI data itself -- and the > example about some end user GUI with a widget that manipulates it. So it > then gets automatically synchronised along with all the other data in > the application in a collaborative setting etc. > > Stepping out of the previous terminology, we could perhaps translate: > "scene" -> "application state" and "scene server" -> "synchronization > server".
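The two positions above -- Toni's "scene" as the collection of all entities (components possibly non-visual) and Philipp's point that only the renderer ever touches the visual data -- fit together in one small sketch. Component names and the sync mechanism are purely illustrative:

```javascript
// An entity is just an id plus components; the "scene" (in reX terms)
// is all entities. Non-visual components (e.g. raw POI data) live
// alongside visual ones (e.g. a mesh derived from the POI data by an
// application-specific mapping).
const scene = [
  { id: 1, components: { poi: { label: "cafe" }, mesh: { ref: "cafe.xml" } } },
  { id: 2, components: { script: { src: "logic.js" } } }, // nothing to render
];

// The renderer only ever touches visual component types...
function renderables(entities) {
  return entities.filter(e => "mesh" in e.components);
}

// ...while synchronization replicates every component, visual or not,
// which is how a POI-editing GUI widget's changes would reach all
// clients "for free" in a collaborative setting.
function syncSnapshot(entities) {
  return JSON.parse(JSON.stringify(entities));
}
```

So the POI data and its representation stay distinct components even when, as Philipp suggests, they are stored in the same entity.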
> > I hope this clarifies something -- my apologies if not.. > > Cheers, > ~Toni > > P.S. I sent the previous post from a foreign device and accidentally > with my gmail address as sender so it didn't make it to the list -- so > thank you for quoting it in full so I don't think we need to repost that :) > >> This is essentially the Mapping stage of the well-known Visualization >> pipeline >> (http://www.infovis-wiki.net/index.php/Visualization_Pipeline), except >> that here we also map interaction aspects to an abstract scene >> description (XML3D) first, which then performs the rendering and >> interaction. So you can think of this as an additional "Scene" stage >> between "Mapping" and "Rendering". >> >>> I think this is a different topic, but also with real-virtual >>> interaction for example how to facilitate nice simple authoring of >>> the e.g. real-virtual object mappings seems a fruitful enough angle >>> to think a bit, perhaps as a case to help in understanding the entity >>> system & the different servers etc. For example if there's a >>> component type 'real world link', the Interface Designer GUI shows it >>> automatically in the list of components, ppl can just add them to >>> their scenes and somehow then the system just works.. >> >> I am not sure what you are getting at. But it would be great if the >> Interface Designer would allow choosing such POI mappings from a >> predefined catalog. It seems that Xflow can be used nicely for >> generating the mapped scene elements from some input data, e.g. using >> the same approach we use to provide basic primitives like cubes or >> spheres in XML3D. Here they are not fixed, built-in tags as in X3D but >> can actually be added by the developer as it best fits. >> >> For generating more complex subgraphs we may have to extend the >> current Xflow implementation. But it's at least a great starting point >> to experiment with it. Experiments and feedback would be very welcome >> here.
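Philipp's idea of feeding REST/IoT data into the scene through an Xflow-style operator, and of visualization nodes for metaphors like scatter plots, might look roughly as below. This is emphatically not the real Xflow API -- just a dataflow-shaped sketch with invented names, where an operator is a pure function from named input fields to named output fields:

```javascript
// A dataflow operator in the spirit of Xflow: map raw POI/sensor
// records (as they might arrive from a REST call to an IoT service)
// to flat position/value arrays that a scatter-plot node could render.
function scatterPlotOperator({ records }) {
  const position = [];
  const value = [];
  for (const r of records) {
    position.push(r.lon, 0, r.lat); // ground-plane placement, y = 0
    value.push(r.reading);          // e.g. mapped later to color/size
  }
  return { position, value };
}

// Downstream, a renderer-side node would consume these arrays; re-running
// the operator whenever the REST data changes keeps the scene up to date.
const out = scatterPlotOperator({
  records: [
    { lat: 65.01, lon: 25.47, reading: 3.5 },
    { lat: 65.02, lon: 25.46, reading: 4.1 },
  ],
});
```

Because the operator is pure, composing several of them into larger mapping subgraphs (the "more complex subgraphs" mentioned above) stays straightforward.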
>> >> >>> I don't think these discussions are now hurt by us (currently) having >>> alternative renderers - the entity system, formats, sync and the >>> overall architecture is the same anyway. >> >> Well, some things only work in one and others only in the other >> branch. So the above mechanism could not be used to visualize POIs in >> the three.js branch, but we do not have all the features to visualize >> Oulu (or whatever city) in the XML3D.js branch. This definitely IS >> greatly limiting how we can combine the GEs into more complex >> applications -- the ultimate goal of the orthogonal design of this >> chapter. >> >> And it does not even work within the same chapter. It will be hard to >> explain to Juanjo and others from FI-WARE (or the commission for that >> matter). >> >> BTW, I just learned today that there is a smaller FI-WARE review >> coming up soon. Let's see if we already have to present things there. >> So far they have not explicitly asked us. >> >> >> Best, >> >> Philipp >> >>> -Toni >>> >>> >>>> From an XML3D POV things could actually be quite "easy". It should >>>> be rather simple to directly interface to the IoT GEs of FI-WARE >>>> through REST via a new Xflow element. This would then make the data >>>> available through elements. Then you can use all the features >>>> of Xflow to manipulate the scene based on the data. For example, we >>>> are discussing building a set of visualization nodes that implement >>>> common visualization metaphors, such as scatter plots, animations, >>>> you name it. A new member of the lab starting soon wants to look >>>> into this area. >>>> >>>> For acting on objects we have always used Web services attached to >>>> the XML3D objects via DOM events. Eventually, I believe we want a >>>> higher level input handling and processing framework but no one >>>> knows yet how this should look (we have some ideas but they >>>> are not well baked, any input is highly welcome here).
This might or >>>> might not reuse some of the Xflow mechanisms. >>>> >>>> But how to implement RealVirtual Interaction is indeed an intersting >>>> discussion. Getting us all on the same page and sharing ideas and >>>> implementations is very helpful. Doing this on the same SW platform >>>> (without the fork that we currently have) would facilitate a >>>> powerful implementation even more. >>>> >>>> >>>> Thanks >>>> >>>> Philipp >>>> >>>> Am 23.10.2013 08:02, schrieb Tomi Sarni: >>>>> ->Philipp >>>>> /I did not get the idea why POIs are similar to ECA. At a very high >>>>> level I see it, but I am not sure what it buys us. Can someone sketch >>>>> that picture in some more detail?/ >>>>> >>>>> Well I suppose it becomes relevant at point when we are combining our >>>>> GEs together. If the model can be applied in level of scene then >>>>> down to >>>>> POI in a scene and further down in sensor level, things can be >>>>> more easily visualized. Not just in terms of painting 3D models but in >>>>> terms of handling big data as well, more specifically handling >>>>> relationships/inheritance. It also makes it easier >>>>> to design a RESTful API as we have a common structure which to follow >>>>> and also provides more opportunities for 3rd party developers to make >>>>> use of the data for their own purposes. >>>>> >>>>> For instance >>>>> >>>>> ->Toni >>>>> >>>>> From point of sensors, the entity-component becomes >>>>> device-sensors/actuators. A device may have an unique identifier and IP >>>>> by which to access it, but it may also contain several actuators and >>>>> sensors >>>>> that are components of that device entity. Sensors/actuators >>>>> themselves >>>>> are not aware to whom they are interesting to. One client may use the >>>>> sensor information differently to other client. 
Sensor/actuator service >>>>> allows any other service to query using request/response method either >>>>> by geo-coordinates (circle,square or complex shape queries) or perhaps >>>>> through type+maxresults and service will return entities and their >>>>> components >>>>> from which the reqester can form logical groups(array of entity uuids) >>>>> and query more detailed information based on that logical group. >>>>> >>>>> I guess there needs to be similar thinking done on POI level. I guess >>>>> POI does not know which scene it belongs to. It is up to scene >>>>> server to >>>>> form a logical group of POIs (e.g. restaurants of oulu 3d city >>>>> model). Then >>>>> again the problem is that scene needs to wait for POI to query for >>>>> sensors and form its logical groups before it can pass information to >>>>> scene. This can lead to long wait times. But this sequencing problem is >>>>> also something >>>>> that could be thought. Anyways this is a common problem with everything >>>>> in web at the moment in my opinnion. Services become intertwined. >>>>> When a >>>>> client loads a web page there can be queries to 20 different services >>>>> for advertisment and other stuff. Web page handles it by painting stuff >>>>> to the client on receive basis. I think this could be applied in Scene >>>>> as well. >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek >>>>> >>>>> > wrote: >>>>> >>>>> Hi, >>>>> >>>>> First of all, its certainly a good thing to also meet locally. I was >>>>> just a bit confused whether that meeting somehow would involve us as >>>>> well. Summarizing the results briefly for the others would >>>>> definitely be interesting. >>>>> >>>>> I did not get the idea why POIs are similar to ECA. At a very high >>>>> level I see it, but I am not sure what it buys us. Can someone >>>>> sketch that picture in some more detail? >>>>> >>>>> BTW, what is the status with the Rendering discussion (Three.js vs. >>>>> xml3d.js)? 
I still have the feeling that we are doing parallel work >>>>> here that should probably be avoided. >>>>> >>>>> BTW, as part of our shading work (which is shaping up nicely) Felix >>>>> has been looking lately at a way to describe rendering stages >>>>> (passes) essentially through Xflow. It is still very experimental >>>>> but he is using it to implement shadow maps right now. >>>>> >>>>> @Felix: Once this has converged into a bit more stable idea, it >>>>> would be good to post this here to get feedback. The way we >>>>> discussed it, this approach could form a nice basis for a modular >>>>> design of advanced rasterization techniques (reflection maps, adv. >>>>> face rendering, SSAO, lens flare, tone mapping, etc.), and (later) >>>>> maybe also describe global illumination settings (similar to our >>>>> work on LightingNetworks some years ago). >>>>> >>>>> >>>>> Best, >>>>> >>>>> Philipp >>>>> >>>>> Am 22.10.2013 23:03, schrieb toni at playsign.net >>>>> >>>>> : >>>>> >>>>> Just a brief note: we had some interesting preliminary >>>>> discussion >>>>> triggered by how the data schema that Ari O. presented for >>>>> the POI >>>>> system seemed at least partly similar to what the Real-Virtual >>>>> interaction work had resulted in too -- and in fact about >>>>> how the >>>>> proposed POI schema was basically a version of the >>>>> entity-component >>>>> model which we?ve already been using for scenes in realXtend >>>>> (it is >>>>> inspired by / modeled after it, Ari told). So it can be much >>>>> related to >>>>> the Scene API work in the Synchronization GE too. As the action >>>>> point we >>>>> agreed that Ari will organize a specific work session on that. >>>>> I was now thinking that it perhaps at least partly leads >>>>> back to the >>>>> question: how do we define (and implement) component types. I.e. >>>>> what >>>>> was mentioned in that entity-system post a few weeks back (with >>>>> links >>>>> to reX IComponent etc.). 
I mean: if functionality such as >>>>> POIs and >>>>> realworld interaction make sense as somehow resulting in >>>>> custom data >>>>> component types, does it mean that a key part of the framework >>>>> is a way >>>>> for those systems to declare their types .. so that it >>>>> integrates nicely >>>>> for the whole we want? I?m not sure, too tired to think it >>>>> through now, >>>>> but anyhow just wanted to mention that this was one topic that >>>>> came up. >>>>> I think Web Components is again something to check - as in XML >>>>> terms reX >>>>> Components are xml(3d) elements .. just ones that are usually in >>>>> a group >>>>> (according to the reX entity <-> xml3d group mapping). And Web >>>>> Components are about defining & implementing new elements >>>>> (as Erno >>>>> pointed out in a different discussion about xml-html authoring >>>>> in the >>>>> session). >>>>> BTW Thanks Kristian for the great comments in that entity system >>>>> thread - was really good to learn about the alternative >>>>> attribute access >>>>> syntax and the validation in XML3D(.js). >>>>> ~Toni >>>>> P.S. for (Christof &) the DFKI folks: I?m sure you >>>>> understand the >>>>> rationale of these Oulu meets -- idea is ofc not to exclude you >>>>> from the >>>>> talks but just makes sense for us to meet live too as we are in >>>>> the same >>>>> city afterall etc -- naturally with the DFKI team you also talk >>>>> there >>>>> locally. Perhaps is a good idea that we make notes so that can >>>>> post e.g. >>>>> here then (I?m not volunteering though! ?) . Also, the now >>>>> agreed >>>>> bi-weekly setup on Tuesdays luckily works so that we can then >>>>> summarize >>>>> fresh in the global Wed meetings and continue the talks etc. >>>>> *From:* Erno Kuusela >>>>> *Sent:* ?Tuesday?, ?October? ?22?, ?2013 ?9?:?57? ?AM >>>>> *To:* Fiware-miwi >>>>> >>>>> >>>>> Kari from CIE offered to host it this time, so see you there at >>>>> 13:00. 
>>>>> >>>>> Erno >>>>> _________________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> >>>>> https://lists.fi-ware.eu/__listinfo/fiware-miwi >>>>> >>>>> >>>>> >>>>> _________________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> >>>>> https://lists.fi-ware.eu/__listinfo/fiware-miwi >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> >>>>> ------------------------------__------------------------------__------------- >>>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>> >>>>> Gesch?ftsf?hrung: >>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>> Dr. Walter Olthoff >>>>> Vorsitzender des Aufsichtsrats: >>>>> Prof. Dr. h.c. Hans A. Aukes >>>>> >>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>> ------------------------------__------------------------------__--------------- >>>>> >>>>> _______________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> >>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>> >>>>> >>>> >>>> >>>> -- >>>> >>>> ------------------------------------------------------------------------- >>>> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>> >>>> Gesch?ftsf?hrung: >>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>> Dr. Walter Olthoff >>>> Vorsitzender des Aufsichtsrats: >>>> Prof. Dr. h.c. Hans A. 
Aukes >>>> >>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>> --------------------------------------------------------------------------- >>>> >> >> >> -- >> >> ------------------------------------------------------------------------- >> Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH >> Trippstadter Strasse 122, D-67663 Kaiserslautern >> >> Gesch?ftsf?hrung: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> Dr. Walter Olthoff >> Vorsitzender des Aufsichtsrats: >> Prof. Dr. h.c. Hans A. Aukes >> >> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >> --------------------------------------------------------------------------- >> > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... 
From toni at playsign.net Fri Oct 25 05:02:40 2013 From: toni at playsign.net (Toni Alatalo) Date: Fri, 25 Oct 2013 06:02:40 +0300 Subject: [Fiware-miwi] declarative layer & scalability work (Re: XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00))) In-Reply-To: <5269206C.1030207@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> Message-ID: On 24 Oct 2013, at 16:28, Philipp Slusallek wrote: Continuing the, so far apparently successful, technique of clarifying a single point at a time, a note about scene declarations and a description of the scalability work: > I am not too happy that we are investing the FI-WARE resources into circumventing the declarative layer completely. We are not doing that. realXtend has had a declarative layer for the past 4-5 years(*) and we totally depend on it -- that's not going away. The situation is totally the opposite: it is assumed to always be there. There's absolutely no work done anywhere to circumvent it somehow. [insert favourite 7th way of saying this]. In my view the case with the current work on scene rendering scalability is this: We already have all the basics implemented and tested in some form - realXtend web client implementations (e.g. "WebTundra" in the form of Chiru-Webclient on github, and other works) have complete entity systems integrated with networking and rendering. XML3d.js is the reference implementation for XML3d parsing, rendering etc. But one of the identified key parts missing was managing larger complex scenes. And that is a pretty hard requirement from the Intelligent City use case, which has been the candidate for the main integrated larger use case.
IIRC scalability was also among the original requirements and proposals. Also, Kristian stated here that he finds it a good area to work on now, so the basic motivation for the work seemed clear. So we tackled this straight on by first testing the behaviour of loading & unloading scene parts and then proceeded to implement a simple but effective scene manager. We're documenting that separately so I won't go into details here. So far it works even surprisingly well, which has been a huge relief during the past couple of days -- not only for us on the platform dev side but also for the modelling and application companies working with the city model here (I demoed the first version in a live meet on Wed). We'll post demo links soon (within days), as soon as we can confirm a bit more that the results seem conclusive. Now in general for the whole 3D UI and nearby GEs I think we have most of the parts (and the rest are coming) and "just" need to integrate.. The point here is that in that work the focus is on the memory management of the rendering and the efficiency & non-blockingness of loading geometry data and textures for display. In my understanding that is orthogonal to scene declaration formats -- or networking for that matter. In any case we get geometry and texture data to load and manage. An analogue (just to illustrate, not a real case): when someone works on improving the CPU process scheduler in the Linux kernel he/she does not touch file system code. That does not mean that the improved scheduler proposes to remove file system support from Linux. Also, it is not investing resources into circumventing (your term) file systems -- even if in the scheduler dev it is practical to just create competing processes from code, and not load applications to execute from the file system. It is absolutely clear to the scheduler developer how filesystems are a part of the big picture, but they are just not relevant to the task at hand. Again I hope this clarifies what's going on.
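The "simple but effective scene manager" is documented separately and its internals aren't given here, but the load/unload-by-distance idea it describes could be sketched like this. Everything below (class name, radii, the per-update load budget) is invented for illustration:

```javascript
// Sketch of distance-based scene-part management for a large city
// scene: each part stays loaded only while the camera is within
// loadRadius. Loading is budgeted per update so a single frame never
// blocks on many parts at once (the "non-blockingness" of the mail).
class SceneManager {
  constructor(loadRadius, maxLoadsPerUpdate = 2) {
    this.loadRadius = loadRadius;
    this.maxLoadsPerUpdate = maxLoadsPerUpdate;
    this.parts = []; // { id, x, z, loaded }
  }
  addPart(id, x, z) {
    this.parts.push({ id, x, z, loaded: false });
  }
  update(camX, camZ) {
    let budget = this.maxLoadsPerUpdate;
    for (const p of this.parts) {
      const near = Math.hypot(p.x - camX, p.z - camZ) <= this.loadRadius;
      if (near && !p.loaded && budget > 0) {
        p.loaded = true; // real impl: kick off geometry+texture loads here
        budget--;
      } else if (!near && p.loaded) {
        p.loaded = false; // real impl: free GPU/CPU memory here
      }
    }
  }
  loadedIds() {
    return this.parts.filter(p => p.loaded).map(p => p.id);
  }
}
```

Calling `update()` once per frame with the camera position keeps memory bounded by what is near the camera, independently of whether the parts were declared in TXML, XML3D, or anything else -- which is the orthogonality point made above.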
Please note that I'm /not/ addressing renderer alternatives and selection here *at all* -- only the relationship of the declarative layer and of the scalability work that you seemed to bring up in the sentence quoted in the beginning. > I suggest that we start to work on the shared communication layer using the KIARA API (part of a FI-WARE GE) and add the code to make the relevant components work in XML3D. Can someone put together a plan for this. We are happy to help where necessary -- but from my point of view we need to do this as part of the Open Call. I'm sorry, I don't get how this is related. Then again, I was not in the KIARA session that one Wed morning -- Erno and Lasse were, so I can talk with them to get an understanding. Now I can't find a thought-path from renderer to networking here yet.. :o Also, I do need to (re-)read all these posts -- so far I have had mostly little timeslots to quickly clarify some basic miscommunications (like the poi data vs. poi data derived visualisations topic in the other thread, and the case with the declarative layer & scalability work in this one). I'm mostly not working at all this Friday though (am with kids) and also in general only work on fi-ware 50% of my work time (though I don't mind when both the share and the total times are more, this is business development!) so it can take a while on my part. > Philipp Cheers, ~Toni (*) "realXtend has had a declarative layer for the past 4-5 years": in the very beginning in 2007-2008 we didn't have it in the same way, due to how the first prototype was based on Opensimulator and the Second Life (tm) viewer. The only way to create a scene was, in technical terms, to send object creation commands over UDP to the server. Or write code to run in the server.
That is how Second Life was originally built: people used the GUI client to build the worlds one object at a time and there was no support for importing or exporting objects or scenes (people did write scripts to generate objects etc.). For us that was a terrible nightmare (ask anyone from Ludocraft who worked on the Beneath the Waves demo scene for reX 0.3 -- I was fortunate enough to not be involved in that period). As a remedy to that insanity I first implemented importing from Ogre's very simple .scene ("dotScene") format in the new Naali viewer (which later became the Tundra codebase). Then we could finally bring full scenes from Blender and Max. We were still using Opensimulator as the server then, and after my client-side prototype Mikko Pallari implemented dotScene import on the server side and we got an ok production solution. Nowadays Opensimulator has OAR files and likewise the community totally depends on those. On the reX side, Jukka Jylänki & Lasse wrote Tundra and we switched to it and the TXML & TBIN support there, which still seem ok as machine-authored formats. We do support Ogre dotScene import in current Tundra too. And even Linden (the Second Life company) has gotten to support COLLADA import, I think mostly meant for single objects but IIRC it works for scenes too. Now XML3d seems like a good next step to get a human-friendly (and perhaps just a more sane way to use xml in general) declarative format. It actually addresses an issue I created in our tracker 2 years ago, "xmlifying txml": https://github.com/realXtend/naali/issues/215 .. the draft in the gist linked from there is a bit more like xml3d than txml. I'm very happy that you've already made xml3d so we didn't have to try to invent it :)
I still have the feeling that we are doing parallel work here that should probably be avoided. >> >> I'm not aware of any overlapping work so far -- then again I'm not fully aware what all is up with xml3d.js. >> >> For the rendering for 3D UI, my conclusion from the discussion on this list was that it is best to use three.js now for the case of big complex fully featured scenes, i.e. typical realXtend worlds (e.g. Ludocraft's Circus demo or the Chesapeake Bay from the LVM project, a creative commons realXtend example scene). And in miwi in particular for the city model / app now. Basically because that's where we already got the basics working in the August-September work (and had earlier in realXtend web client codebases). That is why we implemented the scalability system on top of that too now -- scalability was the only thing missing. >> >> Until yesterday I thought the question was still open regarding XFlow integration. Latest information I got was that there was no hardware acceleration support for XFlow in XML3d.js either so it seemed worth a check whether it's better to implement it for xml3d.js or for three. >> >> Yesterday, however, we learned from Cyberlightning that work on XFlow hardware acceleration was already on-going in xml3d.js (I think mostly by DFKI so far?). And that it was decided that work within fi-ware now is limited to that (and we also understood that the functionality will be quite limited by April, or?). >> >> This obviously affects the overall situation. >> >> At least in an intermediate stage this means that we have two renderers for different purposes: three.js for some apps, without XFlow support, and xml3d.js for others, with XFlow but other things missing. This is certain because that is the case today and probably in the coming weeks at least. >> >> For a good final goal I think we can be clever and make an effective roadmap. I don't know yet what it is, though -- certainly to be discussed. 
The requirements doc -- perhaps by continuing work on it -- hopefully helps. >> >>> Philipp >> >> ~Toni >> >>> >>> Am 22.10.2013 23:03, schrieb toni at playsign.net: >>>> Just a brief note: we had some interesting preliminary discussion >>>> triggered by how the data schema that Ari O. presented for the POI >>>> system seemed at least partly similar to what the Real-Virtual >>>> interaction work had resulted in too -- and in fact about how the >>>> proposed POI schema was basically a version of the entity-component >>>> model which we've already been using for scenes in realXtend (it is >>>> inspired by / modeled after it, Ari told). So it can be much related to >>>> the Scene API work in the Synchronization GE too. As the action point we >>>> agreed that Ari will organize a specific work session on that. >>>> I was now thinking that it perhaps at least partly leads back to the >>>> question: how do we define (and implement) component types, i.e. what >>>> was mentioned in that entity-system post a few weeks back (with links >>>> to reX IComponent etc.). I mean: if functionality such as POIs and >>>> real-world interaction makes sense as somehow resulting in custom data >>>> component types, does it mean that a key part of the framework is a way >>>> for those systems to declare their types .. so that it integrates nicely >>>> into the whole we want? I'm not sure, too tired to think it through now, >>>> but anyhow just wanted to mention that this was one topic that came up. >>>> I think Web Components is again something to check - as in XML terms reX >>>> Components are xml(3d) elements .. just ones that are usually in a group >>>> (according to the reX entity <-> xml3d group mapping). And Web >>>> Components are about defining & implementing new elements (as Erno >>>> pointed out in a different discussion about xml-html authoring in the >>>> session).
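The "systems declare their own component types" idea discussed above could be sketched roughly as follows in Python. This is only an illustration of the entity-component pattern; all names here (ComponentRegistry, Entity, the POI attribute set) are invented for this sketch and are not taken from reX, FiVES or XML3D:

```python
# Minimal entity-component sketch: a subsystem (e.g. a POI or real-world
# interaction service) declares its component type once, and the shared
# framework can then create and inspect instances of it uniformly.

class ComponentRegistry:
    """Maps a component type name to the attribute schema a subsystem declared."""
    def __init__(self):
        self._types = {}

    def declare(self, type_name, attributes):
        # attributes: dict of attribute name -> default value
        self._types[type_name] = dict(attributes)

    def create(self, type_name, **overrides):
        data = dict(self._types[type_name])  # start from declared defaults
        data.update(overrides)
        return {"type": type_name, **data}

class Entity:
    """An entity is just an id plus a bag of components."""
    def __init__(self, entity_id):
        self.id = entity_id
        self.components = []

registry = ComponentRegistry()
# A POI subsystem could declare its data as a custom component type:
registry.declare("POI", {"name": "", "lat": 0.0, "lon": 0.0})

e = Entity("restaurant-42")
e.components.append(registry.create("POI", name="Cafe", lat=65.01, lon=25.47))
```

Since the framework only sees declared type names and attribute bags, any subsystem can plug in new component types without the core knowing about them in advance.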
>>>> BTW Thanks Kristian for the great comments in that entity-system >>>> thread - it was really good to learn about the alternative attribute-access >>>> syntax and the validation in XML3D(.js). >>>> ~Toni >>>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >>>> rationale of these Oulu meets -- the idea is ofc not to exclude you from the >>>> talks, but it just makes sense for us to meet live too as we are in the same >>>> city after all etc -- naturally with the DFKI team you also talk there >>>> locally. Perhaps it is a good idea that we make notes so that we can post them e.g. >>>> here then (I'm not volunteering though! ;) ). Also, the now agreed >>>> bi-weekly setup on Tuesdays luckily works so that we can then summarize >>>> fresh in the global Wed meetings and continue the talks etc. >>>> *From:* Erno Kuusela >>>> *Sent:* Tuesday, October 22, 2013 9:57 AM >>>> *To:* Fiware-miwi >>>> >>>> Kari from CIE offered to host it this time, so see you there at 13:00. >>>> >>>> Erno >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >>> -- >>> ------------------------------------------------------------------------- >>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>> >>> Geschäftsführung: >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>> Dr. Walter Olthoff >>> Vorsitzender des Aufsichtsrats: >>> Prof. Dr. h.c. Hans A. Aukes >>> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>> --------------------------------------------------------------------------- From tomi.sarni at cyberlightning.com Fri Oct 25 07:52:23 2013 From: tomi.sarni at cyberlightning.com (Tomi Sarni) Date: Fri, 25 Oct 2013 08:52:23 +0300 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: <52696BD5.8070007@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> <52696BD5.8070007@dfki.de> Message-ID: *The following is completely on a theoretical level:* To mix things a little further I've been thinking about a possibility to store the visual representation of sensors within the sensors themselves. Many sensor types allow HTTP POST/GET or even PUT/DELETE methods (wrapped in SNMP/CoAP communication protocols for instance), which in theory would allow sensor subscribers to also publish information in sensors (e.g. upload an xml3d model).
This approach could be useful in cases where these sensors would have different purposes of use. But the sensor may have very little space to use for the model, perhaps 8-18 KB. Also, the web service can attach the models to IDs through use of a database. This is really just a pointer; perhaps there would be use-cases where the sensor visualization could be stored within the sensor itself -- I think specifically some AR solutions could benefit from this. But do not let this mix up things; this perhaps reinforces the fact that there need to be overlaying middleware services that attach visual representations based on their own needs. One service could use a different 3D representation for a temperature sensor than another one. On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek < Philipp.Slusallek at dfki.de> wrote: > Hi, > > OK, now I get it. This does make sense -- at least in a local scenario, > where the POI data (in this example) needs to be stored somewhere anyway, > and storing it in a component and then generating the appropriate visual > component does make sense. Using web components or a similar mechanism we > could actually do the same via the DOM (as discussed for the general ECA > sync before). > > But even then you might actually not want to store all the POI data but > only the part that really matters to the application (there may be much more > data -- maybe not for POIs but potentially for other things). > > Also, in a distributed scenario I am not so sure. In that case you might > want to do that mapping on the server and only sync the resulting data, > maybe with a reference back so you can still interact with the original data > through a service call. That is the main reason why I in general think of > POI data and POI representation as separate entities. > > Regarding terminology, I think it does make sense to differentiate between > the 3D scene and the application state (that is not directly influencing > the 3D rendering and interaction).
While you store them within the same > data entity (but in different components), they still refer to quite > different things and are operated on by different parts of your program > (e.g. the renderer only ever touches the "scene" data). We do the same > within the XML3D core, where we attach renderer-specific data to DOM nodes, > and I believe three.js also does something similar within its data > structures. In the end, you have to store these things somewhere and there > are only so many ways to implement it. The differences are not really that > big. > > > Best, > > Philipp > > Am 24.10.2013 19:24, schrieb Toni Alatalo: > >> On 24 Oct 2013, at 19:24, Philipp Slusallek >> wrote: >> >>> Good discussion! >>> >> >> I find so too -- thanks for the questions and comments and all! Now >> briefly about just one point: >> >> Am 24.10.2013 17:37, schrieb Toni Alatalo: >>> >>>> integrates to the scene system too - for example if a scene server >>>> queries POI services, does it then only use the data to manipulate >>>> the scene using other non-POI components, or does it often make sense >>>> also to include POI components in the scene so that the clients get >>>> it too automatically with the scene sync and can for example provide >>>> POI-specific GUI tools. Ofc clients can query POI services directly >>>> too, but this server-centric setup is also one scenario and there the >>>> scene integration might make sense. >>>> >>> But I would say that there is a clear distinction between the POI data >>> (which you query from some service) and the visualization or >>> representation of the POI data. Maybe you are more talking about the >>> latter here. However, there really is an application-dependent mapping >>> from the POI data to its representation. Each application may choose >>> to present the same POI data in a very different way, and it is only this >>> resulting representation that becomes part of the scene.
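The application-dependent mapping described above, from the same POI data to different per-application representations, could be sketched like this in Python. All names and the dict-based component shapes are invented for illustration; they are not an actual FiVES, reX or XML3D API:

```python
# Each application registers its own mapping from raw POI data to a visual
# component; the same POI data can therefore yield different scene content,
# and only the mapped result becomes part of the rendered scene.

def restaurant_as_billboard(poi):
    # One application shows restaurants as labelled billboards...
    return {"type": "Billboard", "text": poi["name"],
            "position": (poi["lat"], poi["lon"])}

def restaurant_as_mesh(poi):
    # ...another places a 3D marker mesh at the same location.
    return {"type": "Mesh", "ref": "marker.xml",
            "position": (poi["lat"], poi["lon"])}

def map_pois_to_scene(pois, mapping):
    """Apply an application-chosen mapping to raw POI data."""
    return [mapping(p) for p in pois]

pois = [{"name": "Cafe", "lat": 65.01, "lon": 25.47}]
scene_a = map_pois_to_scene(pois, restaurant_as_billboard)
scene_b = map_pois_to_scene(pois, restaurant_as_mesh)
```

The raw POI dicts never enter either scene directly; each application's mapping function is the single place where data becomes representation.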
>>> >> >> No I was not talking about visualization or representations here but the >> POI data. >> >> non-POI in the above tried to refer to the whole which covers >> visualisations etc :) >> >> Your last sentence may help to explain the confusion: in these posts >> I've been using the reX entity system terminology only -- hoping that it >> is clear to discuss that way and not mix terms (like I've tried to do in >> some other threads). >> >> There "scene" does not refer to a visual / graphical or any other type >> of scene. It does not refer to e.g. something like what xml3d.js and >> three.js, or Ogre, have as their Scene objects. >> >> It simply means the collection of all entities. There it is perfectly >> valid to have any kind of data which does not end up in e.g. the visual scene >> -- many components are like that. >> >> So in the above, "only use the data to manipulate the scene using other >> non-POI components" was referring to, for example, creation of Mesh >> components if some POI is to be visualised that way. The mapping that >> you were discussing. >> >> But my point was not about that but about the POI data itself -- and the >> example about some end-user GUI with a widget that manipulates it. So it >> then gets automatically synchronised along with all the other data in >> the application in a collaborative setting etc. >> >> Stepping out of the previous terminology, we could perhaps translate: >> "scene" -> "application state" and "scene server" -> "synchronization >> server". >> >> I hope this clarifies something -- my apologies if not.. >> >> Cheers, >> ~Toni >> >> P.S. I sent the previous post from a foreign device and accidentally >> with my gmail address as sender, so it didn't make it to the list -- so >> thank you for quoting it in full so I don't think we need to repost that >> :) >> >> This is essentially the Mapping stage of the well-known Visualization >>> pipeline >>> (http://www.infovis-wiki.net/index.php/Visualization_Pipeline), >>> except >>> that here we also map interaction aspects to an abstract scene >>> description (XML3D) first, which then performs the rendering and >>> interaction. So you can think of this as an additional "Scene" stage >>> between "Mapping" and "Rendering". >>> >>> I think this is a different topic, but also with real-virtual >>>> interaction, for example how to facilitate nice simple authoring of >>>> e.g. the real-virtual object mappings seems a fruitful enough angle >>>> to think about a bit, perhaps as a case to help in understanding the entity >>>> system & the different servers etc. For example if there's a >>>> component type 'real world link', the Interface Designer GUI shows it >>>> automatically in the list of components, ppl can just add them to >>>> their scenes and somehow then the system just works.. >>>> >>> >>> I am not sure what you are getting at. But it would be great if the >>> Interface Designer would allow choosing such POI mappings from a >>> predefined catalog. It seems that Xflow can be used nicely for >>> generating the mapped scene elements from some input data, e.g. using >>> the same approach we use to provide basic primitives like cubes or >>> spheres in XML3D. Here they are not fixed, built-in tags as in X3D but >>> can actually be added by the developer as best fits. >>> >>> For generating more complex subgraphs we may have to extend the >>> current Xflow implementation. But it's at least a great starting point >>> to experiment with. Experiments and feedback would be very welcome >>> here. >>> >>> I don't think these discussions are now hurt by us (currently) having >>>> alternative renderers - the entity system, formats, sync and the >>>> overall architecture are the same anyway.
>>>> >>> >>> Well, some things only work in one and others only in the other >>> branch. So the above mechanism could not be used to visualize POIs in >>> the three.js branch, but we do not have all the features to visualize >>> Oulu (or whatever city) in the XML3D.js branch. This definitely IS >>> greatly limiting how we can combine the GEs into more complex >>> applications -- the ultimate goal of the orthogonal design of this >>> chapter. >>> >>> And it does not even work within the same chapter. It will be hard to >>> explain to Juanjo and others from FI-WARE (or the commission for that >>> matter). >>> >>> BTW, I just learned today that there is a smaller FI-WARE review >>> coming up soon. Let's see if we already have to present things there. >>> So far they have not explicitly asked us. >>> >>> >>> Best, >>> >>> Philipp >>> >>> -Toni >>>> >>>> >>>> From an XML3D POV things could actually be quite "easy". It should >>>>> be rather simple to directly interface to the IoT GEs of FI-WARE >>>>> through REST via a new Xflow element. This would then make the data >>>>> available through elements. Then you can use all the features >>>>> of Xflow to manipulate the scene based on the data. For example, we >>>>> are discussing building a set of visualization nodes that implement >>>>> common visualization metaphors, such as scatter plots, animations, >>>>> you name it. A new member of the lab starting soon wants to look >>>>> into this area. >>>>> >>>>> For acting on objects we have always used Web services attached to >>>>> the XML3D objects via DOM events. Eventually, I believe we want a >>>>> higher-level input handling and processing framework, but no one >>>>> knows so far how this should look (we have some ideas but they >>>>> are not well baked; any input is highly welcome here). This might or >>>>> might not reuse some of the Xflow mechanisms. >>>>> >>>>> But how to implement Real-Virtual Interaction is indeed an interesting >>>>> discussion.
Getting us all on the same page and sharing ideas and >>>>> implementations is very helpful. Doing this on the same SW platform >>>>> (without the fork that we currently have) would facilitate a >>>>> powerful implementation even more. >>>>> >>>>> >>>>> Thanks >>>>> >>>>> Philipp >>>>> >>>>> Am 23.10.2013 08:02, schrieb Tomi Sarni: >>>>>> >>>>>> ->Philipp >>>>>> /I did not get the idea why POIs are similar to ECA. At a very high >>>>>> level I see it, but I am not sure what it buys us. Can someone sketch >>>>>> that picture in some more detail?/ >>>>>> >>>>>> Well, I suppose it becomes relevant at the point when we are combining our >>>>>> GEs together. If the model can be applied at the level of the scene, then >>>>>> down to a POI in a scene, and further down at the sensor level, things can be >>>>>> more easily visualized. Not just in terms of painting 3D models but in >>>>>> terms of handling big data as well, more specifically handling >>>>>> relationships/inheritance. It also makes it easier >>>>>> to design a RESTful API, as we have a common structure to follow, >>>>>> and it also provides more opportunities for 3rd-party developers to make >>>>>> use of the data for their own purposes. >>>>>> >>>>>> For instance: >>>>>> >>>>>> ->Toni >>>>>> >>>>>> From the point of view of sensors, the entity-component model becomes >>>>>> device-sensors/actuators. A device may have a unique identifier and IP >>>>>> by which to access it, but it may also contain several actuators and >>>>>> sensors >>>>>> that are components of that device entity. Sensors/actuators >>>>>> themselves >>>>>> are not aware of whom they are interesting to. One client may use the >>>>>> sensor information differently from another client.
A sensor/actuator >>>>>> service allows any other service to query, using a request/response method, either >>>>>> by geo-coordinates (circle, square or complex-shape queries) or perhaps >>>>>> by type+maxresults, and the service will return entities and their >>>>>> components, >>>>>> from which the requester can form logical groups (arrays of entity UUIDs) >>>>>> and query more detailed information based on that logical group. >>>>>> >>>>>> I guess there needs to be similar thinking done on the POI level. I guess a >>>>>> POI does not know which scene it belongs to. It is up to the scene >>>>>> server to >>>>>> form a logical group of POIs (e.g. restaurants of the Oulu 3D city >>>>>> model). Then >>>>>> again the problem is that the scene needs to wait for the POI to query for >>>>>> sensors and form its logical groups before it can pass information to the >>>>>> scene. This can lead to long wait times. But this sequencing problem is >>>>>> also something >>>>>> that could be thought about. Anyway, this is a common problem with >>>>>> everything >>>>>> on the web at the moment, in my opinion. Services become intertwined. >>>>>> When a >>>>>> client loads a web page there can be queries to 20 different services >>>>>> for advertisement and other stuff. The web page handles it by painting >>>>>> stuff >>>>>> to the client on a receive basis. I think this could be applied in the Scene >>>>>> as well. >>>>>> >>>>>> >>>>>> On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek >>>>>> wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> First of all, it's certainly a good thing to also meet locally. I >>>>>> was >>>>>> just a bit confused whether that meeting somehow would involve us >>>>>> as >>>>>> well. Summarizing the results briefly for the others would >>>>>> definitely be interesting. >>>>>> >>>>>> I did not get the idea why POIs are similar to ECA. At a very high >>>>>> level I see it, but I am not sure what it buys us.
Can someone >>>>>> sketch that picture in some more detail? >>>>>> >>>>>> BTW, what is the status with the Rendering discussion (Three.js vs. >>>>>> xml3d.js)? I still have the feeling that we are doing parallel work >>>>>> here that should probably be avoided. >>>>>> >>>>>> BTW, as part of our shading work (which is shaping up nicely) Felix >>>>>> has been looking lately at a way to describe rendering stages >>>>>> (passes) essentially through Xflow. It is still very experimental >>>>>> but he is using it to implement shadow maps right now. >>>>>> >>>>>> @Felix: Once this has converged into a bit more stable idea, it >>>>>> would be good to post it here to get feedback. The way we >>>>>> discussed it, this approach could form a nice basis for a modular >>>>>> design of advanced rasterization techniques (reflection maps, adv. >>>>>> face rendering, SSAO, lens flare, tone mapping, etc.), and (later) >>>>>> maybe also describe global illumination settings (similar to our >>>>>> work on LightingNetworks some years ago). >>>>>> >>>>>> >>>>>> Best, >>>>>> >>>>>> Philipp >>>>>> >>>>>> Am 22.10.2013 23:03, schrieb toni at playsign.net: >>>>>> >>>>>> Just a brief note: we had some interesting preliminary >>>>>> discussion >>>>>> triggered by how the data schema that Ari O. presented for >>>>>> the POI >>>>>> system seemed at least partly similar to what the Real-Virtual >>>>>> interaction work had resulted in too -- and in fact about >>>>>> how the >>>>>> proposed POI schema was basically a version of the >>>>>> entity-component >>>>>> model which we've already been using for scenes in realXtend >>>>>> (it is >>>>>> inspired by / modeled after it, Ari told). So it can be much >>>>>> related to >>>>>> the Scene API work in the Synchronization GE too. As the action >>>>>> point we >>>>>> agreed that Ari will organize a specific work session on that.
>>>>>> I was now thinking that it perhaps at least partly leads >>>>>> back to the >>>>>> question: how do we define (and implement) component types, i.e. what >>>>>> was mentioned in that entity-system post a few weeks back (with links >>>>>> to reX IComponent etc.). I mean: if functionality such as POIs and >>>>>> real-world interaction makes sense as somehow resulting in custom data >>>>>> component types, does it mean that a key part of the framework is a way >>>>>> for those systems to declare their types .. so that it integrates nicely >>>>>> into the whole we want? I'm not sure, too tired to think it through now, >>>>>> but anyhow just wanted to mention that this was one topic that came up. >>>>>> I think Web Components is again something to check - as in XML terms reX >>>>>> Components are xml(3d) elements .. just ones that are usually in a group >>>>>> (according to the reX entity <-> xml3d group mapping). And Web >>>>>> Components are about defining & implementing new elements (as Erno >>>>>> pointed out in a different discussion about xml-html authoring in the >>>>>> session). >>>>>> BTW Thanks Kristian for the great comments in that entity-system >>>>>> thread - it was really good to learn about the alternative attribute-access >>>>>> syntax and the validation in XML3D(.js). >>>>>> ~Toni >>>>>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >>>>>> rationale of these Oulu meets -- the idea is ofc not to exclude you >>>>>> from the talks, but it just makes sense for us to meet live too as we are in >>>>>> the same city after all etc -- naturally with the DFKI team you also talk >>>>>> there locally. Perhaps it is a good idea that we make notes so that we can >>>>>> post them e.g. here then (I'm not volunteering though! ;) ).
Also, the now >>>>>> agreed >>>>>> bi-weekly setup on Tuesdays luckily works so that we can then >>>>>> summarize >>>>>> fresh in the global Wed meetings and continue the talks etc. >>>>>> *From:* Erno Kuusela >>>>>> *Sent:* Tuesday, October 22, 2013 9:57 AM >>>>>> *To:* Fiware-miwi >>>>>> >>>>>> Kari from CIE offered to host it this time, so see you there at >>>>>> 13:00. >>>>>> >>>>>> Erno >>>>>> _______________________________________________ >>>>>> Fiware-miwi mailing list >>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>> >>>>>> -- >>>>>> ------------------------------------------------------------------------- >>>>>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>>> >>>>>> Geschäftsführung: >>>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>>> Dr. Walter Olthoff >>>>>> Vorsitzender des Aufsichtsrats: >>>>>> Prof. Dr. h.c. Hans A. Aukes >>>>>> >>>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>>> --------------------------------------------------------------------------- > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Philipp.Slusallek at dfki.de Fri Oct 25 08:29:05 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Fri, 25 Oct 2013 08:29:05 +0200 Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00) In-Reply-To: References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> <52696BD5.8070007@dfki.de> Message-ID: <526A0FB1.9000709@dfki.de> Hi Tomi, Yes, this is definitely an interesting option, and when sensors offer RESTful interfaces it should be almost trivial to add (once a suitable and standardized way of how to find that data is specified). At least it would provide a kind of default visualization in case no other is available. It becomes more of an issue when we talk about interactivity, when the visual representation needs to react to user input in a way that is consistent with the application and calls functionality in the application. In other words, you have to do a mapping from the sensor to the application at some point along the pipeline (and back for actions to be performed by an actuator).
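The sensor-to-application mapping with a default fallback could be prototyped roughly like this in Python. Everything here is an assumption for illustration: the dict-based sensor description, the `semantic_type` tag and `default_model_url` field are invented names, not an existing sensor-description standard:

```python
# Sketch: resolve a sensor's visual representation. An application-side
# mapping (keyed on a semantic type tag) wins when present; otherwise fall
# back to a default model URL carried in the sensor's own description.

def resolve_representation(sensor_description, app_mappings):
    tag = sensor_description.get("semantic_type")       # e.g. "temperature"
    if tag in app_mappings:
        return app_mappings[tag]                        # app-chosen mapping
    return sensor_description.get("default_model_url")  # generic fallback

desc = {
    "semantic_type": "temperature",
    "default_model_url": "http://example.org/models/thermometer.xml",
}

# An application that knows temperature sensors overrides the default:
chosen = resolve_representation(desc, {"temperature": "app://fancy-thermometer"})
# An application with no mapping still gets a usable default visualization:
fallback = resolve_representation(desc, {})
```

This mirrors the hybrid approach: the sensor description only carries enough to produce a default, and the application can always inject its own mapping.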
Either we specify the sensor type through some semantic means (a simple tag in the simplest case, a full RDFa graph in the best case) and let the application choose how to represent it, or we need to find a way to map generic behavior of a default object to application functionality. The first seems much easier to me, as application functionality is likely to vary much more than sensor functionality. And semantic sensor descriptions have been worked on for a long time and are available on the market. Of course, there are hybrid methods as well: a simple one would be to include a URI/URL to a default model in the semantic sensor description, which then gets loaded either from the sensor through REST (given some namespace there) or via the Web (again using some namespace or search strategy). Then the application can always inject its own mapping to what it thinks is the best mapping. Best, Philipp Am 25.10.2013 07:52, schrieb Tomi Sarni: > *The following is completely on a theoretical level:* > To mix things a little further I've been thinking about a possibility to > store the visual representation of sensors within the sensors themselves. > Many sensor types allow HTTP POST/GET or even PUT/DELETE methods > (wrapped in SNMP/CoAP communication protocols for instance), which in > theory would allow sensor subscribers to also publish information in > sensors (e.g. upload an xml3d model). This approach could be useful in > cases where these sensors would have different purposes of use. But the > sensor may have very little space to use for the model, perhaps 8-18 KB. > Also, the web service can attach the models to IDs through use of a > database. This is really just a pointer; perhaps there would be use-cases > where the sensor visualization could be stored within the sensor itself -- > I think specifically some AR solutions could benefit from this.
But do > not let this mix up things; this perhaps reinforces the fact that there > need to be overlaying middleware services that attach visual > representations based on their own needs. One service could use a different > 3D representation for a temperature sensor than another one. > > > On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek > wrote: > > Hi, > > OK, now I get it. This does make sense -- at least in a local > scenario, where the POI data (in this example) needs to be stored > somewhere anyway, and storing it in a component and then generating > the appropriate visual component does make sense. Using web > components or a similar mechanism we could actually do the same via > the DOM (as discussed for the general ECA sync before). > > But even then you might actually not want to store all the POI data > but only the part that really matters to the application (there may > be much more data -- maybe not for POIs but potentially for other > things). > > Also, in a distributed scenario I am not so sure. In that case you > might want to do that mapping on the server and only sync the > resulting data, maybe with a reference back so you can still interact > with the original data through a service call. That is the main > reason why I in general think of POI data and POI representation as > separate entities. > > Regarding terminology, I think it does make sense to differentiate > between the 3D scene and the application state (that is not directly > influencing the 3D rendering and interaction). While you store them > within the same data entity (but in different components), they > still refer to quite different things and are operated on by > different parts of your program (e.g. the renderer only ever touches > the "scene" data). We do the same within the XML3D core, where we > attach renderer-specific data to DOM nodes, and I believe three.js > also does something similar within its data structures.
> At the end, you have to store these things somewhere and there are only
> so many ways to implement it. The differences are not really that big.
>
> Best,
>
> Philipp
>
> Am 24.10.2013 19:24, schrieb Toni Alatalo:
>
> On 24 Oct 2013, at 19:24, Philipp Slusallek wrote:
>
> Good discussion!
>
> I find so too, thanks for the questions and comments and all! Now
> briefly about just one point:
>
> Am 24.10.2013 17:37, schrieb Toni Alatalo:
>
> integrates to the scene system too - for example if a scene server
> queries POI services, does it then only use the data to manipulate
> the scene using other non-POI components, or does it often make sense
> also to include POI components in the scene so that the clients get
> it too automatically with the scene sync and can for example provide
> POI-specific GUI tools. Ofc clients can query POI services directly
> too but this server-centric setup is also one scenario and there the
> scene integration might make sense.
>
> But I would say that there is a clear distinction between the POI data
> (which you query from some service) and the visualization or
> representation of the POI data. Maybe you are more talking about the
> latter here. However, there really is an application-dependent mapping
> from the POI data to its representation. Each application may choose
> to present the same POI data in a very different way and it's only this
> resulting representation that becomes part of the scene.
>
> No, I was not talking about visualization or representations here but
> the POI data.
>
> non-POI in the above tried to refer to the whole which covers
> visualisations etc :)
>
> Your last sentence may help to understand the confusion: in these posts
> I've been using the reX entity system terminology only, hoping that it
> is clear to discuss that way and not mix terms (like I've tried to do
> in some other threads).
>
> There "scene"
> does not refer to a visual / graphical or any other type of scene. It
> does not refer to e.g. something like what xml3d.js and three.js, or
> Ogre, have as their Scene objects.
>
> It simply means the collection of all entities. There it is perfectly
> valid to have any kind of data which does not end up in e.g. the visual
> scene; many components are like that.
>
> So in the above "only use the data to manipulate the scene using other
> non-POI components" was referring to for example the creation of Mesh
> components if some POI is to be visualised that way. The mapping that
> you were discussing.
>
> But my point was not about that but about the POI data itself, and the
> example about some end-user GUI with a widget that manipulates it. So it
> then gets automatically synchronised along with all the other data in
> the application in a collaborative setting etc.
>
> Stepping out of the previous terminology, we could perhaps translate:
> "scene" -> "application state" and "scene server" -> "synchronization
> server".
>
> I hope this clarifies something; my apologies if not.
>
> Cheers,
> ~Toni
>
> P.S. I sent the previous post from a foreign device and accidentally
> with my gmail address as sender, so it didn't make it to the list; so
> thank you for quoting it in full, I don't think we need to repost
> that :)
>
> This is essentially the Mapping stage of the well-known Visualization
> pipeline (http://www.infovis-wiki.net/index.php/Visualization_Pipeline),
> except that here we also map interaction aspects to an abstract scene
> description (XML3D) first, which then performs the rendering and
> interaction. So you can think of this as an additional "Scene" stage
> between "Mapping" and "Rendering".
>
> I think this is a different topic, but also with real-virtual
> interaction, for example how to facilitate nice simple authoring of
> the e.g.
> real-virtual object mappings seems a fruitful enough angle to think a
> bit, perhaps as a case to help in understanding the entity system & the
> different servers etc. For example if there's a component type 'real
> world link', the Interface Designer GUI shows it automatically in the
> list of components, ppl can just add them to their scenes and somehow
> then the system just works..
>
> I am not sure what you are getting at. But it would be great if the
> Interface Designer would allow choosing such POI mappings from a
> predefined catalog. It seems that Xflow can be used nicely for
> generating the mapped scene elements from some input data, e.g. using
> the same approach we use to provide basic primitives like cubes or
> spheres in XML3D. Here they are not fixed, built-in tags as in X3D but
> can actually be added by the developer as it best fits.
>
> For generating more complex subgraphs we may have to extend the
> current Xflow implementation. But it's at least a great starting point
> to experiment with it. Experiments and feedback would be very welcome
> here.
>
> I don't think these discussions are now hurt by us (currently) having
> alternative renderers - the entity system, formats, sync and the
> overall architecture is the same anyway.
>
> Well, some things only work in one and others only in the other
> branch. So the above mechanism could not be used to visualize POIs in
> the three.js branch, but we do not have all the features to visualize
> Oulu (or whatever city) in the XML3D.js branch. This definitely IS
> greatly limiting how we can combine the GEs into more complex
> applications -- the ultimate goal of the orthogonal design of this
> chapter.
>
> And it does not even work within the same chapter. It will be hard to
> explain to Juanjo and others from FI-WARE (or the commission for that
> matter).
>
> BTW, I just learned today that there is a FI-WARE smaller review
> coming up soon.
> Let's see if we already have to present things there. So far they have
> not explicitly asked us.
>
> Best,
>
> Philipp
>
> -Toni
>
> From an XML3D POV things could actually be quite "easy". It should
> be rather simple to directly interface to the IoT GEs of FI-WARE
> through REST via a new Xflow element. This would then make the data
> available through elements. Then you can use all the features of Xflow
> to manipulate the scene based on the data. For example, we are
> discussing building a set of visualization nodes that implement common
> visualization metaphors, such as scatter plots, animations, you name
> it. A new member of the lab starting soon wants to look into this area.
>
> For acting on objects we have always used Web services attached to
> the XML3D objects via DOM events. Eventually, I believe we want a
> higher-level input handling and processing framework, but no one knows
> so far how this should look (we have some ideas but they are not well
> baked; any input is highly welcome here). This might or might not
> reuse some of the Xflow mechanisms.
>
> But how to implement Real-Virtual Interaction is indeed an interesting
> discussion. Getting us all on the same page and sharing ideas and
> implementations is very helpful. Doing this on the same SW platform
> (without the fork that we currently have) would facilitate a powerful
> implementation even more.
>
> Thanks
>
> Philipp
>
> Am 23.10.2013 08:02, schrieb Tomi Sarni:
>
> ->Philipp
> /I did not get the idea why POIs are similar to ECA. At a very high
> level I see it, but I am not sure what it buys us. Can someone sketch
> that picture in some more detail?/
>
> Well I suppose it becomes relevant at the point when we are combining
> our GEs together. If the model can be applied at the level of the
> scene, then down to a POI in a scene, and further down at the sensor
> level, things can be more easily visualized.
> Not just in terms of painting 3D models but in terms of handling big
> data as well, more specifically handling relationships/inheritance. It
> also makes it easier to design a RESTful API as we have a common
> structure to follow, and it also provides more opportunities for 3rd
> party developers to make use of the data for their own purposes.
>
> For instance
>
> ->Toni
>
> From the point of view of sensors, the entity-component becomes
> device-sensors/actuators. A device may have a unique identifier and IP
> by which to access it, but it may also contain several actuators and
> sensors that are components of that device entity. Sensors/actuators
> themselves are not aware of to whom they are interesting. One client
> may use the sensor information differently to another client. The
> sensor/actuator service allows any other service to query using a
> request/response method, either by geo-coordinates (circle, square or
> complex shape queries) or perhaps through type+maxresults, and the
> service will return entities and their components, from which the
> requester can form logical groups (arrays of entity uuids) and query
> more detailed information based on that logical group.
>
> I guess there needs to be similar thinking done on the POI level. I
> guess a POI does not know which scene it belongs to. It is up to the
> scene server to form a logical group of POIs (e.g. restaurants of the
> Oulu 3D city model). Then again the problem is that the scene needs to
> wait for the POI to query for sensors and form its logical groups
> before it can pass information to the scene. This can lead to long
> wait times. But this sequencing problem is also something that could
> be thought through. Anyway, this is a common problem with everything
> on the web at the moment in my opinion. Services become intertwined.
> When a client loads a web page there can be queries to 20 different
> services for advertisement and other stuff.
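The device query described above (a device entity with a unique id and sensor/actuator components, queried by a circular geo-area, with the requester forming a logical group of entity uuids) can be sketched as follows. All structures and the naive lat/lon distance are illustrative assumptions, not an actual API:

```python
# Sketch of the geo-query: devices are entities with uuid, position and
# sensor/actuator components; a circle query returns matching entities,
# from which the requester forms a logical group of uuids.
import math

devices = [
    {"uuid": "dev-1", "pos": (65.01, 25.47), "components": ["temperature"]},
    {"uuid": "dev-2", "pos": (65.50, 25.90), "components": ["door-actuator"]},
]

def circle_query(center, radius_deg, max_results=10):
    """Return devices within a circular area (naive flat lat/lon distance,
    good enough for a sketch; a real service would use proper geodesics)."""
    cx, cy = center
    hits = [d for d in devices
            if math.hypot(d["pos"][0] - cx, d["pos"][1] - cy) <= radius_deg]
    return hits[:max_results]

# The requester forms a logical group (array of entity uuids) from the hits
# and can then query more detailed information for just that group.
group = [d["uuid"] for d in circle_query((65.0, 25.5), 0.1)]
assert group == ["dev-1"]
```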
> Web pages handle it by painting stuff to the client on a receive basis.
> I think this could be applied in the Scene as well.
>
> On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek wrote:
>
> Hi,
>
> First of all, it's certainly a good thing to also meet locally. I was
> just a bit confused whether that meeting somehow would involve us as
> well. Summarizing the results briefly for the others would definitely
> be interesting.
>
> I did not get the idea why POIs are similar to ECA. At a very high
> level I see it, but I am not sure what it buys us. Can someone sketch
> that picture in some more detail?
>
> BTW, what is the status with the Rendering discussion (Three.js vs.
> xml3d.js)? I still have the feeling that we are doing parallel work
> here that should probably be avoided.
>
> BTW, as part of our shading work (which is shaping up nicely) Felix
> has been looking lately at a way to describe rendering stages (passes)
> essentially through Xflow. It is still very experimental but he is
> using it to implement shadow maps right now.
>
> @Felix: Once this has converged into a bit more stable idea, it would
> be good to post this here to get feedback. The way we discussed it,
> this approach could form a nice basis for a modular design of advanced
> rasterization techniques (reflection maps, adv. face rendering, SSAO,
> lens flare, tone mapping, etc.), and (later) maybe also describe
> global illumination settings (similar to our work on LightingNetworks
> some years ago).
>
> Best,
>
> Philipp
>
> Am 22.10.2013 23:03, schrieb toni at playsign.net:
>
> Just a brief note: we had some interesting preliminary discussion
> triggered by how the data schema that Ari O.
> presented for the POI system seemed at least partly similar to what
> the Real-Virtual interaction work had resulted in too -- and in fact
> about how the proposed POI schema was basically a version of the
> entity-component model which we've already been using for scenes in
> realXtend (it is inspired by / modeled after it, Ari told). So it can
> be much related to the Scene API work in the Synchronization GE too.
> As the action point we agreed that Ari will organize a specific work
> session on that.
>
> I was now thinking that it perhaps at least partly leads back to the
> question: how do we define (and implement) component types. I.e. what
> was mentioned in that entity-system post a few weeks back (with links
> to reX IComponent etc.). I mean: if functionality such as POIs and
> real-world interaction makes sense as somehow resulting in custom data
> component types, does it mean that a key part of the framework is a
> way for those systems to declare their types .. so that it integrates
> nicely for the whole we want? I'm not sure, too tired to think it
> through now, but anyhow just wanted to mention that this was one topic
> that came up.
>
> I think Web Components is again something to check - as in XML terms
> reX Components are xml(3d) elements .. just ones that are usually in
> a group (according to the reX entity <-> xml3d group mapping). And Web
> Components are about defining & implementing new elements (as Erno
> pointed out in a different discussion about xml-html authoring in the
> session).
>
> BTW thanks Kristian for the great comments in that entity system
> thread - was really good to learn about the alternative attribute
> access syntax and the validation in XML3D(.js).
>
> ~Toni
>
> P.S.
> for (Christof &) the DFKI folks: I'm sure you understand the rationale
> of these Oulu meets -- the idea is ofc not to exclude you from the
> talks, but it just makes sense for us to meet live too as we are in
> the same city after all etc -- naturally with the DFKI team you also
> talk there locally. Perhaps it is a good idea that we make notes so
> that we can post e.g. here then (I'm not volunteering though!). Also,
> the now agreed bi-weekly setup on Tuesdays luckily works so that we
> can then summarize fresh in the global Wed meetings and continue the
> talks etc.
>
> *From:* Erno Kuusela
> *Sent:* Tuesday, October 22, 2013 9:57 AM
> *To:* Fiware-miwi
>
> Kari from CIE offered to host it this time, so see you there at 13:00.
>
> Erno
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi
>
> --
>
> -------------------------------------------------------------------------
> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
> Trippstadter Strasse 122, D-67663 Kaiserslautern
>
> Geschäftsführung:
> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
> Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats:
> Prof. Dr. h.c. Hans A.
> Aukes
>
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
> ---------------------------------------------------------------------------
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi
>
> --
>
> -------------------------------------------------------------------------
> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
> Trippstadter Strasse 122, D-67663 Kaiserslautern
>
> Geschäftsführung:
> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
> Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats:
> Prof. Dr. h.c. Hans A.
> Aukes
>
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
> ---------------------------------------------------------------------------
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi
>

--
-------------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern

Geschäftsführung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes

Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
---------------------------------------------------------------------------
-------------- next part --------------
A non-text attachment was scrubbed...
Name: slusallek.vcf
Type: text/x-vcard
Size: 456 bytes
Desc: not available
URL: 

From tomi.sarni at cyberlightning.com Fri Oct 25 09:05:12 2013
From: tomi.sarni at cyberlightning.com (Tomi Sarni)
Date: Fri, 25 Oct 2013 10:05:12 +0300
Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00)
In-Reply-To: <526A0FB1.9000709@dfki.de>
References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> <52696BD5.8070007@dfki.de> <526A0FB1.9000709@dfki.de>
Message-ID: 

*It becomes more of an issue when we talk about interactivity, when the
visual representation needs to react to user input in a way that is
consistent with the application and calls functionality in the
application.
In other words, you have to do a mapping from the sensor to the
application at some point along the pipeline (and back for actions to be
performed by an actuator).*

Currently when a client polls a device (containing sensors and/or
actuators) it will receive all interaction options that are available for
the particular sensor or actuator. These options can then be accessed by
an HTTP POST method from the service. So there is the logical mapping. I
can see your point though; in a way it would seem logical to have the
XML3D model contain states (e.g. button-up and button-down 3D model
states), and I have no idea whether this is supported by XML3D, as I have
been busy on the server/sensor side. This way, when a sensor is accessed
by an HTTP POST call to change state to either on or off for instance,
the XML3D model could contain transition logic to change appearance from
one state to another. Alternatively there can be two models for two
states. When the actuator is queried it will return the model that
corresponds to its current state.

On Fri, Oct 25, 2013 at 9:29 AM, Philipp Slusallek <
Philipp.Slusallek at dfki.de> wrote:

> Hi Tomi,
>
> Yes, this is definitely an interesting option, and when sensors offer
> REST-ful interfaces it should be almost trivial to add (once a suitable
> and standardized way of how to find that data is specified). At least
> it would provide a kind of default visualization in case no other is
> available.
>
> It becomes more of an issue when we talk about interactivity, when the
> visual representation needs to react to user input in a way that is
> consistent with the application and calls functionality in the
> application. In other words, you have to do a mapping from the sensor
> to the application at some point along the pipeline (and back for
> actions to be performed by an actuator).
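The two-models-for-two-states idea described above (POST changes the state, a query returns the model matching the current state) can be sketched as follows. Class and model names are hypothetical, purely for illustration:

```python
# Sketch of a stateful actuator: each state maps to its own model, an
# HTTP POST handler would change the state, and a query returns the model
# for the current state. Names and the two-model approach are illustrative.

class Actuator:
    def __init__(self, models, initial="off"):
        self.models = models          # state -> model URL (e.g. up/down)
        self.state = initial

    def post(self, new_state):        # what a POST handler would do
        if new_state not in self.models:
            raise ValueError("unsupported state: " + new_state)
        self.state = new_state

    def query_model(self):            # model matching the current state
        return self.models[self.state]

button = Actuator({"off": "button_up.xml3d", "on": "button_down.xml3d"})
assert button.query_model() == "button_up.xml3d"
button.post("on")
assert button.query_model() == "button_down.xml3d"
```

This keeps the appearance/state mapping on the sensor side; the alternative discussed above is a single model with transition logic, which would move that mapping into the model itself.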
> Either we specify the sensor type through some semantic means (a simple
> tag in the simplest case, a full RDFa graph in the best case) and let
> the application choose how to represent it, or we need to find a way to
> map generic behavior of a default object to application functionality.
> The first seems much easier to me, as application functionality is
> likely to vary much more than sensor functionality. And semantic sensor
> descriptions have been worked on for a long time and are available on
> the market.
>
> Of course, there are hybrid methods as well: a simple one would be to
> include a URI/URL to a default model in the semantic sensor description
> that then gets loaded either from the sensor through REST (given some
> namespace there) or via the Web (again using some namespace or search
> strategy). Then the application can always inject its own mapping to
> what it thinks is the best mapping.
>
> Best,
>
> Philipp
>
> Am 25.10.2013 07:52, schrieb Tomi Sarni:
>> *The following is completely on a theoretical level:*
>>
>> To mix things up a little further, I've been thinking about a
>> possibility to store the visual representation of sensors within the
>> sensors themselves. Many sensor types allow HTTP POST/GET or even
>> PUT/DELETE methods (wrapped in SNMP/CoAP communication protocols, for
>> instance), which in theory would allow sensor subscribers to also
>> publish information in sensors (e.g. upload an XML3D model). This
>> approach could be useful in cases where these sensors would have
>> different purposes of use. But the sensor may have very little space
>> to use for the model, perhaps 8-18 KB. Also, the web service can
>> attach the models to IDs through the use of a database. This is really
>> just a pointer; perhaps there would be use cases where the sensor
>> visualization could be stored within the sensor itself. I think
>> specifically some AR solutions could benefit from this.
>> But do not let this mix up things; this perhaps reinforces the fact
>> that there need to be overlaying middleware services that attach
>> visual representations based on their own needs. One service could use
>> a different 3D representation for a temperature sensor than another
>> one.
>>
>> On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek wrote:
>>
>> Hi,
>>
>> OK, now I get it. This does make sense -- at least in a local
>> scenario, where the POI data (in this example) needs to be stored
>> somewhere anyway, and storing it in a component and then generating
>> the appropriate visual component does make sense. Using web components
>> or a similar mechanism we could actually do the same via the DOM (as
>> discussed for the general ECA sync before).
>>
>> But even then you might actually not want to store all the POI data
>> but only the part that really matters to the application (there may be
>> much more data -- maybe not for POIs but potentially for other
>> things).
>>
>> Also in a distributed scenario, I am not so sure. In that case you
>> might want to do that mapping on the server and only sync the
>> resulting data, maybe with a reference back so you can still interact
>> with the original data through a service call. That is the main reason
>> why I in general think of POI data and POI representation as separate
>> entities.
>>
>> Regarding terminology, I think it does make sense to differentiate
>> between the 3D scene and the application state (that is not directly
>> influencing the 3D rendering and interaction). While you store them
>> within the same data entity (but in different components), they still
>> refer to quite different things and are operated on by different parts
>> of your program (e.g. the renderer only ever touches the "scene"
>> data). We do the same within the XML3D core, where we attach
>> renderer-specific data to DOM nodes, and I believe three.js also does
>> something similar within its data structures.
>> At the end, you have to store these things somewhere and there are
>> only so many ways to implement it. The differences are not really that
>> big.
>>
>> Best,
>>
>> Philipp
>>
>> Am 24.10.2013 19:24, schrieb Toni Alatalo:
>>
>> On 24 Oct 2013, at 19:24, Philipp Slusallek wrote:
>>
>> Good discussion!
>>
>> I find so too, thanks for the questions and comments and all! Now
>> briefly about just one point:
>>
>> Am 24.10.2013 17:37, schrieb Toni Alatalo:
>>
>> integrates to the scene system too - for example if a scene server
>> queries POI services, does it then only use the data to manipulate the
>> scene using other non-POI components, or does it often make sense also
>> to include POI components in the scene so that the clients get it too
>> automatically with the scene sync and can for example provide
>> POI-specific GUI tools. Ofc clients can query POI services directly
>> too but this server-centric setup is also one scenario and there the
>> scene integration might make sense.
>>
>> But I would say that there is a clear distinction between the POI data
>> (which you query from some service) and the visualization or
>> representation of the POI data. Maybe you are more talking about the
>> latter here. However, there really is an application-dependent mapping
>> from the POI data to its representation. Each application may choose
>> to present the same POI data in a very different way and it's only
>> this resulting representation that becomes part of the scene.
>>
>> No, I was not talking about visualization or representations here but
>> the POI data.
>>
>> non-POI in the above tried to refer to the whole which covers
>> visualisations etc :)
>>
>> Your last sentence may help to understand the confusion: in these
>> posts I've been using the reX entity system terminology only,
>> hoping that it is clear to discuss that way and not mix terms (like
>> I've tried to do in some other threads).
>>
>> There "scene" does not refer to a visual / graphical or any other type
>> of scene. It does not refer to e.g. something like what xml3d.js and
>> three.js, or Ogre, have as their Scene objects.
>>
>> It simply means the collection of all entities. There it is perfectly
>> valid to have any kind of data which does not end up in e.g. the
>> visual scene; many components are like that.
>>
>> So in the above "only use the data to manipulate the scene using other
>> non-POI components" was referring to for example the creation of Mesh
>> components if some POI is to be visualised that way. The mapping that
>> you were discussing.
>>
>> But my point was not about that but about the POI data itself, and the
>> example about some end-user GUI with a widget that manipulates it. So
>> it then gets automatically synchronised along with all the other data
>> in the application in a collaborative setting etc.
>>
>> Stepping out of the previous terminology, we could perhaps translate:
>> "scene" -> "application state" and "scene server" -> "synchronization
>> server".
>>
>> I hope this clarifies something; my apologies if not.
>>
>> Cheers,
>> ~Toni
>>
>> P.S. I sent the previous post from a foreign device and accidentally
>> with my gmail address as sender, so it didn't make it to the list; so
>> thank you for quoting it in full, I don't think we need to repost
>> that :)
>>
>> This is essentially the Mapping stage of the well-known Visualization
>> pipeline
>> (http://www.infovis-wiki.net/index.php/Visualization_Pipeline), except
>> that here we also map interaction aspects to an abstract scene
>> description (XML3D) first, which then performs the rendering and
>> interaction. So you can think of this as an additional "Scene" stage
>> between "Mapping" and "Rendering".
>> I think this is a different topic, but also with real-virtual
>> interaction, for example how to facilitate nice simple authoring of
>> the e.g. real-virtual object mappings seems a fruitful enough angle to
>> think a bit, perhaps as a case to help in understanding the entity
>> system & the different servers etc. For example if there's a component
>> type 'real world link', the Interface Designer GUI shows it
>> automatically in the list of components, ppl can just add them to
>> their scenes and somehow then the system just works..
>>
>> I am not sure what you are getting at. But it would be great if the
>> Interface Designer would allow choosing such POI mappings from a
>> predefined catalog. It seems that Xflow can be used nicely for
>> generating the mapped scene elements from some input data, e.g. using
>> the same approach we use to provide basic primitives like cubes or
>> spheres in XML3D. Here they are not fixed, built-in tags as in X3D but
>> can actually be added by the developer as it best fits.
>>
>> For generating more complex subgraphs we may have to extend the
>> current Xflow implementation. But it's at least a great starting point
>> to experiment with it. Experiments and feedback would be very welcome
>> here.
>>
>> I don't think these discussions are now hurt by us (currently) having
>> alternative renderers - the entity system, formats, sync and the
>> overall architecture is the same anyway.
>>
>> Well, some things only work in one and others only in the other
>> branch. So the above mechanism could not be used to visualize POIs in
>> the three.js branch, but we do not have all the features to visualize
>> Oulu (or whatever city) in the XML3D.js branch. This definitely IS
>> greatly limiting how we can combine the GEs into more complex
>> applications -- the ultimate goal of the orthogonal design of this
>> chapter.
>> And it does not even work within the same chapter. It will be hard to
>> explain to Juanjo and others from FI-WARE (or the commission for that
>> matter).
>>
>> BTW, I just learned today that there is a FI-WARE smaller review
>> coming up soon. Let's see if we already have to present things there.
>> So far they have not explicitly asked us.
>>
>> Best,
>>
>> Philipp
>>
>> -Toni
>>
>> From an XML3D POV things could actually be quite "easy". It should be
>> rather simple to directly interface to the IoT GEs of FI-WARE through
>> REST via a new Xflow element. This would then make the data available
>> through elements. Then you can use all the features of Xflow to
>> manipulate the scene based on the data. For example, we are discussing
>> building a set of visualization nodes that implement common
>> visualization metaphors, such as scatter plots, animations, you name
>> it. A new member of the lab starting soon wants to look into this
>> area.
>>
>> For acting on objects we have always used Web services attached to the
>> XML3D objects via DOM events. Eventually, I believe we want a
>> higher-level input handling and processing framework, but no one knows
>> so far how this should look (we have some ideas but they are not well
>> baked; any input is highly welcome here). This might or might not
>> reuse some of the Xflow mechanisms.
>>
>> But how to implement Real-Virtual Interaction is indeed an interesting
>> discussion. Getting us all on the same page and sharing ideas and
>> implementations is very helpful. Doing this on the same SW platform
>> (without the fork that we currently have) would facilitate a powerful
>> implementation even more.
>>
>> Thanks
>>
>> Philipp
>>
>> Am 23.10.2013 08:02, schrieb Tomi Sarni:
>>
>> ->Philipp
>> /I did not get the idea why POIs are similar to ECA. At a very high
>> level I see it, but I am not sure what it buys us.
Can someone sketch that picture in some more detail?/
>>
>> Well I suppose it becomes relevant at the point when we are combining our GEs together. If the model can be applied at the level of the scene, then down to a POI in a scene and further down at the sensor level, things can be more easily visualized. Not just in terms of painting 3D models, but in terms of handling big data as well, more specifically handling relationships/inheritance. It also makes it easier to design a RESTful API, as we have a common structure to follow, and it also provides more opportunities for 3rd-party developers to make use of the data for their own purposes.
>>
>> For instance
>>
>> ->Toni
>>
>> From the point of view of sensors, the entity-component becomes device-sensors/actuators. A device may have a unique identifier and IP by which to access it, but it may also contain several actuators and sensors that are components of that device entity. Sensors/actuators themselves are not aware of whom they are interesting to. One client may use the sensor information differently from another client. The sensor/actuator service allows any other service to query using a request/response method, either by geo-coordinates (circle, square or complex shape queries) or perhaps through type+maxresults, and the service will return entities and their components, from which the requester can form logical groups (arrays of entity UUIDs) and query more detailed information based on that logical group.
>>
>> I guess there needs to be similar thinking done on the POI level. I guess a POI does not know which scene it belongs to. It is up to the scene server to form a logical group of POIs (e.g. restaurants of the Oulu 3D city model). Then again the problem is that the scene needs to wait for the POI to query for sensors and form its logical groups before it can pass information to the scene.
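The request/response query service described above (geo-coordinate circle queries or type+maxresults queries returning entities and their components, from which a requester forms logical groups of entity UUIDs) could look roughly like this minimal sketch; the registry layout, component names and functions are purely hypothetical, not an actual FI-WARE API:

```python
import math
import uuid

# Hypothetical in-memory registry sketching the sensor/actuator query
# service described above. All names and the device layout are
# illustrative assumptions.

def make_device(name, lat, lon, components):
    return {"id": str(uuid.uuid4()), "name": name,
            "lat": lat, "lon": lon, "components": components}

DEVICES = [
    make_device("lamp-1", 65.012, 25.465, ["actuator:switch"]),
    make_device("thermo-1", 65.013, 25.470, ["sensor:temperature"]),
    make_device("thermo-2", 66.500, 25.700, ["sensor:temperature"]),
]

def query_by_circle(lat, lon, radius_km):
    """Circle geo-query: devices within radius_km of the given point."""
    def dist_km(d):
        # Equirectangular approximation, adequate at city scale.
        x = math.radians(d["lon"] - lon) * math.cos(math.radians(lat))
        y = math.radians(d["lat"] - lat)
        return 6371.0 * math.hypot(x, y)
    return [d for d in DEVICES if dist_km(d) <= radius_km]

def query_by_type(component_type, maxresults):
    """type+maxresults query over device components."""
    return [d for d in DEVICES if component_type in d["components"]][:maxresults]

# The requester can then form a logical group (an array of entity UUIDs)
# and later query more detailed information for just that group.
group = [d["id"] for d in query_by_circle(65.012, 25.465, 2.0)]
```

The same two entry points would map naturally onto two REST query parameters (a shape parameter and a type+maxresults pair) in an actual service.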
This can lead to long wait times. But this sequencing problem is also something that could be thought about. Anyway, this is a common problem with everything on the web at the moment, in my opinion. Services become intertwined. When a client loads a web page there can be queries to 20 different services for advertisement and other stuff. The web page handles it by painting stuff to the client on a receive basis. I think this could be applied in Scene as well.
>>
>> On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek wrote:
>>
>> Hi,
>>
>> First of all, it's certainly a good thing to also meet locally. I was just a bit confused whether that meeting somehow would involve us as well. Summarizing the results briefly for the others would definitely be interesting.
>>
>> I did not get the idea why POIs are similar to ECA. At a very high level I see it, but I am not sure what it buys us. Can someone sketch that picture in some more detail?
>>
>> BTW, what is the status of the Rendering discussion (Three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided.
>>
>> BTW, as part of our shading work (which is shaping up nicely) Felix has lately been looking at a way to describe rendering stages (passes) essentially through Xflow. It is still very experimental, but he is using it to implement shadow maps right now.
>>
>> @Felix: Once this has converged into a bit more stable idea, it would be good to post it here to get feedback. The way we discussed it, this approach could form a nice basis for a modular design of advanced rasterization techniques (reflection maps, adv. face rendering, SSAO, lens flare, tone mapping, etc.), and (later) maybe also describe global illumination settings (similar to our work on LightingNetworks some years ago).
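The Xflow ideas in this thread (a REST element feeding IoT data into the scene, plus visualization nodes such as scatter plots) reduce to a simple dataflow pattern. A generic, hypothetical sketch of that shape, explicitly not the actual Xflow API:

```python
# Hypothetical stand-ins for the Xflow pieces discussed above: a REST
# data source node and a visualization operator node. Names and shapes
# are illustrative only.

def rest_source(payload):
    # Stands in for a new Xflow element that would fetch IoT data via
    # REST; here the payload is passed in directly.
    return payload

def scatter_plot(samples):
    # One of the discussed visualization metaphors: map each sample to
    # a point position, with the measured value driving one axis.
    return [(i, float(s["value"]), 0.0) for i, s in enumerate(samples)]

# Compose: data source -> mapping operator -> scene-ready positions.
data = rest_source([{"value": 21.5}, {"value": 22.0}, {"value": 19.8}])
points = scatter_plot(data)
```

In the real system the composition would be declared in the XML3D document and re-evaluated by Xflow whenever the source data changes; the point of the sketch is only the source-to-operator pipeline shape.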
>> Best,
>> Philipp
>>
>> Am 22.10.2013 23:03, schrieb toni at playsign.net:
>>
>> Just a brief note: we had some interesting preliminary discussion triggered by how the data schema that Ari O. presented for the POI system seemed at least partly similar to what the Real-Virtual interaction work had resulted in too -- and in fact about how the proposed POI schema was basically a version of the entity-component model which we've already been using for scenes in realXtend (it is inspired by / modeled after it, Ari told). So it can be much related to the Scene API work in the Synchronization GE too. As the action point we agreed that Ari will organize a specific work session on that.
>>
>> I was now thinking that it perhaps at least partly leads back to the question: how do we define (and implement) component types, i.e. what was mentioned in that entity-system post a few weeks back (with links to reX IComponent etc.). I mean: if functionality such as POIs and real-world interaction makes sense as somehow resulting in custom data component types, does it mean that a key part of the framework is a way for those systems to declare their types .. so that it integrates nicely into the whole we want? I'm not sure, too tired to think it through now, but anyhow just wanted to mention that this was one topic that came up.
>>
>> I think Web Components is again something to check - as in XML terms reX Components are xml(3d) elements .. just ones that are usually in a group (according to the reX entity <-> xml3d group mapping). And Web Components are about defining & implementing new elements (as Erno pointed out in a different discussion about xml-html authoring in the session).
>> BTW thanks Kristian for the great comments in that entity system thread - it was really good to learn about the alternative attribute access syntax and the validation in XML3D(.js).
>>
>> ~Toni
>>
>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the rationale of these Oulu meets -- the idea is ofc not to exclude you from the talks, but it just makes sense for us to meet live too as we are in the same city after all etc -- naturally with the DFKI team you also talk there locally. Perhaps it is a good idea that we make notes so that we can post them e.g. here then (I'm not volunteering though!). Also, the now agreed bi-weekly setup on Tuesdays luckily works so that we can then summarize fresh in the global Wed meetings and continue the talks etc.
>>
>> *From:* Erno Kuusela
>> *Sent:* Tuesday, October 22, 2013 9:57 AM
>> *To:* Fiware-miwi
>>
>> Kari from CIE offered to host it this time, so see you there at 13:00.
>> Erno
>>
>> _______________________________________________
>> Fiware-miwi mailing list
>> Fiware-miwi at lists.fi-ware.eu
>> https://lists.fi-ware.eu/listinfo/fiware-miwi
>>
>> --
>> -------------------------------------------------------------
>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
>> Trippstadter Strasse 122, D-67663 Kaiserslautern
>>
>> Geschäftsführung:
>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>> Dr. Walter Olthoff
>> Vorsitzender des Aufsichtsrats:
>> Prof. Dr. h.c. Hans A. Aukes
>>
>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
>> -------------------------------------------------------------

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tomi.sarni at cyberlightning.com Fri Oct 25 09:08:15 2013
From: tomi.sarni at cyberlightning.com (Tomi Sarni)
Date: Fri, 25 Oct 2013 10:08:15 +0300
Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00)
In-Reply-To: 
References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> <52696BD5.8070007@dfki.de> <526A0FB1.9000709@dfki.de>
Message-ID: 

By transition logic I mean it would be more efficient, storage-space-wise, to use the existing graphical model and give mathematical instructions on how to animate it to achieve different states (analogous to button up and down). Storage efficiency is crucial at the sensor level.

On Fri, Oct 25, 2013 at 10:05 AM, Tomi Sarni wrote:

> *It becomes more of an issue when we talk about interactivity, when the visual representation needs to react to user input in a way that is consistent with the application and calls functionality in the application. In other words, you have to do a mapping from the sensor to the application at some point along the pipeline (and back for actions to be performed by an actuator).*
>
> Currently when a client polls a device (containing sensors and/or actuators) it will receive all the interaction options that are available for the particular sensor or actuator. These options can then be accessed via an HTTP POST method from the service. So there is the logical mapping.
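The poll-then-POST interaction described above, combined with the button-up/button-down state idea, could be sketched as follows; the class, state names and model paths are hypothetical illustrations, not an existing API:

```python
# Hypothetical actuator sketch: a client polls for interaction options,
# changes state with a POST-like call, and queries the representation
# matching the current state. One model reference per discrete state;
# a storage-leaner alternative would be one model plus transition
# (animation) instructions between states.

class Switch:
    # One representation (model reference) per discrete state.
    MODELS = {"on": "models/button_down.xml3d",
              "off": "models/button_up.xml3d"}

    def __init__(self):
        self.state = "off"

    def options(self):
        # What a polling client would receive: the supported states.
        return sorted(self.MODELS)

    def post(self, new_state):
        # HTTP-POST-like state change.
        if new_state not in self.MODELS:
            raise ValueError("unsupported state: " + new_state)
        self.state = new_state

    def model(self):
        # Return the model corresponding to the current state.
        return self.MODELS[self.state]

sw = Switch()
sw.post("on")
```

After the `post("on")` call, `sw.model()` yields the pressed-button representation; a query before any POST would yield the default "off" model.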
> I can see your point though; in a way it would seem logical to have that XML3D model contain states (e.g. button-up and button-down 3D model states), and I have no idea whether this is supported by XML3D, as I have been busy on the server/sensor side. This way, when a sensor is accessed by an HTTP POST call to change state to either on or off, for instance, the XML3D model could contain transition logic to change appearance from one state to another. Alternatively there can be two models for two states. When the actuator is queried it will return the model that corresponds to its current state.
>
> On Fri, Oct 25, 2013 at 9:29 AM, Philipp Slusallek wrote:
>
>> Hi Tomi,
>>
>> Yes, this is definitely an interesting option, and when sensors offer RESTful interfaces it should be almost trivial to add (once a suitable and standardized way of how to find that data is specified). At least it would provide a kind of default visualization in case no other is available.
>>
>> It becomes more of an issue when we talk about interactivity, when the visual representation needs to react to user input in a way that is consistent with the application and calls functionality in the application. In other words, you have to do a mapping from the sensor to the application at some point along the pipeline (and back for actions to be performed by an actuator).
>>
>> Either we specify the sensor type through some semantic means (a simple tag in the simplest case, a full RDF/a graph in the best case) and let the application choose how to represent it, or we need to find a way to map generic behavior of a default object to application functionality. The first seems much easier to me, as application functionality is likely to vary much more than sensor functionality. And semantic sensor descriptions have been worked on for a long time and are available on the market.
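The semantic-tag approach sketched above (the application chooses the representation for a sensor type it knows, with the sensor's own default model as a fallback) might look like this; all field names are assumptions, not a standardized schema:

```python
# Hedged sketch of semantic-type-to-representation mapping. The
# description fields ("type", "default_model") and the mapping table
# are hypothetical.

def resolve_representation(sensor_desc, app_mappings):
    """Prefer the application's mapping for the sensor's semantic type;
    fall back to the default model URI carried in the description."""
    sensor_type = sensor_desc.get("type")       # simple semantic tag
    if sensor_type in app_mappings:
        return app_mappings[sensor_type]
    return sensor_desc.get("default_model")    # default visualization

desc = {"type": "temperature",
        "default_model": "http://example.org/thermo.xml3d"}

# An application that knows temperature sensors injects its own mapping;
rep = resolve_representation(desc, {"temperature": "app/gauge.xml3d"})
# an application without one falls back to the sensor's default model.
fallback = resolve_representation(desc, {})
```

With a full RDF/a graph instead of a plain tag, the lookup would become a graph query, but the choose-or-fall-back logic stays the same.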
>> Of course, there are hybrid methods as well: A simple one would be to include a URI/URL to a default model in the semantic sensor description that then gets loaded either from the sensor through REST (given some namespace there) or via the Web (again using some namespace or search strategy). Then the application can always inject its own mapping to what it thinks is the best mapping.
>>
>> Best,
>> Philipp
>>
>> Am 25.10.2013 07:52, schrieb Tomi Sarni:
>>
>>> *The following is completely on a theoretical level:*
>>>
>>> To mix things a little further, I've been thinking about a possibility to store the visual representation of sensors within the sensors themselves. Many sensor types allow HTTP POST/GET or even PUT/DELETE methods (wrapped in SNMP/CoAP communication protocols for instance), which in theory would allow sensor subscribers to also publish information in sensors (e.g. upload an XML3D model). This approach could be useful in cases where these sensors would have different purposes of use. But the sensor may have very little space to use for the model, from 8-18 KB. Also the web service can attach the models to IDs through use of a database. This is really just a pointer; perhaps there would be use-cases where the sensor visualization could be stored within the sensor itself. I think specifically some AR solutions could benefit from this. But do not let this mix things up; this perhaps reinforces the fact that there need to be overlaying middleware services that attach visual representation based on their own needs. One service could use a different 3D representation for a temperature sensor than another one.
>>>
>>> On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek wrote:
>>>
>>> Hi,
>>>
>>> OK, now I get it.
>>> This does make sense -- at least in a local scenario, where the POI data (in this example) needs to be stored somewhere anyway, and storing it in a component and then generating the appropriate visual component does make sense. Using Web Components or a similar mechanism we could actually do the same via the DOM (as discussed for the general ECA sync before).
>>>
>>> But even then you might actually not want to store all the POI data but only the parts that really matter to the application (there may be much more data -- maybe not for POIs but potentially for other things).
>>>
>>> Also in a distributed scenario, I am not so sure. In that case you might want to do that mapping on the server and only sync the resulting data, maybe with a reference back so you can still interact with the original data through a service call. That is the main reason why I in general think of POI data and POI representation as separate entities.
>>>
>>> Regarding terminology, I think it does make sense to differentiate between the 3D scene and the application state (that is not directly influencing the 3D rendering and interaction). While you store them within the same data entity (but in different components), they still refer to quite different things and are operated on by different parts of your program (e.g. the renderer only ever touches the "scene" data). We do the same within the XML3D core, where we attach renderer-specific data to DOM nodes, and I believe three.js also does something similar within its data structures. At the end, you have to store these things somewhere and there are only so many ways to implement it. The differences are not really that big.
>>>
>>> Best,
>>>
>>> Philipp
>>>
>>> Am 24.10.2013 19:24, schrieb Toni Alatalo:
>>>
>>> On 24 Oct 2013, at 19:24, Philipp Slusallek wrote:
>>>
>>> Good discussion!
>>>
>>> I find so too -
thanks for the questions and comments and all! Now briefly about just one point:
>>>
>>> Am 24.10.2013 17:37, schrieb Toni Alatalo:
>>>
>>> integrates to the scene system too - for example if a scene server queries POI services, does it then only use the data to manipulate the scene using other non-POI components, or does it often make sense also to include POI components in the scene, so that the clients get it too automatically with the scene sync and can for example provide POI-specific GUI tools. Ofc clients can query POI services directly too, but this server-centric setup is also one scenario and there the scene integration might make sense.
>>>
>>> But I would say that there is a clear distinction between the POI data (which you query from some service) and the visualization or representation of the POI data. Maybe you are more talking about the latter here. However, there really is an application-dependent mapping from the POI data to its representation. Each application may choose to present the same POI data in a very different way, and it's only this resulting representation that becomes part of the scene.
>>>
>>> No, I was not talking about visualization or representations here but the POI data.
>>>
>>> "non-POI" in the above tried to refer to the whole which covers visualisations etc :)
>>>
>>> Your last sentence may help to understand the confusion: in these posts I've been using the reX entity system terminology only - hoping that it is clear to discuss that way and not mix terms (like I've tried to do in some other threads).
>>>
>>> There "scene" does not refer to a visual / graphical or any other type of scene. It does not refer to e.g. something like what xml3d.js and three.js, or Ogre, have as their Scene objects.
>>>
>>> It simply means the collection of all entities.
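The terminology above ("scene" as simply the collection of all entities, with POI data living in a component next to non-POI components, and a mapping step deriving e.g. Mesh components from it) can be sketched minimally; component names and the mapping rule are illustrative assumptions:

```python
# Minimal entity-component sketch: the "scene" is just all entities.
# A 'poi' data component is synced like any other component; an
# application-dependent mapping derives a 'mesh' component from it.

scene = {}  # entity id -> {component name -> component data}

def add_entity(eid, **components):
    scene[eid] = dict(components)

def map_pois_to_meshes(scene):
    """Application-dependent mapping: give every entity carrying a
    'poi' component a derived 'mesh' component for visualization."""
    for entity in scene.values():
        if "poi" in entity and "mesh" not in entity:
            entity["mesh"] = {"ref": "models/poi_marker.xml3d",
                              "label": entity["poi"]["name"]}

add_entity("e1", poi={"name": "Restaurant", "lat": 65.01, "lon": 25.47})
add_entity("e2", placeable={"pos": (0.0, 0.0, 0.0)})  # non-POI entity
map_pois_to_meshes(scene)
```

The POI component itself stays in the synced application state (so e.g. a GUI widget can edit it collaboratively), while only the derived mesh component feeds the renderer.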
There it is >>> perfectly >>> valid to any kind of data which does not end up to e.g. the >>> visual scene >>> ? many components are like that. >>> >>> So in the above ?only use the data to manipulate the scene using >>> other >>> non-POI components? was referring to for example creation of Mesh >>> components if some POI is to be visualised that way. The mapping >>> that >>> you were discussing. >>> >>> But my point was not about that but about the POI data itself ? >>> and the >>> example about some end user GUI with a widget that manipulates >>> it. So it >>> then gets automatically synchronised along with all the other >>> data in >>> the application in a collaborative setting etc. >>> >>> Stepping out of the previous terminology, we could perhaps >>> translate: >>> ?scene? -> ?application state? and ?scene server? -> >>> ?synchronization >>> server?. >>> >>> I hope this clarifies something ? my apologies if not.. >>> >>> Cheers, >>> ~Toni >>> >>> P.S. i sent the previous post from a foreign device and >>> accidentally >>> with my gmail address as sender so it didn?t make it to the list >>> ? so >>> thank you for quoting it in full so I don?t think we need to >>> repost that :) >>> >>> This is essentially the Mapping stage of the well-known >>> Visualization >>> pipeline >>> (http://www.infovis-wiki.net/_**_index.php/Visualization___* >>> *Pipeline >>> >> Pipeline >>> >), >>> >>> except >>> that here we also map interaction aspects to an abstract >>> scene >>> description (XML3D) first, which then performs the rendering >>> and >>> interaction. So you can think of this as an additional >>> "Scene" stage >>> between "Mapping" and "Rendering". >>> >>> I think this is a different topic, but also with >>> real-virtual >>> interaction for example how to facilitate nice simple >>> authoring of >>> the e.g. 
real-virtual object mappings seems a fruitful >>> enough angle >>> to think a bit, perhaps as a case to help in >>> understanding the entity >>> system & the different servers etc. For example if >>> there's a >>> component type 'real world link', the Interface Designer >>> GUI shows it >>> automatically in the list of components, ppl can just >>> add them to >>> their scenes and somehow then the system just works.. >>> >>> >>> I am not sure what you are getting at. But it would be great >>> if the >>> Interface Designer would allow to choose such POI mappings >>> from a >>> predegined catalog. It seems that Xflow can be used nicely >>> for >>> generating the mapped scene elements from some input data, >>> e.g. using >>> the same approach we use to provide basic primitives like >>> cubes or >>> spheres in XML3D. Here they are not fixed, build-in tags as >>> in X3D but >>> can actually be added by the developer as it best fits. >>> >>> For generating more complex subgraphs we may have to extend >>> the >>> current Xflow implementation. But its at least a great >>> starting point >>> to experiment with it. Experiments and feedback would be >>> very welcome >>> here. >>> >>> I don't think these discussions are now hurt by us >>> (currently) having >>> alternative renderers - the entity system, formats, sync >>> and the >>> overall architecture is the same anyway. >>> >>> >>> Well, some things only work in one and others only in the >>> other >>> branch. So the above mechanism could not be used to >>> visualize POIs in >>> the three.js branch but we do not have all the features to >>> visualize >>> Oulu (or whatever city) in the XML3D.js branch. This >>> definitely IS >>> greatly limiting how we can combine the GEs into more complex >>> applications -- the untimate goal of the orthogonal design >>> of this >>> chapter. >>> >>> And it does not even work within the same chapter. 
It will >>> be hard to >>> explain to Juanjo and others from FI-WARE (or the commission >>> for that >>> matter). >>> >>> BTW, I just learned today that there is a FI-WARE smaller >>> review >>> coming up soon. Let's see if we already have to present >>> things there. >>> So far they have not explicitly asked us. >>> >>> >>> Best, >>> >>> Philipp >>> >>> -Toni >>> >>> >>> From an XML3D POV things could actually be quite >>> "easy". It should >>> be rather simple to directly interface to the IoT >>> GEs of FI-WARE >>> through REST via a new Xflow element. This would >>> then make the data >>> available through elements. Then you can use >>> all the features >>> of Xflow to manipulate the scene based on the data. >>> For example, we >>> are discussing building a set of visualization nodes >>> that implement >>> common visualization metaphors, such as scatter >>> plots, animations, >>> you name it. A new member of the lab starting soon >>> wants to look >>> into this area. >>> >>> For acting on objects we have always used Web >>> services attached to >>> the XML3D objects via DOM events. Eventually, I >>> believe we want a >>> higher level input handling and processing framework >>> but no one >>> knows so far, how this should look like (we have >>> some ideas but they >>> are not well baked, any inpu is highly welcome >>> here). This might or >>> might not reuse some of the Xflow mechanisms. >>> >>> But how to implement RealVirtual Interaction is >>> indeed an intersting >>> discussion. Getting us all on the same page and >>> sharing ideas and >>> implementations is very helpful. Doing this on the >>> same SW platform >>> (without the fork that we currently have) would >>> facilitate a >>> powerful implementation even more. >>> >>> >>> Thanks >>> >>> Philipp >>> >>> Am 23.10.2013 08:02, schrieb Tomi Sarni: >>> >>> ->Philipp >>> /I did not get the idea why POIs are similar to >>> ECA. At a very high >>> level I see it, but I am not sure what it buys >>> us. 
Can someone sketch >>> that picture in some more detail?/ >>> >>> Well I suppose it becomes relevant at point when >>> we are combining our >>> GEs together. If the model can be applied in >>> level of scene then >>> down to >>> POI in a scene and further down in sensor level, >>> things can be >>> more easily visualized. Not just in terms of >>> painting 3D models but in >>> terms of handling big data as well, more >>> specifically handling >>> relationships/inheritance. It also makes it >>> easier >>> to design a RESTful API as we have a common >>> structure which to follow >>> and also provides more opportunities for 3rd >>> party developers to make >>> use of the data for their own purposes. >>> >>> For instance >>> >>> ->Toni >>> >>> From point of sensors, the entity-component >>> becomes >>> device-sensors/actuators. A device may have an >>> unique identifier and IP >>> by which to access it, but it may also contain >>> several actuators and >>> sensors >>> that are components of that device entity. >>> Sensors/actuators >>> themselves >>> are not aware to whom they are interesting to. >>> One client may use the >>> sensor information differently to other client. >>> Sensor/actuator service >>> allows any other service to query using >>> request/response method either >>> by geo-coordinates (circle,square or complex >>> shape queries) or perhaps >>> through type+maxresults and service will return >>> entities and their >>> components >>> from which the reqester can form logical >>> groups(array of entity uuids) >>> and query more detailed information based on >>> that logical group. >>> >>> I guess there needs to be similar thinking done >>> on POI level. I guess >>> POI does not know which scene it belongs to. It >>> is up to scene >>> server to >>> form a logical group of POIs (e.g. restaurants >>> of oulu 3d city >>> model). 
Then >>> again the problem is that scene needs to wait >>> for POI to query for >>> sensors and form its logical groups before it >>> can pass information to >>> scene. This can lead to long wait times. But >>> this sequencing problem is >>> also something >>> that could be thought. Anyways this is a common >>> problem with everything >>> in web at the moment in my opinnion. Services >>> become intertwined. >>> When a >>> client loads a web page there can be queries to >>> 20 different services >>> for advertisment and other stuff. Web page >>> handles it by painting stuff >>> to the client on receive basis. I think this >>> could be applied in Scene >>> as well. >>> >>> >>> >>> >>> >>> On Wed, Oct 23, 2013 at 8:00 AM, Philipp >>> Slusallek >>> >> >>> > >>> >>> >>> >> >>> >>> >>> >>> >>> wrote: >>> >>> Hi, >>> >>> First of all, its certainly a good thing to >>> also meet locally. I was >>> just a bit confused whether that meeting >>> somehow would involve us as >>> well. Summarizing the results briefly for >>> the others would >>> definitely be interesting. >>> >>> I did not get the idea why POIs are similar >>> to ECA. At a very high >>> level I see it, but I am not sure what it >>> buys us. Can someone >>> sketch that picture in some more detail? >>> >>> BTW, what is the status with the Rendering >>> discussion (Three.js vs. >>> xml3d.js)? I still have the feeling that we >>> are doing parallel work >>> here that should probably be avoided. >>> >>> BTW, as part of our shading work (which is >>> shaping up nicely) Felix >>> has been looking lately at a way to describe >>> rendering stages >>> (passes) essentially through Xflow. It is >>> still very experimental >>> but he is using it to implement shadow maps >>> right now. >>> >>> @Felix: Once this has converged into a bit >>> more stable idea, it >>> would be good to post this here to get >>> feedback. 
The way we >>> discussed it, this approach could form a >>> nice basis for a modular >>> design of advanced rasterization techniques >>> (reflection maps, adv. >>> face rendering, SSAO, lens flare, tone >>> mapping, etc.), and (later) >>> maybe also describe global illumination >>> settings (similar to our >>> work on LightingNetworks some years ago). >>> >>> >>> Best, >>> >>> Philipp >>> >>> Am 22.10.2013 23:03, schrieb >>> toni at playsign.net >>> >> > >>> >>> >> >: >>> >>> Just a brief note: we had some >>> interesting preliminary >>> discussion >>> triggered by how the data schema that >>> Ari O. presented for >>> the POI >>> system seemed at least partly similar to >>> what the Real-Virtual >>> interaction work had resulted in too -- >>> and in fact about >>> how the >>> proposed POI schema was basically a >>> version of the >>> entity-component >>> model which we?ve already been using for >>> scenes in realXtend >>> (it is >>> inspired by / modeled after it, Ari >>> told). So it can be much >>> related to >>> the Scene API work in the >>> Synchronization GE too. As the action >>> point we >>> agreed that Ari will organize a specific >>> work session on that. >>> I was now thinking that it perhaps at >>> least partly leads >>> back to the >>> question: how do we define (and >>> implement) component types. I.e. >>> what >>> was mentioned in that entity-system post >>> a few weeks back (with >>> links >>> to reX IComponent etc.). I mean: if >>> functionality such as >>> POIs and >>> realworld interaction make sense as >>> somehow resulting in >>> custom data >>> component types, does it mean that a key >>> part of the framework >>> is a way >>> for those systems to declare their types >>> .. so that it >>> integrates nicely >>> for the whole we want? I?m not sure, too >>> tired to think it >>> through now, >>> but anyhow just wanted to mention that >>> this was one topic that >>> came up. 
>>> I think Web Components is again >>> something to check - as in XML >>> terms reX >>> Components are xml(3d) elements .. just >>> ones that are usually in >>> a group >>> (according to the reX entity <-> xml3d >>> group mapping). And Web >>> Components are about defining & >>> implementing new elements >>> (as Erno >>> pointed out in a different discussion >>> about xml-html authoring >>> in the >>> session). >>> BTW Thanks Kristian for the great >>> comments in that entity system >>> thread - was really good to learn about >>> the alternative >>> attribute access >>> syntax and the validation in XML3D(.js). >>> ~Toni >>> P.S. for (Christof &) the DFKI folks: >>> I?m sure you >>> understand the >>> rationale of these Oulu meets -- idea is >>> ofc not to exclude you >>> from the >>> talks but just makes sense for us to >>> meet live too as we are in >>> the same >>> city afterall etc -- naturally with the >>> DFKI team you also talk >>> there >>> locally. Perhaps is a good idea that we >>> make notes so that can >>> post e.g. >>> here then (I?m not volunteering though! >>> ?) . Also, the now >>> agreed >>> bi-weekly setup on Tuesdays luckily >>> works so that we can then >>> summarize >>> fresh in the global Wed meetings and >>> continue the talks etc. >>> *From:* Erno Kuusela >>> *Sent:* ?Tuesday?, ?October? ?22?, ?2013 >>> ?9?:?57? ?AM >>> *To:* Fiware-miwi >>> >>> >>> Kari from CIE offered to host it this >>> time, so see you there at >>> 13:00. 
>>> Erno
>>>
>>> _______________________________________________
>>> Fiware-miwi mailing list
>>> Fiware-miwi at lists.fi-ware.eu
>>> https://lists.fi-ware.eu/listinfo/fiware-miwi
>>
>> --
>> -------------------------------------------------------------------------
>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
>> Trippstadter Strasse 122, D-67663 Kaiserslautern
>>
>> Geschäftsführung:
>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>> Dr. Walter Olthoff
>> Vorsitzender des Aufsichtsrats:
>> Prof. Dr. h.c. Hans A. Aukes
>>
>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
>> -------------------------------------------------------------------------

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From toni at playsign.net Fri Oct 25 09:56:17 2013
From: toni at playsign.net (Toni Alatalo)
Date: Fri, 25 Oct 2013 10:56:17 +0300
Subject: [Fiware-miwi] a canonical custom component example: Door (Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00))
In-Reply-To: 
References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> <52696BD5.8070007@dfki.de> <526A0FB1.9000709@dfki.de>
Message-ID: <69BDA86F-68AF-47DC-B715-D27F9267B54A@playsign.net>

On 25 Oct 2013, at 10:05, Tomi Sarni wrote:

> Currently when a client polls a device (containing sensors and/or actuators) it will receive all interaction options that are available for the particular sensor or actuator. These options can then be accessed via an HTTP POST method from the service. So there is the logical mapping. I can see your point though; in a way it would seem logical to have that XML3D model contain states (e.g. button-up and button-down 3D model states), and I have no idea whether this is supported by XML3D, as I have been busy on the server/sensor side. This way, when a sensor is accessed by an HTTP POST call to change state to either on or off for instance, the XML3D model could contain transition logic to change appearance from one state to another. Alternatively there can be two models for two states. When the actuator is queried it will return the model that corresponds to its current state.

Having arbitrary custom state is exactly what the entity system in realXtend is for, and applying the mapping between the reX EC model & xml3d we have now, it would be how we use the xml3d format as well. The 'x' in XML is for 'extensible'
(and we can nowadays read the X in reX the same way, though it originally refers to extending reality, not extensible virtual worlds :)

The first script + custom component test & demo I made with Naali/Tundra was a Door. It was implemented by defining a Door component like this:

Door:
  bool: opened
  bool: locked

The functionality was implemented with a script that listened for clicks/touches on the object. If the door was closed but not locked, it opened upon touch. If it was already open, touching always closed it. If it was closed and locked, it did not do anything. It could be locked/unlocked with a GUI button. When hovering with the mouse cursor over the door, the cursor type depended on the state: a different icon was used based on whether it was closed-unlocked, closed-locked or opened (to communicate the action that would happen in advance to the user).

The demo scene + code for that is in the tundra 1 branch, https://github.com/realXtend/naali/blob/tundra/bin/scenes/Door/door.coffee .. the port to tundra2 was not completed (yet?) so it's not in current -- would still be a nice demo of using custom data + scripting I think, feel free to port it anyone! The model is a nicely modelled & rigged accordion door (haitariovi) which even animates. Next planned step was to allow use of the mouse wheel to slide (animate) the door partially, to test streaming sync and not just boolean toggles; I think I tested that a little back then too.

So in the human-friendly xml format (i.e. xml3d) that example would look like this:

Whereas the same as TXML, the relevant parts copy-pasted from https://github.com/realXtend/naali/blob/tundra/bin/scenes/Door/door.txml is:

(note that this is tundra1 txml so a bit different from the current 2.x series!) You can see how it uses DynamicComponent -- that's due to the weakness of the JS API in Tundra currently, which was discussed in an earlier thread here on MIWI: ideally it would say (EC_)Door in that TXML.
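The Door behavior described above can be sketched in plain JavaScript. This is not the actual Tundra or XML3D API -- `registerComponent` and `createEntity` are hypothetical names standing in for whatever the component registry provides -- but the component schema and touch logic follow the mail's description:

```javascript
// Sketch of the Door example above, assuming a minimal entity-component
// API. registerComponent/createEntity are hypothetical names, not the
// real Tundra or XML3D API.

const componentTypes = {};

// Declare a custom component type: attribute name -> default value.
function registerComponent(name, schema) {
  componentTypes[name] = schema;
}

// Instantiate an entity with a fresh copy of the component's defaults.
function createEntity(componentName) {
  return { [componentName]: { ...componentTypes[componentName] } };
}

// The Door component from the mail: two booleans.
registerComponent("Door", { opened: false, locked: false });

// Handler script implementing the behavior listed in the mail.
function onDoorTouched(entity) {
  const door = entity.Door;
  if (door.opened) {
    door.opened = false;      // open door: touch always closes it
  } else if (!door.locked) {
    door.opened = true;       // closed and unlocked: touch opens it
  }                           // closed and locked: do nothing
}

// Cursor-icon choice while hovering, mirroring the three states.
function doorCursor(entity) {
  const door = entity.Door;
  if (door.opened) return "close-icon";
  return door.locked ? "locked-icon" : "open-icon";
}
```

A handler like `onDoorTouched` would be wired to click/touch events by the component-type registry the next paragraph talks about; here it is just a plain function so the state machine is easy to test.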
In the xml3d example it is assumed that we have a way to register custom handlers for component types without separate script-references. That's how I originally implemented the system in Tundra 0.x times too, and how the original door implementation worked (the component registry and sandboxed JS API exposing was first implemented in Python, which was made optional later, and the later EC_Script mechanism made the component type handler registry quite redundant - it was nicer though to only need to add one component to make an object a Door, for example .. currently I think it goes ok so that the handler script creates the data component it needs, so that users don't need to add two manually). In current Tundra you'd get the EC_Door working just like that in C++ by implementing the IComponent interface.

~Toni

> On Fri, Oct 25, 2013 at 9:29 AM, Philipp Slusallek wrote:
>
> Hi Tomi,
>
> Yes, this is definitely an interesting option and when sensors offer REST-ful interfaces, it should be almost trivial to add (once a suitable and standardized way to find that data is specified). At least it would provide a kind of default visualization in case no other is available.
>
> It becomes more of an issue when we talk about interactivity, when the visual representation needs to react to user input in a way that is consistent with the application and calls functionality in the application. In other words, you have to do a mapping from the sensor to the application at some point along the pipeline (and back for actions to be performed by an actuator).
>
> Either we specify the sensor type through some semantic means (a simple tag in the simplest case, a full RDF/a graph in the best case) and let the application choose how to represent it, or we need to find a way to map generic behavior of a default object to application functionality. The first seems much easier to me, as application functionality is likely to vary much more than sensor functionality.
> And semantic sensor descriptions have been worked on for a long time and are available on the market.
>
> Of course, there are hybrid methods as well: a simple one would be to include a URI/URL to a default model in the semantic sensor description that then gets loaded either from the sensor through REST (given some namespace there) or via the Web (again using some namespace or search strategy). Then the application can always inject its own mapping to what it thinks is the best mapping.
>
> Best,
>
> Philipp
>
> Am 25.10.2013 07:52, schrieb Tomi Sarni:
>
> *Following is completely on a theoretical level:*
>
> To mix things a little further, I've been thinking about a possibility to store the visual representation of sensors within the sensors themselves. Many sensor types allow HTTP POST/GET or even PUT/DELETE methods (wrapped in SNMP/CoAP communication protocols, for instance) which in theory would allow sensor subscribers to also publish information in sensors (e.g. upload an xml3d model). This approach could be useful in cases where these sensors would have different purposes of use. But the sensor may have very little space to use for the model, on the order of 8-18 KB. Also, the web service can attach the models to IDs through use of a database. This is really just a pointer; perhaps there would be use-cases where the sensor visualization could be stored within the sensor itself -- I think specifically some AR solutions could benefit from this. But do not let this mix things up; this perhaps reinforces the fact that there need to be overlaying middleware services that attach a visual representation based on their own needs. One service could use a different 3D representation for a temperature sensor than another one.
>
> On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek wrote:
>
> Hi,
>
> OK, now I get it.
> This does make sense -- at least in a local scenario, where the POI data (in this example) needs to be stored somewhere anyway, and storing it in a component and then generating the appropriate visual component does make sense. Using web components or a similar mechanism we could actually do the same via the DOM (as discussed for the general ECA sync before).
>
> But even then you might actually not want to store all the POI data but only the parts that really matter to the application (there may be much more data -- maybe not for POIs but potentially for other things).
>
> Also in a distributed scenario, I am not so sure. In that case you might want to do that mapping on the server and only sync the resulting data, maybe with a reference back so you can still interact with the original data through a service call. That is the main reason why I in general think of POI data and POI representation as separate entities.
>
> Regarding terminology, I think it does make sense to differentiate between the 3D scene and the application state (that is not directly influencing the 3D rendering and interaction). While you store them within the same data entity (but in different components), they still refer to quite different things and are operated on by different parts of your program (e.g. the renderer only ever touches the "scene" data). We do the same within the XML3D core, where we attach renderer-specific data to DOM nodes, and I believe three.js also does something similar within its data structures. At the end, you have to store these things somewhere and there are only so many ways to implement it. The differences are not really that big.
>
> Best,
>
> Philipp
>
> Am 24.10.2013 19:24, schrieb Toni Alatalo:
>
> On 24 Oct 2013, at 19:24, Philipp Slusallek wrote:
>
> Good discussion!
>
> I find so too -- thanks for the questions and comments and all!
> Now briefly about just one point:
>
> Am 24.10.2013 17:37, schrieb Toni Alatalo:
>
> integrates to the scene system too - for example if a scene server queries POI services, does it then only use the data to manipulate the scene using other non-POI components, or does it often make sense also to include POI components in the scene so that the clients get it too automatically with the scene sync and can for example provide POI-specific GUI tools. Ofc clients can query POI services directly too, but this server-centric setup is also one scenario and there the scene integration might make sense.
>
> But I would say that there is a clear distinction between the POI data (which you query from some service) and the visualization or representation of the POI data. Maybe you are more talking about the latter here. However, there really is an application-dependent mapping from the POI data to its representation. Each application may choose to present the same POI data in a very different way, and it's only this resulting representation that becomes part of the scene.
>
> No, I was not talking about visualization or representations here but the POI data.
>
> non-POI in the above tried to refer to the whole which covers visualisations etc :)
>
> Your last sentence may help to understand the confusion: in these posts I've been using the reX entity system terminology only -- hoping that it is clear to discuss that way and not mix terms (like I've tried to do in some other threads).
>
> There "scene" does not refer to a visual / graphical or any other type of scene. It does not refer to e.g. something like what xml3d.js and three.js, or ogre, have as their Scene objects.
>
> It simply means the collection of all entities. There it is perfectly valid to have any kind of data which does not end up in e.g. the visual scene -- many components are like that.
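The point above -- the reX "scene" is just the set of all entities, and components that never reach the visual scene are synchronized exactly like visual ones -- can be sketched as follows. The `poi` component and all function names here are hypothetical illustrations, not the actual realXtend API:

```javascript
// Minimal sketch: a "scene" as the collection of all entities, where
// non-visual components (a hypothetical "poi" component) live alongside
// visual ones and are synchronized the same way.

const scene = new Map(); // entityId -> { componentName: attributes }

function addEntity(id, components) {
  scene.set(id, components);
}

// The sync layer serializes every component, visual or not.
function serializeScene() {
  return JSON.stringify([...scene.entries()]);
}

// The renderer, by contrast, only ever touches "mesh" components.
function renderableEntities() {
  return [...scene.entries()].filter(([, comps]) => "mesh" in comps);
}

addEntity("restaurant-42", {
  mesh: { ref: "marker.xml3d" },                   // visual representation
  poi:  { name: "Cafe X", lat: 65.01, lon: 25.47 } // pure application state
});
addEntity("score-keeper", {
  poi: { name: "hidden bookkeeping entity" }       // no visual part at all
});
```

A GUI widget editing `poi` attributes would then get its changes propagated to other clients by the same sync mechanism that moves meshes around, which is the collaborative-setting scenario described above.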
> So in the above, "only use the data to manipulate the scene using other non-POI components" was referring to for example the creation of Mesh components if some POI is to be visualised that way. The mapping that you were discussing.
>
> But my point was not about that but about the POI data itself -- and the example about some end-user GUI with a widget that manipulates it. So it then gets automatically synchronised along with all the other data in the application in a collaborative setting etc.
>
> Stepping out of the previous terminology, we could perhaps translate: "scene" -> "application state" and "scene server" -> "synchronization server".
>
> I hope this clarifies something -- my apologies if not..
>
> Cheers,
> ~Toni
>
> P.S. I sent the previous post from a foreign device and accidentally with my gmail address as sender so it didn't make it to the list -- so thank you for quoting it in full, so I don't think we need to repost that :)
>
> This is essentially the Mapping stage of the well-known Visualization pipeline (http://www.infovis-wiki.net/index.php/Visualization_Pipeline), except that here we also map interaction aspects to an abstract scene description (XML3D) first, which then performs the rendering and interaction. So you can think of this as an additional "Scene" stage between "Mapping" and "Rendering".
>
> I think this is a different topic, but also with real-virtual interaction for example how to facilitate nice simple authoring of e.g. the real-virtual object mappings seems a fruitful enough angle to think a bit, perhaps as a case to help in understanding the entity system & the different servers etc. For example if there's a component type 'real world link', the Interface Designer GUI shows it automatically in the list of components, ppl can just add them to their scenes and somehow then the system just works..
>
> I am not sure what you are getting at.
> But it would be great if the Interface Designer would allow choosing such POI mappings from a predefined catalog. It seems that Xflow can be used nicely for generating the mapped scene elements from some input data, e.g. using the same approach we use to provide basic primitives like cubes or spheres in XML3D. Here they are not fixed, built-in tags as in X3D but can actually be added by the developer as best fits.
>
> For generating more complex subgraphs we may have to extend the current Xflow implementation. But it's at least a great starting point to experiment with it. Experiments and feedback would be very welcome here.
>
> I don't think these discussions are now hurt by us (currently) having alternative renderers - the entity system, formats, sync and the overall architecture is the same anyway.
>
> Well, some things only work in one and others only in the other branch. So the above mechanism could not be used to visualize POIs in the three.js branch, but we do not have all the features to visualize Oulu (or whatever city) in the XML3D.js branch. This definitely IS greatly limiting how we can combine the GEs into more complex applications -- the ultimate goal of the orthogonal design of this chapter.
>
> And it does not even work within the same chapter. It will be hard to explain to Juanjo and others from FI-WARE (or the commission for that matter).
>
> BTW, I just learned today that there is a FI-WARE smaller review coming up soon. Let's see if we already have to present things there. So far they have not explicitly asked us.
>
> Best,
>
> Philipp
>
> -Toni
>
> From an XML3D POV things could actually be quite "easy". It should be rather simple to directly interface to the IoT GEs of FI-WARE through REST via a new Xflow element. This would then make the data
Then you can use > all the features > of Xflow to manipulate the scene based on the data. > For example, we > are discussing building a set of visualization nodes > that implement > common visualization metaphors, such as scatter > plots, animations, > you name it. A new member of the lab starting soon > wants to look > into this area. > > For acting on objects we have always used Web > services attached to > the XML3D objects via DOM events. Eventually, I > believe we want a > higher level input handling and processing framework > but no one > knows so far, how this should look like (we have > some ideas but they > are not well baked, any inpu is highly welcome > here). This might or > might not reuse some of the Xflow mechanisms. > > But how to implement RealVirtual Interaction is > indeed an intersting > discussion. Getting us all on the same page and > sharing ideas and > implementations is very helpful. Doing this on the > same SW platform > (without the fork that we currently have) would > facilitate a > powerful implementation even more. > > > Thanks > > Philipp > > Am 23.10.2013 08:02, schrieb Tomi Sarni: > > ->Philipp > /I did not get the idea why POIs are similar to > ECA. At a very high > level I see it, but I am not sure what it buys > us. Can someone sketch > that picture in some more detail?/ > > Well I suppose it becomes relevant at point when > we are combining our > GEs together. If the model can be applied in > level of scene then > down to > POI in a scene and further down in sensor level, > things can be > more easily visualized. Not just in terms of > painting 3D models but in > terms of handling big data as well, more > specifically handling > relationships/inheritance. It also makes it easier > to design a RESTful API as we have a common > structure which to follow > and also provides more opportunities for 3rd > party developers to make > use of the data for their own purposes. 
> > For instance > > ->Toni > > From point of sensors, the entity-component becomes > device-sensors/actuators. A device may have an > unique identifier and IP > by which to access it, but it may also contain > several actuators and > sensors > that are components of that device entity. > Sensors/actuators > themselves > are not aware to whom they are interesting to. > One client may use the > sensor information differently to other client. > Sensor/actuator service > allows any other service to query using > request/response method either > by geo-coordinates (circle,square or complex > shape queries) or perhaps > through type+maxresults and service will return > entities and their > components > from which the reqester can form logical > groups(array of entity uuids) > and query more detailed information based on > that logical group. > > I guess there needs to be similar thinking done > on POI level. I guess > POI does not know which scene it belongs to. It > is up to scene > server to > form a logical group of POIs (e.g. restaurants > of oulu 3d city > model). Then > again the problem is that scene needs to wait > for POI to query for > sensors and form its logical groups before it > can pass information to > scene. This can lead to long wait times. But > this sequencing problem is > also something > that could be thought. Anyways this is a common > problem with everything > in web at the moment in my opinnion. Services > become intertwined. > When a > client loads a web page there can be queries to > 20 different services > for advertisment and other stuff. Web page > handles it by painting stuff > to the client on receive basis. I think this > could be applied in Scene > as well. > > > > > > On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek > > > > > >> wrote: > > Hi, > > First of all, its certainly a good thing to > also meet locally. I was > just a bit confused whether that meeting > somehow would involve us as > well. 
> Summarizing the results briefly for the others would definitely be interesting.
>
> I did not get the idea why POIs are similar to ECA. At a very high level I see it, but I am not sure what it buys us. Can someone sketch that picture in some more detail?
>
> BTW, what is the status with the Rendering discussion (Three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided.
>
> BTW, as part of our shading work (which is shaping up nicely) Felix has lately been looking at a way to describe rendering stages (passes) essentially through Xflow. It is still very experimental but he is using it to implement shadow maps right now.
>
> @Felix: Once this has converged into a bit more stable idea, it would be good to post it here to get feedback. The way we discussed it, this approach could form a nice basis for a modular design of advanced rasterization techniques (reflection maps, adv. face rendering, SSAO, lens flare, tone mapping, etc.), and (later) maybe also describe global illumination settings (similar to our work on LightingNetworks some years ago).
>
> Best,
>
> Philipp
>
> Am 22.10.2013 23:03, schrieb toni at playsign.net:
>> [...]
>
> --
> -------------------------------------------------------------------------
> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
> Trippstadter Strasse 122, D-67663 Kaiserslautern
>
> Geschäftsführung:
> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
> Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats:
> Prof. Dr. h.c. Hans A. Aukes
>
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
> -------------------------------------------------------------------------

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Philipp.Slusallek at dfki.de Fri Oct 25 10:28:52 2013
From: Philipp.Slusallek at dfki.de (Philipp Slusallek)
Date: Fri, 25 Oct 2013 10:28:52 +0200
Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00)
In-Reply-To: 
References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> <52696BD5.8070007@dfki.de> <526A0FB1.9000709@dfki.de>
Message-ID: <526A2BC4.7080005@dfki.de>

Hi,

With interaction I mean the user interaction. Yes, it eventually gets mapped to REST (or such) calls to the device. But how you map the device functionality to user interaction is a big step where different applications will have very different assumptions and interaction metaphors. Mapping them all to a generic sensor model seems very difficult. Using a semantic annotation avoids having to create such a mapping when you design the sensor, avoids having to store the model on each sensor, and pushes the mapping to the software/application side, which is (in my opinion) in a much better position to decide on that mapping. A fallback mapping may still be provided by the sensor for the most basic cases.

Best,

Philipp

Am 25.10.2013 09:05, schrieb Tomi Sarni:
> /It becomes more of an issue when we talk about interactivity, when the visual representation needs to react to user input in a way that is consistent with the
> In other words, you have to do a mapping from the sensor to
> the application at some point along the pipeline (and back for actions
> to be performed by an actuator)./
>
> Currently, when a client polls a device (containing sensors and/or
> actuators) it will receive all interaction options that are available for
> the particular sensor or actuator. These options can then be accessed
> via an HTTP POST method from the service. So there is the logical mapping.
> I can see your point though: in a way it would seem logical to have the
> XML3D model contain states (e.g. button-up and button-down 3D model
> states), and I have no idea whether this is supported by XML3D, as I
> have been busy on the server/sensor side. This way, when a sensor is
> accessed by an HTTP POST call to change state to either on or off, for
> instance, the XML3D model could contain transition logic to change
> appearance from one state to another. Alternatively there can be two
> models for two states. When the actuator is queried it will return the
> model that corresponds to its current state.
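The poll-then-POST flow described above can be sketched in miniature (an illustrative Python sketch with made-up structures and names, not the actual device or service API):

```python
# Sketch of the flow described above: a device advertises its interaction
# options, a POST-style call changes the actuator state, and the device
# reports the model matching its current state. All names are made up.

device = {
    "id": "lamp-01",
    "state": "off",
    "options": ["on", "off"],                      # what a poll returns
    "models": {"on": "lamp_on.xml", "off": "lamp_off.xml"},
}

def poll(dev):
    """What a client gets back when polling the device."""
    return {"options": dev["options"], "state": dev["state"]}

def post_state(dev, new_state):
    """Stand-in for the HTTP POST that flips the actuator state."""
    if new_state not in dev["options"]:
        raise ValueError("unsupported state: " + new_state)
    dev["state"] = new_state
    return dev["models"][new_state]                # model for current state

print(poll(device)["options"])   # -> ['on', 'off']
print(post_state(device, "on"))  # -> lamp_on.xml
```

The "two models for two states" alternative is what `models` stands in for here: the query result simply depends on the stored state.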
>
> Either we specify the sensor type through some semantic means (a
> simple tag in the simplest case, a full RDF/a graph in the best
> case) and let the application choose how to represent it, or we need
> to find a way to map generic behavior of a default object to
> application functionality. The first seems much easier to me, as
> application functionality is likely to vary much more than sensor
> functionality. And semantic sensor descriptions have been worked on
> for a long time and are available on the market.
>
> Of course, there are hybrid methods as well: a simple one would be
> to include a URI/URL to a default model in the semantic sensor
> description that then gets loaded either from the sensor through
> REST (given some namespace there) or via the Web (again using some
> namespace or search strategy). Then the application can always
> inject its own mapping to what it thinks is the best mapping.
>
> Best,
>
> Philipp
>
> Am 25.10.2013 07:52, schrieb Tomi Sarni:
>
> *The following is completely on a theoretical level:*
>
> To mix things a little further, I've been thinking about a possibility to
> store the visual representation of sensors within the sensors themselves.
> Many sensor types allow HTTP POST/GET or even PUT/DELETE methods
> (wrapped in SNMP/CoAP communication protocols, for instance), which in
> theory would allow sensor subscribers to also publish information in
> sensors (e.g. upload an XML3D model). This approach could be useful in
> cases where these sensors would have different purposes of use. But the
> sensor may have very little space to use for the model, from 8-18 KB up.
> Also, the web service can attach the models to IDs through use of a
> database. This is really just a pointer; perhaps there would be use-cases
> where the sensor visualization could be stored within the sensor itself.
> I think specifically some AR solutions could benefit from this.
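Philipp's "semantic tag plus sensor-provided fallback" idea could look roughly like this on the application side (a hedged Python sketch; the tag names and record fields are invented for illustration, not part of any real sensor description standard):

```python
# Application-side mapping from a semantic sensor tag to a representation,
# with the sensor-advertised default model as fallback. All names invented.

APP_MAPPING = {
    "temperature": "models/thermometer.xml",
    "light-switch": "models/switch.xml",
}

def representation_for(sensor):
    """Prefer the application's own mapping for the sensor's semantic tag;
    otherwise fall back to the default model the sensor advertises."""
    tag = sensor.get("semantic_tag")
    if tag in APP_MAPPING:
        return APP_MAPPING[tag]
    return sensor.get("default_model")  # may be None

sensor = {"id": "sensor-17", "semantic_tag": "temperature",
          "default_model": "models/generic-box.xml"}
print(representation_for(sensor))  # -> models/thermometer.xml
```

This keeps the mapping out of the sensor entirely, as argued above: the sensor only carries a tag and (optionally) a fallback model URL.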
> But do
> not let this mix things up; this perhaps reinforces the fact that there
> need to be overlaying middleware services that attach a visual
> representation based on their own needs. One service could use a
> different 3D representation for a temperature sensor than another one.
>
> On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek wrote:
>
> Hi,
>
> OK, now I get it. This does make sense -- at least in a local
> scenario, where the POI data (in this example) needs to be stored
> somewhere anyway, and storing it in a component and then generating
> the appropriate visual component does make sense. Using web
> components or a similar mechanism we could actually do the same via
> the DOM (as discussed for the general ECA sync before).
>
> But even then you might actually not want to store all the POI data
> but only the part that really matters to the application (there may
> be much more data -- maybe not for POIs but potentially for other
> things).
>
> Also in a distributed scenario, I am not so sure. In that case you
> might want to do that mapping on the server and only sync the
> resulting data, maybe with a reference back so you can still interact
> with the original data through a service call. That is the main
> reason why I in general think of POI data and POI representation as
> separate entities.
>
> Regarding terminology, I think it does make sense to differentiate
> between the 3D scene and the application state (that is not directly
> influencing the 3D rendering and interaction). While you store them
> within the same data entity (but in different components), they
> still refer to quite different things and are operated on by
> different parts of your program (e.g. the renderer only ever touches
> the "scene" data). We do the same within the XML3D core, where we
> attach renderer-specific data to DOM nodes, and I believe three.js
> also does something similar within its data structures.
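The separation Philipp describes, one entity carrying both renderable "scene" components and application-state components, can be sketched like this (illustrative Python; the component names are made up and this is not the reX or XML3D API):

```python
# One entity holds scene data and application state in different
# components; the renderer is only ever handed the scene components.

entity = {
    "id": "poi-42",
    "components": {
        "mesh": {"src": "restaurant.xml"},           # scene data
        "transform": {"pos": (65.0, 25.5, 0.0)},     # scene data
        "poi": {"name": "Restaurant", "rating": 4},  # app state, renderer ignores
    },
}

SCENE_COMPONENTS = {"mesh", "transform"}

def renderer_view(e):
    """The subset of components the renderer is allowed to touch."""
    return {name: data for name, data in e["components"].items()
            if name in SCENE_COMPONENTS}

print(sorted(renderer_view(entity)))  # -> ['mesh', 'transform']
```

Both kinds of data live in the same entity and sync together, but only the whitelisted components ever reach the rendering side.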
At
> the end, you have to store these things somewhere and there are only
> so many ways to implement it. The differences are not really that big.
>
> Best,
>
> Philipp
>
> Am 24.10.2013 19:24, schrieb Toni Alatalo:
>
> On 24 Oct 2013, at 19:24, Philipp Slusallek wrote:
>
> Good discussion!
>
> I find so too -- thanks for the questions and comments and all! Now
> briefly about just one point:
>
> Am 24.10.2013 17:37, schrieb Toni Alatalo:
>
> integrates to the scene system too - for example if a scene server
> queries POI services, does it then only use the data to manipulate
> the scene using other non-POI components, or does it often make sense
> also to include POI components in the scene so that the clients get
> it too automatically with the scene sync and can for example provide
> POI-specific GUI tools. Ofc clients can query POI services directly
> too, but this server-centric setup is also one scenario, and there the
> scene integration might make sense.
>
> But I would say that there is a clear distinction between the POI data
> (which you query from some service) and the visualization or
> representation of the POI data. Maybe you are more talking about the
> latter here. However, there really is an application-dependent mapping
> from the POI data to its representation. Each application may choose
> to present the same POI data in a very different way, and it's only this
> resulting representation that becomes part of the scene.
>
> No, I was not talking about visualization or representations here but the
> POI data.
>
> non-POI in the above tried to refer to the whole which covers
> visualisations etc :)
>
> Your last sentence may help to understand the confusion: in these posts
> I've been using the reX entity system terminology only -- hoping that it
> is clear to discuss that way and not mix terms (like I've tried to do in
> some other threads).
>
> There, 'scene' does not refer to a visual / graphical or any other type
> of scene. It does not refer to e.g. something like what xml3d.js and
> three.js, or Ogre, have as their Scene objects.
>
> It simply means the collection of all entities. There it is perfectly
> valid to have any kind of data which does not end up in e.g. the
> visual scene -- many components are like that.
>
> So in the above, 'only use the data to manipulate the scene using other
> non-POI components' was referring to for example the creation of Mesh
> components if some POI is to be visualised that way. The mapping that
> you were discussing.
>
> But my point was not about that but about the POI data itself -- and the
> example about some end-user GUI with a widget that manipulates it. So it
> then gets automatically synchronised along with all the other data in
> the application in a collaborative setting etc.
>
> Stepping out of the previous terminology, we could perhaps translate:
> 'scene' -> 'application state' and 'scene server' -> 'synchronization
> server'.
>
> I hope this clarifies something -- my apologies if not..
>
> Cheers,
> ~Toni
>
> P.S. I sent the previous post from a foreign device and accidentally
> with my gmail address as sender, so it didn't make it to the list -- so
> thank you for quoting it in full so I don't think we need to
> repost that :)
>
> This is essentially the Mapping stage of the well-known Visualization
> pipeline
> (http://www.infovis-wiki.net/index.php/Visualization_Pipeline),
> except
> that here we also map interaction aspects to an abstract scene
> description (XML3D) first, which then performs the rendering and
> interaction. So you can think of this as an additional "Scene" stage
> between "Mapping" and "Rendering".
>
> I think this is a different topic, but also with real-virtual
> interaction, for example how to facilitate nice simple authoring of
> the e.g.
real-virtual object mappings seems a
> fruitful enough angle to think about a bit, perhaps as a case to help in
> understanding the entity system & the different servers etc. For example,
> if there's a component type 'real world link', the Interface Designer
> GUI shows it automatically in the list of components, ppl can just
> add them to their scenes, and somehow then the system just works..
>
> I am not sure what you are getting at. But it would be great if the
> Interface Designer would allow choosing such POI mappings from a
> predefined catalog. It seems that Xflow can be used nicely for
> generating the mapped scene elements from some input data, e.g. using
> the same approach we use to provide basic primitives like cubes or
> spheres in XML3D. Here they are not fixed, built-in tags as in X3D but
> can actually be added by the developer as it best fits.
>
> For generating more complex subgraphs we may have to extend the
> current Xflow implementation. But it's at least a great starting point
> to experiment with. Experiments and feedback would be very welcome
> here.
>
> I don't think these discussions are now hurt by us (currently) having
> alternative renderers - the entity system, formats, sync and the
> overall architecture is the same anyway.
>
> Well, some things only work in one and others only in the other
> branch. So the above mechanism could not be used to visualize POIs in
> the three.js branch, but we do not have all the features to visualize
> Oulu (or whatever city) in the XML3D.js branch. This definitely IS
> greatly limiting how we can combine the GEs into more complex
> applications -- the ultimate goal of the orthogonal design of this
> chapter.
>
> And it does not even work within the same chapter. It will be hard to
> explain to Juanjo and others from FI-WARE (or the commission for that
> matter).
>
> BTW, I just learned today that there is a smaller FI-WARE review
> coming up soon. Let's see if we already have to present things there.
> So far they have not explicitly asked us.
>
> Best,
>
> Philipp
>
> -Toni
>
> From an XML3D POV things could actually be quite "easy". It should
> be rather simple to directly interface to the IoT GEs of FI-WARE
> through REST via a new Xflow element. This would then make the data
> available through elements. Then you can use all the features
> of Xflow to manipulate the scene based on the data. For example, we
> are discussing building a set of visualization nodes that implement
> common visualization metaphors, such as scatter plots, animations,
> you name it. A new member of the lab starting soon wants to look
> into this area.
>
> For acting on objects we have always used Web services attached to
> the XML3D objects via DOM events. Eventually, I believe we want a
> higher-level input handling and processing framework, but no one
> knows so far what this should look like (we have some ideas but they
> are not well baked; any input is highly welcome here). This might or
> might not reuse some of the Xflow mechanisms.
>
> But how to implement Real-Virtual Interaction is indeed an interesting
> discussion. Getting us all on the same page and sharing ideas and
> implementations is very helpful. Doing this on the same SW platform
> (without the fork that we currently have) would facilitate a
> powerful implementation even more.
>
> Thanks,
>
> Philipp
>
> Am 23.10.2013 08:02, schrieb Tomi Sarni:
>
> ->Philipp
> /I did not get the idea why POIs are similar to ECA. At a very high
> level I see it, but I am not sure what it buys us. Can someone sketch
> that picture in some more detail?/
>
> Well, I suppose it becomes relevant at the point when we are combining our
> GEs together.
If the model can be
> applied at the level of the scene, then down to a POI in a scene, and
> further down at the sensor level, things can be more easily visualized.
> Not just in terms of painting 3D models but in terms of handling big
> data as well, more specifically handling relationships/inheritance. It
> also makes it easier to design a RESTful API, as we have a common
> structure to follow, and it provides more opportunities for 3rd-party
> developers to make use of the data for their own purposes.
>
> For instance
>
> ->Toni
>
> From the point of view of sensors, the entity-component becomes
> device-sensors/actuators. A device may have a unique identifier and IP
> by which to access it, but it may also contain several actuators and
> sensors that are components of that device entity. Sensors/actuators
> themselves are not aware of whom they are interesting to. One client
> may use the sensor information differently from another client. The
> sensor/actuator service allows any other service to query using a
> request/response method, either by geo-coordinates (circle, square or
> complex shape queries) or perhaps through type+maxresults, and the
> service will return entities and their components, from which the
> requester can form logical groups (arrays of entity uuids) and query
> more detailed information based on that logical group.
>
> I guess there needs to be similar thinking done on the POI level. I
> guess a POI does not know which scene it belongs to. It is up to the
> scene server to form a logical group of POIs (e.g. restaurants of the
> Oulu 3D city model). Then again, the problem is that the scene needs
> to wait for the POI to query for sensors and form its logical groups
> before it can pass information to the scene. This can lead to long
> wait times. But this sequencing problem is also something that could
> be thought about. Anyway, this is a common problem with everything on
> the web at the moment, in my opinion.
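The circle geo-query and "logical group" idea above can be sketched as follows (an illustrative Python sketch with invented names, not the actual sensor service API):

```python
# A service filters device entities by distance from a point; the requester
# then keeps a logical group of matching entity uuids. Names are made up.
import math

devices = [
    {"uuid": "a1", "pos": (0.0, 0.0)},
    {"uuid": "b2", "pos": (3.0, 4.0)},
    {"uuid": "c3", "pos": (10.0, 10.0)},
]

def circle_query(devs, center, radius):
    """Return the devices within `radius` of `center` (a circle query)."""
    cx, cy = center
    return [d for d in devs
            if math.hypot(d["pos"][0] - cx, d["pos"][1] - cy) <= radius]

# The requester forms a "logical group" (array of entity uuids) from the result:
group = [d["uuid"] for d in circle_query(devices, (0.0, 0.0), 5.0)]
print(group)  # -> ['a1', 'b2']
```

Square or complex-shape queries would swap in a different containment test; the group of uuids is what later, more detailed queries would be issued against.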
> Services
> become intertwined. When a client loads a web page there can be queries
> to 20 different services for advertisement and other stuff. The web page
> handles it by painting stuff to the client on a receive basis. I think
> this could be applied in the Scene as well.
>
> On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek wrote:
>
> Hi,
>
> First of all, it's certainly a good thing to also meet locally. I was
> just a bit confused whether that meeting somehow would involve us as
> well. Summarizing the results briefly for the others would
> definitely be interesting.
>
> I did not get the idea why POIs are similar to ECA. At a very high
> level I see it, but I am not sure what it buys us. Can someone
> sketch that picture in some more detail?
>
> BTW, what is the status with the Rendering discussion (Three.js vs.
> xml3d.js)? I still have the feeling that we are doing parallel work
> here that should probably be avoided.
>
> BTW, as part of our shading work (which is shaping up nicely) Felix
> has been looking lately at a way to describe rendering stages
> (passes) essentially through Xflow. It is still very experimental,
> but he is using it to implement shadow maps right now.
>
> @Felix: Once this has converged into a bit more stable idea, it
> would be good to post it here to get feedback. The way we
> discussed it, this approach could form a nice basis for a modular
> design of advanced rasterization techniques (reflection maps, adv.
> face rendering, SSAO, lens flare, tone mapping, etc.), and (later)
> maybe also describe global illumination settings (similar to our
> work on LightingNetworks some years ago).
>
> Best,
>
> Philipp
>
> Am 22.10.2013 23:03, schrieb toni at playsign.net:
>
> Just a brief note: we had some interesting preliminary discussion
> triggered by how the data schema that Ari O. presented for the POI
> system seemed at least partly similar to what the Real-Virtual
> interaction work had resulted in too -- and in fact about how the
> proposed POI schema was basically a version of the entity-component
> model which we've already been using for scenes in realXtend (it is
> inspired by / modeled after it, Ari told). So it can be much related to
> the Scene API work in the Synchronization GE too. As the action point we
> agreed that Ari will organize a specific work session on that.
> I was now thinking that it perhaps at least partly leads back to the
> question: how do we define (and implement) component types? I.e. what
> was mentioned in that entity-system post a few weeks back (with links
> to reX IComponent etc.). I mean: if functionality such as POIs and
> real-world interaction makes sense as somehow resulting in custom data
> component types, does it mean that a key part of the framework is a way
> for those systems to declare their types .. so that it integrates nicely
> for the whole we want? I'm not sure, too tired to think it through now,
> but anyhow just wanted to mention that this was one topic that came up.
> I think Web Components is again something to check - as in XML terms reX
> Components are xml(3d) elements .. just ones that are usually in a group
> (according to the reX entity <-> xml3d group mapping). And Web
> Components are about defining & implementing new elements (as Erno
> pointed out in a different discussion about xml-html authoring in the
> session).
> BTW Thanks Kristian for the great comments in that entity-system
> thread - it was really good to learn about the alternative attribute
> access syntax and the validation in XML3D(.js).
> ~Toni
>
> P.S. for (Christof &) the DFKI folks: I'm sure you understand the
> rationale of these Oulu meets -- the idea is ofc not to exclude you
> from the talks, but it just makes sense for us to meet live too as we
> are in the same city after all etc -- naturally with the DFKI team you
> also talk there locally. Perhaps it is a good idea that we make notes
> so that we can post e.g. here then (I'm not volunteering though! :) ).
> Also, the now agreed bi-weekly setup on Tuesdays luckily works so that
> we can then summarize fresh in the global Wed meetings and continue
> the talks etc.
>
> *From:* Erno Kuusela
> *Sent:* Tuesday, October 22, 2013 9:57 AM
> *To:* Fiware-miwi
>
> Kari from CIE offered to host it this time, so see you there at 13:00.
>
> Erno
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi
>
> --
> -------------------------------------------------------------------------
> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
> Trippstadter Strasse 122, D-67663 Kaiserslautern
>
> Geschäftsführung:
> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
> Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats:
> Prof. Dr. h.c. Hans A. Aukes
>
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
> ---------------------------------------------------------------------------

--
-------------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern

Geschäftsführung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes

Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
---------------------------------------------------------------------------
-------------- next part --------------
A non-text attachment was scrubbed...
Name: slusallek.vcf
Type: text/x-vcard
Size: 456 bytes
Desc: not available
URL: 

From Philipp.Slusallek at dfki.de Fri Oct 25 10:33:27 2013
From: Philipp.Slusallek at dfki.de (Philipp Slusallek)
Date: Fri, 25 Oct 2013 10:33:27 +0200
Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00)
In-Reply-To: 
References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> <52696BD5.8070007@dfki.de> <526A0FB1.9000709@dfki.de>
Message-ID: <526A2CD7.80301@dfki.de>

Hi,

Just let me add: XML3D allows for all the usual HTML5 events and
extends them with 3D equivalents (e.g. onclick/onmouseover/etc. for 3D
meshes). From those you can create more complex 3D UI widgets. We have
some simple ones that we use, but others certainly make sense as well.
There is nothing even close to being accepted as a standard 3D
interaction model with 3D UI widgets and corresponding UI metaphors.
Looking into this and providing some more and maybe better tools was
the idea of the 2D-UI objective of the call. I hope that we make some
progress in that direction. That would be a very interesting
discussion as well.

Best,

Philipp

Am 25.10.2013 09:05, schrieb Tomi Sarni:
> /It becomes more of an issue when we talk about interactivity, when the
> visual representation needs to react to user input in a way that is
> consistent with the application and calls functionality in the
> application. In other words, you have to do a mapping from the sensor to
> the application at some point along the pipeline (and back for actions
> to be performed by an actuator)./
>
> Currently, when a client polls a device (containing sensors and/or
> actuators) it will receive all interaction options that are available for
> the particular sensor or actuator. These options can then be accessed
> via an HTTP POST method from the service.
So there is the logical mapping. > I can see your point though, in a way it would seem logical to have that > XML3D model to contain states (e.g. button up and button down 3d model > states), and i have no idea whether this is supported by XML3D, as i > have been busy on server/sensor side. This way when a sensor is being > accesses by HTTP POST call to change state to either on or off for > instance, the XML3D model could contain transition logic to change > appearance from one state to another. Alternatively there can be two > models for two states. When the actuator is being queried it will return > model that corresponds to its current state. > > > > > > On Fri, Oct 25, 2013 at 9:29 AM, Philipp Slusallek > > wrote: > > Hi Tomi, > > Yes, this is definitely an interesting option and when sensors offer > REST-ful interfaces, it should be almost trivial to add (once a > suitable and standardized way of how to find that data is specified. > At least it would provide a kind of default visualization in case no > other is available. > > It becomes more of an issue when we talk about interactivity, when > the visual representation needs to react to user input in a way that > is consistent with the application and calls functionality in the > application. In other words, you have to do a mapping from the > sensor to the application at some point along the pipeline (and back > for actions to be performed by an actuator). > > Either we specify the sensor type through some semantic means (a > simple tag in the simplest case, a full RDF/a graph in the best > case) and let the application choose how to represent it or we need > to find a way to map generic behavior of a default object to > application functionality. The first seems much easier to me as > application functionality is likely to vary much more than sensor > functionality. And semantic sensor description have been worked on > for a long time and are available on the market. 
> > Of course, there are hybrid methods as well: A simple one would be > to include a URI/URL to a default model in the semantic sensor > description that then gets loaded either from the sensor through > REST (given some namespace there) or via the Web (again using some > namespace or search strategy). Then the application can always > inject its own mapping to what it thinks is the best mapping. > > > Best, > > Philipp > > Am 25.10.2013 07:52, schrieb Tomi Sarni: > > *Following is completely on theoretical level:* > > To mix things a little further i've been thinking about a > possibility to > store visual representation of sensors within the sensors > themselves. > Many sensor types allow HTTP POST/GET or even PUT/DELETE methods > (wrapped in SNMP/CoAP communication protocols for instance) which in > theory would allow sensor subscribers to also publish > information in > sensors (e.g. upload an xml3d model). This approach could be > useful in > cases where these sensors would have different purposes of use. > But the > sensor may have very little space to use for the model from up > 8-18 KB. > Also the web service can attach the models to IDs through use of > data > base. This is really just a pointer, perhaps there would be > use-cases > where the sensor visualization could be stored within the sensor > itself, > i think specifically some AR solutions could benefit from this. > But do > not let this mix up things, this perhaps reinforces the fact > that there > need to be overlaying middleware services that attach visual > representation based on their own needs. One service could use > different > 3d representation for temperature sensor than another one. > > > > > On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek > > >> wrote: > > Hi, > > OK, now I get it. 
This does make sense -- at least in a local > scenario, where the POI data (in this example) needs to be > stored > somewhere anyway and storing it in a component and then > generating > the appropriate visual component does make sense. Using web > components or a similar mechanism we could actually do the > same via > the DOM (as discussed for the general ECA sync before). > > But even then you might actually not want to store all the > POI data > but only the part that really matter to the application > (there may > be much more data -- maybe not for POIs but potentially for > other > things). > > Also in a distributed scenario, I am not so sure. In that > case you > might want to do that mapping on the server and only sync the > resulting data, maybe with reference back so you can still > interact > with the original data through a service call. That is the main > reason why I in general think of POI data and POI > representation as > separate entities. > > Regarding terminology, I think it does make sense to > differntiate > between the 3D scene and the application state (that is not > directly > influencing the 3D rendering and interaction). While you > store them > within the same data entity (but in different components), they > still refer to quite different things and are operated on by > different parts of you program (e.g. the renderer only ever > touches > the "scene" data). We do the same within the XML3D core, > where we > attach renderer-specific data to DOM nodes and I believe > three.js > also does something similar within its data structures. At > the end, > you have to store these things somewhere and there are only > so many > way to implement it. The differences are not really that big. > > > Best, > > Philipp > > Am 24.10.2013 19:24, schrieb Toni Alatalo: > > On 24 Oct 2013, at 19:24, Philipp Slusallek > > > > __df__ki.de > > >>> wrote: > > Good discussion! > > > I find so too ? thanks for the questions and comments > and all! 
>>>> Now briefly about just one point:
>>>>
>>>>> Am 24.10.2013 17:37, schrieb Toni Alatalo:
>>>>>
>>>>>> integrates to the scene system too - for example, if a scene server queries POI services, does it then only use the data to manipulate the scene using other non-POI components, or does it often make sense also to include POI components in the scene, so that the clients get it too automatically with the scene sync and can for example provide POI-specific GUI tools? Ofc clients can query POI services directly too, but this server-centric setup is also one scenario, and there the scene integration might make sense.
>>>>>
>>>>> But I would say that there is a clear distinction between the POI data (which you query from some service) and the visualization or representation of the POI data. Maybe you are more talking about the latter here. However, there really is an application-dependent mapping from the POI data to its representation. Each application may choose to present the same POI data in a very different way, and it is only this resulting representation that becomes part of the scene.
>>>>
>>>> No, I was not talking about visualization or representations here but the POI data.
>>>>
>>>> "non-POI" in the above tried to refer to the whole, which covers visualisations etc :)
>>>>
>>>> Your last sentence may help to understand the confusion: in these posts I've been using the reX entity system terminology only -- hoping that it is clear to discuss that way and not mix terms (like I've tried to do in some other threads).
>>>>
>>>> There, "scene" does not refer to a visual / graphical or any other type of scene. It does not refer to e.g. something like what xml3d.js and three.js, or Ogre, have as their Scene objects.
>>>>
>>>> It simply means the collection of all entities. There it is perfectly valid to have any kind of data which does not end up in e.g. the visual scene -- many components are like that.
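The reX-style model Toni describes -- a "scene" that is just the collection of all entities, with components that may carry purely non-visual data (such as POI attributes) next to visual ones -- might be sketched like this (a minimal illustration; all class and component names here are hypothetical, not the actual reX API):

```python
# Minimal sketch of an entity-component "scene": the scene is only the
# set of entities, and components are plain attribute bags. Non-visual
# data (a "POI" component) lives alongside visual data (a "Mesh" one).

class Entity:
    def __init__(self, entity_id):
        self.id = entity_id
        self.components = {}  # component type name -> attribute dict

    def add_component(self, type_name, **attributes):
        self.components[type_name] = attributes


class Scene:
    """The collection of all entities -- not a visual scene graph."""
    def __init__(self):
        self.entities = {}

    def create_entity(self, entity_id):
        entity = Entity(entity_id)
        self.entities[entity_id] = entity
        return entity


scene = Scene()
poi = scene.create_entity("poi-1")
# Non-visual component carrying the POI data itself:
poi.add_component("POI", name="Example Restaurant", lat=65.0121, lon=25.4651)
# Visual component that only a renderer would touch:
poi.add_component("Mesh", ref="http://example.org/models/restaurant.xml3d")
```

In this picture, a renderer reads only the Mesh component while a POI GUI widget reads and writes the POI component, yet both are synchronized by the same mechanism, which matches the "scene = application state" reading above.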
>>>> So in the above, "only use the data to manipulate the scene using other non-POI components" was referring to, for example, the creation of Mesh components if some POI is to be visualised that way -- the mapping that you were discussing.
>>>>
>>>> But my point was not about that, but about the POI data itself -- and the example about some end-user GUI with a widget that manipulates it. So it then gets automatically synchronised along with all the other data in the application in a collaborative setting etc.
>>>>
>>>> Stepping out of the previous terminology, we could perhaps translate: "scene" -> "application state" and "scene server" -> "synchronization server".
>>>>
>>>> I hope this clarifies something -- my apologies if not..
>>>>
>>>> Cheers,
>>>> ~Toni
>>>>
>>>> P.S. I sent the previous post from a foreign device and accidentally with my gmail address as sender, so it didn't make it to the list -- so thank you for quoting it in full; I don't think we need to repost that :)
>>>>
>>>>> This is essentially the Mapping stage of the well-known Visualization pipeline (http://www.infovis-wiki.net/index.php/Visualization_Pipeline), except that here we also map interaction aspects to an abstract scene description (XML3D) first, which then performs the rendering and interaction. So you can think of this as an additional "Scene" stage between "Mapping" and "Rendering".
>>>>>
>>>>>> I think this is a different topic, but also with real-virtual interaction, for example how to facilitate nice, simple authoring of e.g. the real-virtual object mappings seems a fruitful enough angle to think about a bit, perhaps as a case to help in understanding the entity system & the different servers etc. For example, if there's a component type 'real world link', the Interface Designer GUI shows it automatically in the list of components; ppl can just add them to their scenes and somehow then the system just works..
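The "component type declaration" idea above -- a 'real world link' component type that a tool like the Interface Designer could discover and list automatically -- could be as simple as a type registry that systems register into. This is only a sketch of the idea; the registry, attribute names, and defaults are all invented for illustration:

```python
# Hypothetical component-type registry: subsystems declare their
# component types (with default attributes), and authoring tools can
# enumerate the registry to populate a "list of components" GUI.

COMPONENT_TYPES = {}

def declare_component_type(name, **attribute_defaults):
    """Register a component type so that tools can discover it."""
    COMPONENT_TYPES[name] = attribute_defaults

# A real-virtual interaction system could declare its own type:
declare_component_type("RealWorldLink",
                       device_uri="",       # REST endpoint of the device
                       poll_interval_s=30)  # how often to refresh readings

def list_component_types():
    """What an Interface Designer-style GUI would show in its list."""
    return sorted(COMPONENT_TYPES)

print(list_component_types())  # -> ['RealWorldLink']
```

The point of the registry is exactly the "system just works" behaviour: the GUI does not need to know about sensors at all, it only enumerates declared types.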
>>>>> I am not sure what you are getting at. But it would be great if the Interface Designer would allow choosing such POI mappings from a predefined catalog. It seems that Xflow can be used nicely for generating the mapped scene elements from some input data, e.g. using the same approach we use to provide basic primitives like cubes or spheres in XML3D. Here they are not fixed, built-in tags as in X3D, but can actually be added by the developer as best fits.
>>>>>
>>>>> For generating more complex subgraphs we may have to extend the current Xflow implementation. But it's at least a great starting point to experiment with. Experiments and feedback would be very welcome here.
>>>>>
>>>>>> I don't think these discussions are now hurt by us (currently) having alternative renderers - the entity system, formats, sync and the overall architecture is the same anyway.
>>>>>
>>>>> Well, some things only work in one branch and others only in the other. So the above mechanism could not be used to visualize POIs in the three.js branch, but we do not have all the features to visualize Oulu (or whatever city) in the XML3D.js branch. This definitely IS greatly limiting how we can combine the GEs into more complex applications -- the ultimate goal of the orthogonal design of this chapter.
>>>>>
>>>>> And it does not even work within the same chapter. It will be hard to explain to Juanjo and others from FI-WARE (or the commission, for that matter).
>>>>>
>>>>> BTW, I just learned today that there is a smaller FI-WARE review coming up soon. Let's see if we already have to present things there. So far they have not explicitly asked us.
>>>>>
>>>>> Best,
>>>>>
>>>>> Philipp
>>>>>
>>>>>> -Toni
>>>>>>
>>>>>>> From an XML3D POV things could actually be quite "easy". It should be rather simple to directly interface to the IoT GEs of FI-WARE through REST via a new Xflow element.
>>>>>>> This would then make the data available through <data> elements. Then you can use all the features of Xflow to manipulate the scene based on the data. For example, we are discussing building a set of visualization nodes that implement common visualization metaphors, such as scatter plots, animations, you name it. A new member of the lab starting soon wants to look into this area.
>>>>>>>
>>>>>>> For acting on objects we have always used Web services attached to the XML3D objects via DOM events. Eventually, I believe we want a higher-level input handling and processing framework, but no one knows so far how this should look (we have some ideas, but they are not well baked; any input is highly welcome here). This might or might not reuse some of the Xflow mechanisms.
>>>>>>>
>>>>>>> But how to implement Real-Virtual Interaction is indeed an interesting discussion. Getting us all on the same page and sharing ideas and implementations is very helpful. Doing this on the same SW platform (without the fork that we currently have) would facilitate a powerful implementation even more.
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> Philipp
>>>>>>>
>>>>>>> Am 23.10.2013 08:02, schrieb Tomi Sarni:
>>>>>>>
>>>>>>>> ->Philipp
>>>>>>>> /I did not get the idea why POIs are similar to ECA. At a very high level I see it, but I am not sure what it buys us. Can someone sketch that picture in some more detail?/
>>>>>>>>
>>>>>>>> Well, I suppose it becomes relevant at the point when we are combining our GEs together. If the model can be applied at the level of the scene, then down to a POI in a scene and further down to the sensor level, things can be more easily visualized -- not just in terms of painting 3D models, but in terms of handling big data as well, more specifically handling relationships/inheritance.
>>>>>>>> It also makes it easier to design a RESTful API, as we have a common structure to follow, and it also provides more opportunities for 3rd-party developers to make use of the data for their own purposes.
>>>>>>>>
>>>>>>>> For instance
>>>>>>>>
>>>>>>>> ->Toni
>>>>>>>>
>>>>>>>> From the point of view of sensors, the entity-component becomes device-sensors/actuators. A device may have a unique identifier and IP by which to access it, but it may also contain several actuators and sensors that are components of that device entity. Sensors/actuators themselves are not aware of whom they are interesting to. One client may use the sensor information differently from another client. The sensor/actuator service allows any other service to query, using a request/response method, either by geo-coordinates (circle, square or complex shape queries) or perhaps through type+maxresults, and the service will return entities and their components, from which the requester can form logical groups (arrays of entity uuids) and query more detailed information based on that logical group.
>>>>>>>>
>>>>>>>> I guess there needs to be similar thinking done on the POI level. I guess a POI does not know which scene it belongs to. It is up to the scene server to form a logical group of POIs (e.g. restaurants of the Oulu 3D city model). Then again, the problem is that the scene needs to wait for the POI service to query for sensors and form its logical groups before it can pass information to the scene. This can lead to long wait times. But this sequencing problem is also something that could be thought about. Anyway, this is a common problem with everything on the web at the moment, in my opinion: services become intertwined. When a client loads a web page there can be queries to 20 different services for advertisement and other stuff. The web page handles it by painting stuff to the client on a receive basis. I think this could be applied in the Scene as well.
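The query model Tomi outlines -- querying the sensor/actuator service by type (or geo-shape) with a result limit, receiving device entities with their components, then keeping a "logical group" as an array of entity UUIDs for follow-up queries -- could look roughly like the sketch below. The service shape and all field names are assumptions made for illustration, with the device list faked in-process:

```python
# Sketch of the device query service described above. A real service
# would sit behind REST and also support circle/square/complex
# geo-shape queries; here only type+maxresults is shown.

DEVICES = [
    {"uuid": "dev-1", "type": "temperature", "lat": 65.01, "lon": 25.47,
     "components": ["temp-sensor"]},
    {"uuid": "dev-2", "type": "temperature", "lat": 65.02, "lon": 25.48,
     "components": ["temp-sensor", "display-actuator"]},
    {"uuid": "dev-3", "type": "humidity", "lat": 65.03, "lon": 25.49,
     "components": ["humidity-sensor"]},
]

def query_devices(sensor_type, max_results=10):
    """type + maxresults query returning device entities with components."""
    hits = [d for d in DEVICES if d["type"] == sensor_type]
    return hits[:max_results]

# The requester forms a logical group (array of entity uuids) ...
group = [d["uuid"] for d in query_devices("temperature")]
# ... and can later request more detailed information for that group.
```

The sensors stay unaware of who finds them interesting, as in the text: the grouping lives entirely on the requester's side.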
>>>>>>>> On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> First of all, it's certainly a good thing to also meet locally. I was just a bit confused whether that meeting somehow would involve us as well. Summarizing the results briefly for the others would definitely be interesting.
>>>>>>>>>
>>>>>>>>> I did not get the idea why POIs are similar to ECA. At a very high level I see it, but I am not sure what it buys us. Can someone sketch that picture in some more detail?
>>>>>>>>>
>>>>>>>>> BTW, what is the status of the rendering discussion (three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided.
>>>>>>>>>
>>>>>>>>> BTW, as part of our shading work (which is shaping up nicely), Felix has lately been looking at a way to describe rendering stages (passes) essentially through Xflow. It is still very experimental, but he is using it to implement shadow maps right now.
>>>>>>>>>
>>>>>>>>> @Felix: Once this has converged into a somewhat more stable idea, it would be good to post it here to get feedback. The way we discussed it, this approach could form a nice basis for a modular design of advanced rasterization techniques (reflection maps, advanced face rendering, SSAO, lens flare, tone mapping, etc.), and (later) maybe also describe global illumination settings (similar to our work on LightingNetworks some years ago).
>>>>>>>>>
>>>>>>>>> Best,
>>>>>>>>>
>>>>>>>>> Philipp
>>>>>>>>>
>>>>>>>>> Am 22.10.2013 23:03, schrieb toni at playsign.net:
>>>>>>>>>
>>>>>>>>>> Just a brief note: we had some interesting preliminary discussion triggered by how the data schema that Ari O.
>>>>>>>>>> presented for the POI system seemed at least partly similar to what the Real-Virtual interaction work had resulted in too -- and in fact about how the proposed POI schema was basically a version of the entity-component model which we've already been using for scenes in realXtend (it is inspired by / modeled after it, Ari told). So it can be much related to the Scene API work in the Synchronization GE too. As the action point we agreed that Ari will organize a specific work session on that.
>>>>>>>>>>
>>>>>>>>>> I was now thinking that it perhaps at least partly leads back to the question: how do we define (and implement) component types? I.e. what was mentioned in that entity-system post a few weeks back (with links to reX IComponent etc.). I mean: if functionality such as POIs and real-world interaction makes sense as somehow resulting in custom data component types, does it mean that a key part of the framework is a way for those systems to declare their types .. so that it integrates nicely for the whole we want? I'm not sure, too tired to think it through now, but anyhow just wanted to mention that this was one topic that came up.
>>>>>>>>>>
>>>>>>>>>> I think Web Components is again something to check - as in XML terms reX Components are xml(3d) elements .. just ones that are usually in a group (according to the reX entity <-> xml3d group mapping). And Web Components are about defining & implementing new elements (as Erno pointed out in a different discussion about xml-html authoring in the session).
>>>>>>>>>>
>>>>>>>>>> BTW thanks Kristian for the great comments in that entity system thread - was really good to learn about the alternative attribute access syntax and the validation in XML3D(.js).
>>>>>>>>>>
>>>>>>>>>> ~Toni
>>>>>>>>>>
>>>>>>>>>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the rationale of these Oulu meets -- the idea is ofc not to exclude you from the talks, but it just makes sense for us to meet live too as we are in the same city after all etc -- naturally with the DFKI team you also talk there locally. Perhaps it is a good idea that we make notes so that we can post e.g. here then (I'm not volunteering though!). Also, the now agreed bi-weekly setup on Tuesdays luckily works so that we can then summarize fresh in the global Wed meetings and continue the talks etc.
>>>>>>>>>>
>>>>>>>>>> *From:* Erno Kuusela
>>>>>>>>>> *Sent:* Tuesday, October 22, 2013 9:57 AM
>>>>>>>>>> *To:* Fiware-miwi
>>>>>>>>>>
>>>>>>>>>>> Kari from CIE offered to host it this time, so see you there at 13:00.
>>>>>>>>>>>
>>>>>>>>>>> Erno
>>>>>>>>>>
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Fiware-miwi mailing list
>>>>>>>>>> Fiware-miwi at lists.fi-ware.eu
>>>>>>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi

-- 
-------------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern

Geschäftsführung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes

Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
-------------------------------------------------------------------------

-------------- next part --------------
A non-text attachment was scrubbed...
Name: slusallek.vcf
Type: text/x-vcard
Size: 456 bytes
Desc: not available
URL: 

From tomi.sarni at cyberlightning.com  Fri Oct 25 10:34:12 2013
From: tomi.sarni at cyberlightning.com (Tomi Sarni)
Date: Fri, 25 Oct 2013 11:34:12 +0300
Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00)
In-Reply-To: <526A2BC4.7080005@dfki.de>
References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi>
	<20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi>
	<526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de>
	<52696BD5.8070007@dfki.de> <526A0FB1.9000709@dfki.de> <526A2BC4.7080005@dfki.de>
Message-ID: 

Yes, I agree in general. Just a thought that in some use-cases this could be considered as an option. It has been difficult to design the API in a way that is highly dynamic, in the sense that it would suit a wide variety of application development needs. The NGSI 9/10 development in an earlier GE seemed difficult to adapt and, in my personal opinion, does not allow passing the interaction interface clearly enough to the application developer.

On Fri, Oct 25, 2013 at 11:28 AM, Philipp Slusallek wrote:

> Hi,
>
> With interaction I mean the user interaction. Yes, it eventually gets mapped to REST (or similar) calls to the device. But how you map the device functionality to user interaction is a big step, where different applications will have very different assumptions and interaction metaphors. Mapping them all to a generic sensor model seems very difficult.
>
> Using a semantic annotation avoids having to create such a mapping when you design the sensor, avoids having to store the model on each sensor, and pushes the mapping to the software/application side, which is (in my opinion) in a much better position to decide on that mapping. A fallback mapping may still be provided by the sensor for the most basic cases.
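Philipp's scheme -- a semantic annotation on the sensor, an application-side mapping that takes precedence, and a sensor-provided fallback for the basic cases -- can be sketched as a small lookup. All field names and URLs below are invented for illustration, not a proposed format:

```python
# Sketch of the hybrid mapping described above: the semantic sensor
# description carries a type tag plus an optional default-model URI;
# the application's own mapping wins, the sensor default is a fallback.

SENSOR_DESCRIPTION = {
    "id": "sensor-42",
    "semantic_type": "temperature",  # a simple tag; could be an RDF/a graph
    "default_model": "http://example.org/models/thermometer.xml3d",
}

# Application-side mapping from semantic type to a representation.
APP_MAPPING = {
    "temperature": "http://example.org/app/fancy-thermometer.xml3d",
}

def representation_for(description, app_mapping):
    """Application mapping takes precedence; the sensor-provided
    default model is only used as a fallback."""
    semantic_type = description.get("semantic_type")
    if semantic_type in app_mapping:
        return app_mapping[semantic_type]
    return description.get("default_model")

chosen = representation_for(SENSOR_DESCRIPTION, APP_MAPPING)   # app wins
fallback = representation_for(SENSOR_DESCRIPTION, {})          # sensor default
```

This keeps the sensor free of application-specific assumptions while still guaranteeing some default visualization, which is the trade-off argued for above.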
> Best,
>
> Philipp
>
> Am 25.10.2013 09:05, schrieb Tomi Sarni:
>
>> /It becomes more of an issue when we talk about interactivity, when the visual representation needs to react to user input in a way that is consistent with the application and calls functionality in the application. In other words, you have to do a mapping from the sensor to the application at some point along the pipeline (and back for actions to be performed by an actuator)./
>>
>> Currently, when a client polls a device (containing sensors and/or actuators), it will receive all interaction options that are available for the particular sensor or actuator. These options can then be accessed via an HTTP POST method from the service. So there is the logical mapping. I can see your point though; in a way it would seem logical to have that XML3D model contain states (e.g. button-up and button-down 3D model states), and I have no idea whether this is supported by XML3D, as I have been busy on the server/sensor side. This way, when a sensor is accessed by an HTTP POST call to change state to either on or off, for instance, the XML3D model could contain transition logic to change appearance from one state to another. Alternatively, there can be two models for two states. When the actuator is queried, it will return the model that corresponds to its current state.
>>
>> On Fri, Oct 25, 2013 at 9:29 AM, Philipp Slusallek wrote:
>>
>>> Hi Tomi,
>>>
>>> Yes, this is definitely an interesting option, and when sensors offer RESTful interfaces it should be almost trivial to add (once a suitable and standardized way of how to find that data is specified). At least it would provide a kind of default visualization in case no other is available.
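Tomi's poll-then-act flow -- GET the device's advertised interaction options, then invoke one via HTTP POST -- might look like this in outline. The endpoints, payload shape, and option names are assumptions, and the device is faked in-process so the control flow is visible without a network:

```python
# Outline of the poll/POST flow described above. A real client would
# issue HTTP GET and POST against the device's REST interface; here a
# fake in-memory device stands in for it.

FAKE_DEVICE = {
    "state": "off",
    # Interaction options the device advertises when polled:
    "options": [
        {"action": "set_state", "values": ["on", "off"]},
    ],
}

def poll_device(device):
    """GET: return the interaction options available on this device."""
    return device["options"]

def invoke(device, action, value):
    """POST: invoke one of the advertised interaction options."""
    allowed = {o["action"]: o["values"] for o in device["options"]}
    if action not in allowed or value not in allowed[action]:
        raise ValueError("option not advertised by device")
    device["state"] = value  # e.g. the actuator switches on
    return device["state"]

options = poll_device(FAKE_DEVICE)
invoke(FAKE_DEVICE, "set_state", "on")
# A stateful device could now return the XML3D model matching "on".
```

Rejecting actions outside the advertised options is what makes the advertised option list "the logical mapping" in Tomi's sense: the client can only act through what the poll returned.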
>>> It becomes more of an issue when we talk about interactivity, when the visual representation needs to react to user input in a way that is consistent with the application and calls functionality in the application. In other words, you have to do a mapping from the sensor to the application at some point along the pipeline (and back for actions to be performed by an actuator).
>>>
>>> Either we specify the sensor type through some semantic means (a simple tag in the simplest case, a full RDF/a graph in the best case) and let the application choose how to represent it, or we need to find a way to map the generic behavior of a default object to application functionality. The first seems much easier to me, as application functionality is likely to vary much more than sensor functionality. And semantic sensor descriptions have been worked on for a long time and are available on the market.
>>>
>>> [remainder of the earlier quoted thread trimmed; quoted in full above]
>> But the >> sensor may have very little space to use for the model from up >> 8-18 KB. >> Also the web service can attach the models to IDs through use of >> data >> base. This is really just a pointer, perhaps there would be >> use-cases >> where the sensor visualization could be stored within the sensor >> itself, >> i think specifically some AR solutions could benefit from this. >> But do >> not let this mix up things, this perhaps reinforces the fact >> that there >> need to be overlaying middleware services that attach visual >> representation based on their own needs. One service could use >> different >> 3d representation for temperature sensor than another one. >> >> >> >> >> On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek >> >> > >> >> >>> >> wrote: >> >> Hi, >> >> OK, now I get it. This does make sense -- at least in a local >> scenario, where the POI data (in this example) needs to be >> stored >> somewhere anyway and storing it in a component and then >> generating >> the appropriate visual component does make sense. Using web >> components or a similar mechanism we could actually do the >> same via >> the DOM (as discussed for the general ECA sync before). >> >> But even then you might actually not want to store all the >> POI data >> but only the part that really matter to the application >> (there may >> be much more data -- maybe not for POIs but potentially for >> other >> things). >> >> Also in a distributed scenario, I am not so sure. In that >> case you >> might want to do that mapping on the server and only sync the >> resulting data, maybe with reference back so you can still >> interact >> with the original data through a service call. That is the >> main >> reason why I in general think of POI data and POI >> representation as >> separate entities. 
>> >> Regarding terminology, I think it does make sense to >> differntiate >> between the 3D scene and the application state (that is not >> directly >> influencing the 3D rendering and interaction). While you >> store them >> within the same data entity (but in different components), >> they >> still refer to quite different things and are operated on by >> different parts of you program (e.g. the renderer only ever >> touches >> the "scene" data). We do the same within the XML3D core, >> where we >> attach renderer-specific data to DOM nodes and I believe >> three.js >> also does something similar within its data structures. At >> the end, >> you have to store these things somewhere and there are only >> so many >> way to implement it. The differences are not really that big. >> >> >> Best, >> >> Philipp >> >> Am 24.10.2013 19:24, schrieb Toni Alatalo: >> >> On 24 Oct 2013, at 19:24, Philipp Slusallek >> > > >> >> >> >> > __d**f__ki.de < >> http://dfki.de> >> >> >> >> >>>> >> wrote: >> >> Good discussion! >> >> >> I find so too ? thanks for the questions and comments >> and all! Now >> briefly about just one point: >> >> Am 24.10.2013 17:37, schrieb Toni Alatalo: >> >> integrates to the scene system too - for >> example if a >> scene server >> queries POI services, does it then only use the >> data to >> manipulate >> the scene using other non-POI components, or >> does it >> often make sense >> also to include POI components in the scene so >> that the >> clients get >> it too automatically with the scene sync and >> can for >> example provide >> POI specific GUI tools. Ofc clients can query POI >> services directly >> too but this server centric setup is also one >> scenario >> and there the >> scene integration might make sense. >> >> But I would say that there is a clear distinction >> between >> the POI data >> (which you query from some service) and the >> visualization or >> representation of the POI data. 
Maybe you are more >> talking >> about the >> latter here. However, there really is an application >> dependent mapping >> from the POI data to its representation. Each >> application >> may choose >> to present the same POI data in very different way >> and its >> only this >> resulting representation that becomes part of the >> scene. >> >> >> No I was not talking about visualization or >> representations here >> but the >> POI data. >> >> non-POI in the above tried to refer to the whole which >> covers >> visualisations etc :) >> >> Your last sentence may help to understand the confusion: >> in >> these posts >> I?ve been using the reX entity system terminology only >> ? hoping >> that it >> is clear to discuss that way and not mix terms (like >> I?ve tried >> to do in >> some other threads). >> >> There ?scene? does not refer to a visual / graphical or >> any >> other type >> of scene. It does not refer to e.g. something like what >> xml3d.js and >> three.js, or ogre, have as their Scene objects. >> >> It simply means the collection of all entities. There it >> is >> perfectly >> valid to any kind of data which does not end up to e.g. >> the >> visual scene >> ? many components are like that. >> >> So in the above ?only use the data to manipulate the >> scene using >> other >> non-POI components? was referring to for example >> creation of Mesh >> components if some POI is to be visualised that way. >> The mapping >> that >> you were discussing. >> >> But my point was not about that but about the POI data >> itself ? >> and the >> example about some end user GUI with a widget that >> manipulates >> it. So it >> then gets automatically synchronised along with all the >> other >> data in >> the application in a collaborative setting etc. >> >> Stepping out of the previous terminology, we could >> perhaps >> translate: >> ?scene? -> ?application state? and ?scene server? -> >> ?synchronization >> server?. >> >> I hope this clarifies something ? 
my apologies if not.. >> >> Cheers, >> ~Toni >> >> P.S. i sent the previous post from a foreign device and >> accidentally >> with my gmail address as sender so it didn?t make it to >> the list >> ? so >> thank you for quoting it in full so I don?t think we >> need to >> repost that :) >> >> This is essentially the Mapping stage of the >> well-known >> Visualization >> pipeline >> >> (http://www.infovis-wiki.net/_**___index.php/Visualization____** >> _Pipeline >> > Pipeline >> > >> >> >> > Pipeline< >> http://www.infovis-wiki.net/**index.php/Visualization_**Pipeline >> >>), >> >> except >> that here we also map interaction aspects to an >> abstract scene >> description (XML3D) first, which then performs the >> rendering and >> interaction. So you can think of this as an >> additional >> "Scene" stage >> between "Mapping" and "Rendering". >> >> I think this is a different topic, but also with >> real-virtual >> interaction for example how to facilitate nice >> simple >> authoring of >> the e.g. real-virtual object mappings seems a >> fruitful >> enough angle >> to think a bit, perhaps as a case to help in >> understanding the entity >> system & the different servers etc. For example >> if there's a >> component type 'real world link', the Interface >> Designer >> GUI shows it >> automatically in the list of components, ppl >> can just >> add them to >> their scenes and somehow then the system just >> works.. >> >> >> I am not sure what you are getting at. But it would >> be great >> if the >> Interface Designer would allow to choose such POI >> mappings >> from a >> predegined catalog. It seems that Xflow can be used >> nicely for >> generating the mapped scene elements from some >> input data, >> e.g. using >> the same approach we use to provide basic >> primitives like >> cubes or >> spheres in XML3D. Here they are not fixed, build-in >> tags as >> in X3D but >> can actually be added by the developer as it best >> fits. 
>> >> For generating more complex subgraphs we may have >> to extend the >> current Xflow implementation. But its at least a >> great >> starting point >> to experiment with it. Experiments and feedback >> would be >> very welcome >> here. >> >> I don't think these discussions are now hurt by >> us >> (currently) having >> alternative renderers - the entity system, >> formats, sync >> and the >> overall architecture is the same anyway. >> >> >> Well, some things only work in one and others only >> in the other >> branch. So the above mechanism could not be used to >> visualize POIs in >> the three.js branch but we do not have all the >> features to >> visualize >> Oulu (or whatever city) in the XML3D.js branch. This >> definitely IS >> greatly limiting how we can combine the GEs into >> more complex >> applications -- the untimate goal of the orthogonal >> design >> of this >> chapter. >> >> And it does not even work within the same chapter. >> It will >> be hard to >> explain to Juanjo and others from FI-WARE (or the >> commission >> for that >> matter). >> >> BTW, I just learned today that there is a FI-WARE >> smaller review >> coming up soon. Let's see if we already have to >> present >> things there. >> So far they have not explicitly asked us. >> >> >> Best, >> >> Philipp >> >> -Toni >> >> >> From an XML3D POV things could actually be >> quite >> "easy". It should >> be rather simple to directly interface to >> the IoT >> GEs of FI-WARE >> through REST via a new Xflow element. This >> would >> then make the data >> available through elements. Then you >> can use >> all the features >> of Xflow to manipulate the scene based on >> the data. >> For example, we >> are discussing building a set of >> visualization nodes >> that implement >> common visualization metaphors, such as >> scatter >> plots, animations, >> you name it. A new member of the lab >> starting soon >> wants to look >> into this area. 
>>
>> For acting on objects we have always used Web services attached to
>> the XML3D objects via DOM events. Eventually, I believe we want a
>> higher-level input handling and processing framework, but no one
>> knows so far how this should look (we have some ideas but they are
>> not well baked; any input is highly welcome here). This might or
>> might not reuse some of the Xflow mechanisms.
>>
>> But how to implement Real-Virtual Interaction is indeed an
>> interesting discussion. Getting us all on the same page and sharing
>> ideas and implementations is very helpful. Doing this on the same SW
>> platform (without the fork that we currently have) would facilitate a
>> powerful implementation even more.
>>
>> Thanks
>>
>> Philipp
>>
>> Am 23.10.2013 08:02, schrieb Tomi Sarni:
>>
>> ->Philipp
>> /I did not get the idea why POIs are similar to ECA. At a very high
>> level I see it, but I am not sure what it buys us. Can someone sketch
>> that picture in some more detail?/
>>
>> Well, I suppose it becomes relevant at the point when we are
>> combining our GEs together. If the model can be applied at the level
>> of the scene, then down to a POI in a scene, and further down at the
>> sensor level, things can be visualized more easily. Not just in terms
>> of painting 3D models, but in terms of handling big data as well,
>> more specifically handling relationships/inheritance. It also makes
>> it easier to design a RESTful API, as we have a common structure to
>> follow, and it provides more opportunities for 3rd-party developers
>> to make use of the data for their own purposes.
>>
>> For instance
>>
>> ->Toni
>>
>> From the point of view of sensors, the entity-component becomes
>> device-sensors/actuators.
A device may have a unique identifier and IP by which to access it,
>> but it may also contain several actuators and sensors that are
>> components of that device entity. Sensors/actuators themselves are
>> not aware of whom they are interesting to. One client may use the
>> sensor information differently to another client. The sensor/actuator
>> service allows any other service to query, using a request/response
>> method, either by geo-coordinates (circle, square or complex shape
>> queries) or perhaps through type+maxresults, and the service will
>> return entities and their components, from which the requester can
>> form logical groups (arrays of entity UUIDs) and query more detailed
>> information based on that logical group.
>>
>> I guess there needs to be similar thinking done on the POI level. I
>> guess a POI does not know which scene it belongs to. It is up to the
>> scene server to form a logical group of POIs (e.g. restaurants of the
>> Oulu 3D city model). Then again, the problem is that the scene needs
>> to wait for the POI service to query for sensors and form its logical
>> groups before it can pass information to the scene. This can lead to
>> long wait times. But this sequencing problem is also something that
>> could be thought about. Anyway, this is a common problem with
>> everything on the web at the moment, in my opinion. Services become
>> intertwined. When a client loads a web page there can be queries to
>> 20 different services for advertisement and other stuff. The web page
>> handles it by painting stuff to the client on a receive basis. I
>> think this could be applied in the Scene as well.
>>
>> On Wed, Oct 23, 2013 at 8:00 AM, Philipp Slusallek
>> <Philipp.Slusallek at dfki.de> wrote:
>>
>> Hi,
>>
>> First of all, it's certainly a good thing to also meet locally. I was
>> just a bit confused whether that meeting somehow would involve us as
>> well. Summarizing the results briefly for the others would definitely
>> be interesting.
>>
>> I did not get the idea why POIs are similar to ECA. At a very high
>> level I see it, but I am not sure what it buys us. Can someone sketch
>> that picture in some more detail?
>>
>> BTW, what is the status with the Rendering discussion (Three.js vs.
>> xml3d.js)? I still have the feeling that we are doing parallel work
>> here that should probably be avoided.
>>
>> BTW, as part of our shading work (which is shaping up nicely) Felix
>> has lately been looking at a way to describe rendering stages
>> (passes) essentially through Xflow. It is still very experimental,
>> but he is using it to implement shadow maps right now.
>>
>> @Felix: Once this has converged into a bit more stable idea, it would
>> be good to post it here to get feedback. The way we discussed it,
>> this approach could form a nice basis for a modular design of
>> advanced rasterization techniques (reflection maps, adv. face
>> rendering, SSAO, lens flare, tone mapping, etc.), and (later) maybe
>> also describe global illumination settings (similar to our work on
>> LightingNetworks some years ago).
>>
>> Best,
>>
>> Philipp
>>
>> Am 22.10.2013 23:03, schrieb toni at playsign.net:
>>
>> Just a brief note: we had some interesting preliminary discussion
>> triggered by how the data schema that Ari O.
presented for the POI system seemed at least partly similar to what
>> the Real-Virtual interaction work had resulted in too -- and in fact
>> about how the proposed POI schema was basically a version of the
>> entity-component model which we've already been using for scenes in
>> realXtend (it is inspired by / modeled after it, Ari told). So it can
>> be much related to the Scene API work in the Synchronization GE too.
>> As the action point, we agreed that Ari will organize a specific work
>> session on that.
>> I was now thinking that it perhaps at least partly leads back to the
>> question of how we define (and implement) component types, i.e. what
>> was mentioned in that entity-system post a few weeks back (with links
>> to reX IComponent etc.). I mean: if functionality such as POIs and
>> real-world interaction makes sense as somehow resulting in custom
>> data component types, does it mean that a key part of the framework
>> is a way for those systems to declare their types .. so that it
>> integrates nicely for the whole we want? I'm not sure, too tired to
>> think it through now, but anyhow just wanted to mention that this was
>> one topic that came up.
>> I think Web Components is again something to check - as in XML terms,
>> reX Components are xml(3d) elements .. just ones that are usually in
>> a group (according to the reX entity <-> xml3d group mapping). And
>> Web Components are about defining & implementing new elements (as
>> Erno pointed out in a different discussion about xml-html authoring
>> in the session).
>> BTW, thanks Kristian for the great comments in that entity system
>> thread - it was really good to learn about the alternative attribute
>> access syntax and the validation in XML3D(.js).
>> ~Toni
>> P.S.
for (Christof &) the DFKI folks: I'm sure you understand the rationale
>> of these Oulu meets -- the idea is of course not to exclude you from
>> the talks, it just makes sense for us to meet live too as we are in
>> the same city after all etc. -- naturally, with the DFKI team you
>> also talk there locally. Perhaps it is a good idea that we make notes
>> so that we can post them e.g. here then (I'm not volunteering
>> though! :) ). Also, the now agreed bi-weekly setup on Tuesdays
>> luckily works so that we can then summarize fresh in the global Wed
>> meetings and continue the talks etc.
>> *From:* Erno Kuusela
>> *Sent:* Tuesday, October 22, 2013 9:57 AM
>> *To:* Fiware-miwi
>>
>> Kari from CIE offered to host it this time, so see you there at
>> 13:00.
>>
>> Erno
>>
>> _______________________________________________
>> Fiware-miwi mailing list
>> Fiware-miwi at lists.fi-ware.eu
>> https://lists.fi-ware.eu/listinfo/fiware-miwi
>>
>> --
>> -------------------------------------------------------------------
>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
>> Trippstadter Strasse 122, D-67663 Kaiserslautern
>>
>> Geschäftsführung:
>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>> Dr. Walter Olthoff
>> Vorsitzender des Aufsichtsrats:
>> Prof. Dr. h.c. Hans A. Aukes
>>
>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
>> -------------------------------------------------------------------
>
> --
> -------------------------------------------------------------------
> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
> Trippstadter Strasse 122, D-67663 Kaiserslautern
>
> Geschäftsführung:
> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
> Dr.
Walter Olthoff
> Vorsitzender des Aufsichtsrats:
> Prof. Dr. h.c. Hans A. Aukes
>
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
> -------------------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tomi.sarni at cyberlightning.com Fri Oct 25 10:37:00 2013
From: tomi.sarni at cyberlightning.com (Tomi Sarni)
Date: Fri, 25 Oct 2013 11:37:00 +0300
Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00)
In-Reply-To:
References: <20131018141554.GA62563@ee.oulu.fi>
 <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi>
 <20131022213628.6BD0218003E@dionysos.netplaza.fi>
 <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de>
 <526949CA.5080402@dfki.de> <52696BD5.8070007@dfki.de>
 <526A0FB1.9000709@dfki.de> <526A2BC4.7080005@dfki.de>
Message-ID:

And yes, this is entirely another discussion :) sorry about the
off-topic spam.

On Fri, Oct 25, 2013 at 11:34 AM, Tomi Sarni wrote:

> Yes, I agree in general. Just a thought that in some use-cases this
> could be thought of as an option. It has been difficult to design the
> API in a way that it would be highly dynamic, in the sense that it
> would suit a wide variety of application development needs. The NGSI
> 9/10 development in an earlier GE seemed difficult to adapt and, in my
> personal opinion, does not allow passing the interaction interface
> clearly enough to the application developer.
>
> On Fri, Oct 25, 2013 at 11:28 AM, Philipp Slusallek <
> Philipp.Slusallek at dfki.de> wrote:
>
>> Hi,
>>
>> With interaction I mean the user interaction. Yes, it eventually gets
>> mapped to REST (or similar) calls to the device. But how you map the
>> device functionality to user interaction is a big step where
>> different applications will have very different assumptions and
>> interaction metaphors. Mapping them all to a generic sensor model
>> seems very difficult.
>>
>> Using a semantic annotation avoids having to create such a mapping
>> when you design the sensor, avoids having to store the model on each
>> sensor, and pushes the mapping to the software/application side,
>> which is (in my opinion) in a much better position to decide on that
>> mapping. A fallback mapping may still be provided by the sensor for
>> the most basic cases.
>>
>> Best,
>>
>> Philipp
>>
>> Am 25.10.2013 09:05, schrieb Tomi Sarni:
>>
>>> /It becomes more of an issue when we talk about interactivity, when
>>> the visual representation needs to react to user input in a way that
>>> is consistent with the application and calls functionality in the
>>> application. In other words, you have to do a mapping from the
>>> sensor to the application at some point along the pipeline (and back
>>> for actions to be performed by an actuator)./
>>>
>>> Currently, when a client polls a device (containing sensors and/or
>>> actuators), it will receive all interaction options that are
>>> available for the particular sensor or actuator. These options can
>>> then be accessed via an HTTP POST method from the service. So there
>>> is the logical mapping. I can see your point though: in a way it
>>> would seem logical to have that XML3D model contain states (e.g.
>>> button-up and button-down 3D model states), and I have no idea
>>> whether this is supported by XML3D, as I have been busy on the
>>> server/sensor side. This way, when a sensor is accessed by an HTTP
>>> POST call to change state to either on or off, for instance, the
>>> XML3D model could contain transition logic to change its appearance
>>> from one state to another. Alternatively, there can be two models
>>> for two states. When the actuator is queried, it will return the
>>> model that corresponds to its current state.
>>>
>>> On Fri, Oct 25, 2013 at 9:29 AM, Philipp Slusallek wrote:
>>>
>>> Hi Tomi,
>>>
>>> Yes, this is definitely an interesting option, and when sensors
>>> offer RESTful interfaces it should be almost trivial to add (once a
>>> suitable and standardized way of finding that data is specified). At
>>> least it would provide a kind of default visualization in case no
>>> other is available.
>>>
>>> It becomes more of an issue when we talk about interactivity, when
>>> the visual representation needs to react to user input in a way that
>>> is consistent with the application and calls functionality in the
>>> application. In other words, you have to do a mapping from the
>>> sensor to the application at some point along the pipeline (and back
>>> for actions to be performed by an actuator).
>>>
>>> Either we specify the sensor type through some semantic means (a
>>> simple tag in the simplest case, a full RDF/a graph in the best
>>> case) and let the application choose how to represent it, or we need
>>> to find a way to map generic behavior of a default object to
>>> application functionality. The first seems much easier to me, as
>>> application functionality is likely to vary much more than sensor
>>> functionality. And semantic sensor descriptions have been worked on
>>> for a long time and are available on the market.
>>>
>>> Of course, there are hybrid methods as well: a simple one would be
>>> to include a URI/URL to a default model in the semantic sensor
>>> description, which then gets loaded either from the sensor through
>>> REST (given some namespace there) or via the Web (again using some
>>> namespace or search strategy). Then the application can always
>>> inject its own mapping to what it thinks is the best mapping.
>>>
>>> Best,
>>>
>>> Philipp
>>>
>>> Am 25.10.2013 07:52, schrieb Tomi Sarni:
>>>
>>> *The following is on a completely theoretical level:*
>>>
>>> To mix things up a little further, I've been thinking about the
>>> possibility of storing the visual representation of sensors within
>>> the sensors themselves. Many sensor types allow HTTP POST/GET or
>>> even PUT/DELETE methods (wrapped in SNMP/CoAP communication
>>> protocols, for instance), which in theory would allow sensor
>>> subscribers to also publish information on the sensors (e.g. upload
>>> an XML3D model). This approach could be useful in cases where these
>>> sensors would have different purposes of use. But the sensor may
>>> have very little space to use for the model, some 8-18 KB. Also, the
>>> web service can attach the models to IDs through the use of a
>>> database. This is really just a pointer; perhaps there would be
>>> use-cases where the sensor visualization could be stored within the
>>> sensor itself -- I think specifically some AR solutions could
>>> benefit from this. But do not let this mix things up; this perhaps
>>> reinforces the fact that there need to be overlaying middleware
>>> services that attach the visual representation based on their own
>>> needs. One service could use a different 3D representation for a
>>> temperature sensor than another one.
>>>
>>> On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek wrote:
>>>
>>> Hi,
>>>
>>> OK, now I get it. This does make sense -- at least in a local
>>> scenario, where the POI data (in this example) needs to be stored
>>> somewhere anyway, and storing it in a component and then generating
>>> the appropriate visual component does make sense. Using web
>>> components or a similar mechanism we could actually do the same via
>>> the DOM (as discussed for the general ECA sync before).
>>>
>>> But even then you might actually not want to store all the POI data,
>>> but only the part that really matters to the application (there may
>>> be much more data -- maybe not for POIs, but potentially for other
>>> things).
>>>
>>> Also, in a distributed scenario I am not so sure. In that case you
>>> might want to do that mapping on the server and only sync the
>>> resulting data, maybe with a reference back so you can still
>>> interact with the original data through a service call. That is the
>>> main reason why I generally think of POI data and POI representation
>>> as separate entities.
>>>
>>> Regarding terminology, I think it does make sense to differentiate
>>> between the 3D scene and the application state (which does not
>>> directly influence the 3D rendering and interaction). While you
>>> store them within the same data entity (but in different
>>> components), they still refer to quite different things and are
>>> operated on by different parts of your program (e.g. the renderer
>>> only ever touches the "scene" data). We do the same within the XML3D
>>> core, where we attach renderer-specific data to DOM nodes, and I
>>> believe three.js also does something similar within its data
>>> structures. In the end, you have to store these things somewhere,
>>> and there are only so many ways to implement it. The differences are
>>> not really that big.
>>>
>>> Best,
>>>
>>> Philipp
>>>
>>> Am 24.10.2013 19:24, schrieb Toni Alatalo:
>>>
>>> On 24 Oct 2013, at 19:24, Philipp Slusallek
>>> <Philipp.Slusallek at dfki.de> wrote:
>>>
>>> Good discussion!
>>>
>>> I find so too -- thanks for the questions and comments and all!
Now briefly about just one point:
>>>
>>> Am 24.10.2013 17:37, schrieb Toni Alatalo:
>>>
>>> integrates to the scene system too - for example, if a scene server
>>> queries POI services, does it then only use the data to manipulate
>>> the scene using other, non-POI components, or does it often make
>>> sense also to include POI components in the scene, so that the
>>> clients get them automatically too with the scene sync, and can for
>>> example provide POI-specific GUI tools? Of course clients can query
>>> POI services directly too, but this server-centric setup is also one
>>> scenario, and there the scene integration might make sense.
>>>
>>> But I would say that there is a clear distinction between the POI
>>> data (which you query from some service) and the visualization or
>>> representation of the POI data. Maybe you are more talking about the
>>> latter here. However, there really is an application-dependent
>>> mapping from the POI data to its representation. Each application
>>> may choose to present the same POI data in a very different way, and
>>> it is only this resulting representation that becomes part of the
>>> scene.
>>>
>>> No, I was not talking about visualization or representations here,
>>> but the POI data.
>>>
>>> "Non-POI" in the above tried to refer to the whole, which covers
>>> visualisations etc. :)
>>>
>>> Your last sentence may help to understand the confusion: in these
>>> posts I've been using the reX entity system terminology only --
>>> hoping that it is clear to discuss that way and not mix terms (like
>>> I've tried to do in some other threads).
>>>
>>> There, "scene" does not refer to a visual/graphical or any other
>>> type of scene. It does not refer to e.g. something like what
>>> xml3d.js and three.js, or Ogre, have as their Scene objects.
>>>
>>> It simply means the collection of all entities.
There it is perfectly valid to have any kind of data which does not
>>> end up in e.g. the visual scene -- many components are like that.
>>>
>>> So in the above, "only use the data to manipulate the scene using
>>> other non-POI components" was referring to, for example, the
>>> creation of Mesh components if some POI is to be visualised that
>>> way -- the mapping that you were discussing.
>>>
>>> But my point was not about that, but about the POI data itself --
>>> and the example about some end-user GUI with a widget that
>>> manipulates it. So it then gets automatically synchronised along
>>> with all the other data in the application, in a collaborative
>>> setting etc.
>>>
>>> Stepping out of the previous terminology, we could perhaps
>>> translate: "scene" -> "application state" and "scene server" ->
>>> "synchronization server".
>>>
>>> I hope this clarifies something -- my apologies if not..
>>>
>>> Cheers,
>>> ~Toni
>>>
>>> P.S. I sent the previous post from a foreign device and accidentally
>>> with my gmail address as sender, so it didn't make it to the list --
>>> so thank you for quoting it in full; I don't think we need to repost
>>> that :)
>>>
>>> This is essentially the Mapping stage of the well-known
>>> Visualization pipeline
>>> (http://www.infovis-wiki.net/index.php/Visualization_Pipeline),
>>> except that here we also map interaction aspects to an abstract
>>> scene description (XML3D) first, which then performs the rendering
>>> and interaction. So you can think of this as an additional "Scene"
>>> stage between "Mapping" and "Rendering".
>>> here then (I'm not volunteering though! :) ). Also, the now agreed
>>> bi-weekly setup on Tuesdays luckily works so that we can then
>>> summarize fresh in the global Wed meetings and continue the talks
>>> etc.
>>> *From:* Erno Kuusela
>>> *Sent:* Tuesday, October 22, 2013 9:57 AM
>>> *To:* Fiware-miwi
>>>
>>> Kari from CIE offered to host it this time, so see you there at
>>> 13:00.
>>>
>>> Erno
>>>
>>> _______________________________________________
>>> Fiware-miwi mailing list
>>> Fiware-miwi at lists.fi-ware.eu
>>> https://lists.fi-ware.eu/listinfo/fiware-miwi
>>>
>>> --
>>> -------------------------------------------------------------------
>>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
>>> Trippstadter Strasse 122, D-67663 Kaiserslautern
>>>
>>> Geschäftsführung:
>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>>> Dr. Walter Olthoff
>>> Vorsitzender des Aufsichtsrats:
>>> Prof. Dr. h.c. Hans A. Aukes
>>>
>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
>>> -------------------------------------------------------------------
>>
>> --
>> -------------------------------------------------------------------
>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
>> Trippstadter Strasse 122, D-67663 Kaiserslautern
>>
>> Geschäftsführung:
>> Prof. Dr.
>> Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>> Dr. Walter Olthoff
>> Vorsitzender des Aufsichtsrats:
>> Prof. Dr. h.c. Hans A. Aukes
>>
>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
>> -------------------------------------------------------------------

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Philipp.Slusallek at dfki.de Fri Oct 25 10:48:08 2013
From: Philipp.Slusallek at dfki.de (Philipp Slusallek)
Date: Fri, 25 Oct 2013 10:48:08 +0200
Subject: [Fiware-miwi] a canonical custom component example: Door (Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00))
In-Reply-To: <69BDA86F-68AF-47DC-B715-D27F9267B54A@playsign.net>
References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> <52696BD5.8070007@dfki.de> <526A0FB1.9000709@dfki.de> <69BDA86F-68AF-47DC-B715-D27F9267B54A@playsign.net>
Message-ID: <526A3048.2010101@dfki.de>

Hi Toni,

Great example! WebComponents are meant to allow you to do exactly this
on the HTML5 side, and we should be targeting them in the 2D-UI
objective as well. It would be great to port this to the XML3D side to
show how we can (hopefully) model that nicely in the Web context too. I
would love to see it, and to fix any issues we may encounter along the
way. This would be a great collaboration and would nicely show the
power of our GEs!

I very much like the idea of using the mouse wheel for opening the door
in an analog way. It could even be coupled to a physics engine,
modifying the mapping based on the mass of the door (or such).

Regarding AnyDSL, the JS side would define the "Door" data structure
and connect to the corresponding "Door service" on the server side.
In our C/C++ implementation, we would then see the native data
structures in realXtend, map the JS data structure to the C/C++ data
structure, and generate optimal code to implement that mapping and the
encoding for the communication protocol chosen (REST or binary or so).
Currently we assume a 1:1 mapping of the data structures, but more
complex mappings are possible at a later stage. One thing that is high
on my list is optional data elements, which need not always be
transmitted, to increase performance and minimize bandwidth use.

BTW, in our AI-related work, the closed/locked state is exposed as
semantic state that is fed into our semantic reasoner and planner in
order for an agent to decide what action to take. The agent observes
the XML3D scene, feeds this into its belief state, and then decides on
the best action to perform to reach its goal(s). The actions are just
Web services (maybe implemented within the same Web/JS scene, so not
necessarily going over the network). This offers a lot of flexibility
but can be costly to compute if you are not careful. Stripped-down,
more reactive, rule-based modes are also possible, of course, and we
use both depending on how "intelligent" an agent needs to be. The
reasoner and planner are implemented as external Web services.

Best,

Philipp

Am 25.10.2013 09:56, schrieb Toni Alatalo:
> On 25 Oct 2013, at 10:05, Tomi Sarni wrote:
>> Currently, when a client polls a device (containing sensors and/or
>> actuators), it will receive all the interaction options that are
>> available for the particular sensor or actuator. These options can
>> then be accessed via an HTTP POST method from the service. So there
>> is the logical mapping. I can see your point though: in a way it
>> would seem logical to have that XML3D model contain states (e.g.
>> button-up and button-down 3D model states), and I have no idea
>> whether this is supported by XML3D, as I have been busy on the
>> server/sensor side.
>> This way, when a sensor is accessed by an HTTP POST call to change
>> its state to either on or off, for instance, the XML3D model could
>> contain transition logic to change its appearance from one state to
>> another. Alternatively, there can be two models for the two states.
>> When the actuator is queried, it will return the model that
>> corresponds to its current state.
>
> Having arbitrary custom state is exactly what the entity system in
> realXtend is for, and applying the mapping between the reX EC model &
> xml3d that we have now, it would be how we use the xml3d format as
> well. The 'x' in xml is for 'extensible' (and we can nowadays read the
> X in reX the same way, though it originally refers to extending
> reality, not extensible virtual worlds :)
>
> The first script + custom component test & demo I made with
> Naali/Tundra was a Door. It was implemented by defining a Door
> component like this:
>
> Door:
>     bool: opened
>     bool: locked
>
> The functionality was implemented with a script that listened to
> clicks/touches on the object. If the door was closed but not locked,
> it opened upon touch. If it was already open, it was always closed.
> If it was closed and locked, it did not do anything. It could be
> locked/unlocked with a GUI button. When hovering with the mouse
> cursor over the door, the cursor type depended on the state: a
> different icon was used based on whether it was closed-unlocked,
> closed-locked or opened (to communicate in advance to the user the
> action that would happen).
> The demo scene + code for that is in the tundra 1 branch,
> https://github.com/realXtend/naali/blob/tundra/bin/scenes/Door/door.coffee
> .. the port to tundra2 was not completed (yet?) so it's not in the
> current version -- it would still be a nice demo of using custom data
> + scripting, I think; feel free to port it, anyone! The model is a
> nicely modelled & rigged accordion door (haitariovi), which even
> animates.
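[Editor's note: the click-toggle behaviour described above is simple
enough to sketch in a few lines of plain JS. This is only an
illustration of the state logic; the names `Door` and `onTouch` are
invented here, not the actual Tundra/Naali scripting API.]

```javascript
// Illustrative sketch of the Door logic described above (invented
// names, not the real Tundra API).
class Door {
  constructor() {
    this.opened = false; // door starts closed
    this.locked = false; // and unlocked
  }
  // Click/touch handler: an open door always closes; a closed door
  // opens only if it is not locked; a locked closed door does nothing.
  onTouch() {
    if (this.opened) {
      this.opened = false;
    } else if (!this.locked) {
      this.opened = true;
    }
  }
}

const door = new Door();
door.onTouch();           // closed + unlocked -> opens
console.log(door.opened); // true
door.onTouch();           // open -> closes
door.locked = true;
door.onTouch();           // closed + locked -> stays closed
console.log(door.opened); // false
```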
> Next planned step was to allow use of the mouse wheel to slide
> (animate) the door partially, to test streaming sync and not just
> boolean toggles; I think I tested that a little back then, too.
>
> So in the human-friendly xml format (i.e. xml3d) that example would
> look like this:
>
> [xml3d markup scrubbed from the archive]
>
> Whereas the same as TXML, the relevant parts copy-pasted from
> https://github.com/realXtend/naali/blob/tundra/bin/scenes/Door/door.txml
> is: (note that this is tundra1 txml so a bit different from the
> current 2.x series!)
>
> [txml markup scrubbed from the archive]
>
> You can see how it uses DynamicComponent -- that's due to the
> weakness of the JS API in Tundra currently, which was discussed in an
> earlier thread here on MIWI: ideally it would say (EC_)Door in that
> TXML.
>
> In the xml3d example it is assumed that we have a way to register
> custom handlers for component types without separate
> script-references. That's how I originally implemented the system in
> Tundra 0.x times too, and how the original door implementation worked
> (the component registry and sandboxed JS API exposing was first
> implemented in Python, which was made optional later, and the
> later-made EC_Script mechanism made the component type handler
> registry quite redundant - it was nicer, though, to only need to add
> one component to make an object a Door, for example .. currently I
> think it goes ok so that the handler script creates the data
> component it needs, so that users don't need to add two manually). In
> current Tundra you'd get the EC_Door working just like that in C++ by
> implementing the IComponent interface.
>
> ~Toni
>
>> On Fri, Oct 25, 2013 at 9:29 AM, Philipp Slusallek
>> <Philipp.Slusallek at dfki.de> wrote:
>>
>> Hi Tomi,
>>
>> Yes, this is definitely an interesting option, and when sensors
>> offer REST-ful interfaces it should be almost trivial to add (once a
>> suitable and standardized way of how to find that data is
>> specified).
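[Editor's note: the markup itself was lost to the archive scrubber.
Purely as an illustrative guess, based only on the component
definition described above, the two forms could look roughly like
this; the element and attribute names are invented and are not the
contents of the scrubbed files:]

```xml
<!-- Hypothetical xml3d-style form: the reX entity maps to a group,
     the custom Door component to a child element with the two bools. -->
<group id="door">
    <door opened="false" locked="false"></door>
</group>

<!-- Hypothetical tundra1 TXML form, spelling the component out via
     DynamicComponent as the text describes (ideally EC_Door). -->
<entity>
    <component type="EC_DynamicComponent" name="Door">
        <attribute name="opened" value="false"/>
        <attribute name="locked" value="false"/>
    </component>
</entity>
```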
>> At least it would provide a kind of default visualization in case no
>> other is available.
>>
>> It becomes more of an issue when we talk about interactivity, when
>> the visual representation needs to react to user input in a way that
>> is consistent with the application and calls functionality in the
>> application. In other words, you have to do a mapping from the
>> sensor to the application at some point along the pipeline (and back
>> for actions to be performed by an actuator).
>>
>> Either we specify the sensor type through some semantic means (a
>> simple tag in the simplest case, a full RDF/a graph in the best
>> case) and let the application choose how to represent it, or we need
>> to find a way to map generic behavior of a default object to
>> application functionality. The first seems much easier to me, as
>> application functionality is likely to vary much more than sensor
>> functionality. And semantic sensor descriptions have been worked on
>> for a long time and are available on the market.
>>
>> Of course, there are hybrid methods as well: a simple one would be
>> to include a URI/URL to a default model in the semantic sensor
>> description that then gets loaded either from the sensor through
>> REST (given some namespace there) or via the Web (again using some
>> namespace or search strategy). Then the application can always
>> inject its own mapping where it thinks it has a better one.
>>
>> Best,
>>
>> Philipp
>>
>> Am 25.10.2013 07:52, schrieb Tomi Sarni:
>>
>> *The following is completely on a theoretical level:*
>>
>> To mix things up a little further, I've been thinking about a
>> possibility to store the visual representation of sensors within the
>> sensors themselves. Many sensor types allow HTTP POST/GET or even
>> PUT/DELETE methods (wrapped in SNMP/CoAP communication protocols,
>> for instance), which in theory would allow sensor subscribers to
>> also publish information in sensors (e.g. upload an xml3d model).
>> This approach could be useful in
This approach could be >> useful in >> cases where these sensors would have different purposes of >> use. But the >> sensor may have very little space to use for the model from up >> 8-18 KB. >> Also the web service can attach the models to IDs through use >> of data >> base. This is really just a pointer, perhaps there would be >> use-cases >> where the sensor visualization could be stored within the >> sensor itself, >> i think specifically some AR solutions could benefit from >> this. But do >> not let this mix up things, this perhaps reinforces the fact >> that there >> need to be overlaying middleware services that attach visual >> representation based on their own needs. One service could use >> different >> 3d representation for temperature sensor than another one. >> >> >> >> >> On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek >> >> > >> wrote: >> >> Hi, >> >> OK, now I get it. This does make sense -- at least in a local >> scenario, where the POI data (in this example) needs to be >> stored >> somewhere anyway and storing it in a component and then >> generating >> the appropriate visual component does make sense. Using web >> components or a similar mechanism we could actually do the >> same via >> the DOM (as discussed for the general ECA sync before). >> >> But even then you might actually not want to store all the >> POI data >> but only the part that really matter to the application >> (there may >> be much more data -- maybe not for POIs but potentially >> for other >> things). >> >> Also in a distributed scenario, I am not so sure. In that >> case you >> might want to do that mapping on the server and only sync the >> resulting data, maybe with reference back so you can still >> interact >> with the original data through a service call. That is the >> main >> reason why I in general think of POI data and POI >> representation as >> separate entities. 
>> Regarding terminology, I think it does make sense to differentiate
>> between the 3D scene and the application state (which does not
>> directly influence the 3D rendering and interaction). While you
>> store them within the same data entity (but in different
>> components), they still refer to quite different things and are
>> operated on by different parts of your program (e.g. the renderer
>> only ever touches the "scene" data). We do the same within the XML3D
>> core, where we attach renderer-specific data to DOM nodes, and I
>> believe three.js also does something similar within its data
>> structures. In the end, you have to store these things somewhere,
>> and there are only so many ways to implement it. The differences are
>> not really that big.
>>
>> Best,
>>
>> Philipp
>>
>> Am 24.10.2013 19:24, schrieb Toni Alatalo:
>>
>> On 24 Oct 2013, at 19:24, Philipp Slusallek
>> <Philipp.Slusallek at dfki.de> wrote:
>>
>> Good discussion!
>>
>> I find so too -- thanks for the questions and comments and all! Now
>> briefly about just one point:
>>
>> Am 24.10.2013 17:37, schrieb Toni Alatalo:
>>
>> integrates to the scene system too - for example if a scene server
>> queries POI services, does it then only use the data to manipulate
>> the scene using other non-POI components, or does it often make
>> sense also to include POI components in the scene so that the
>> clients get it too automatically with the scene sync and can for
>> example provide POI-specific GUI tools. Ofc clients can query POI
>> services directly too, but this server-centric setup is also one
>> scenario, and there the scene integration might make sense.
>>
>> But I would say that there is a clear distinction between the POI
>> data (which you query from some service) and the visualization or
>> representation of the POI data. Maybe you are talking more about the
>> latter here.
>> However, there really is an application-dependent mapping from the
>> POI data to its representation. Each application may choose to
>> present the same POI data in a very different way, and it is only
>> this resulting representation that becomes part of the scene.
>>
>> No, I was not talking about visualization or representations here
>> but about the POI data.
>>
>> "non-POI" in the above tried to refer to the whole, which covers
>> visualisations etc :)
>>
>> Your last sentence may help to understand the confusion: in these
>> posts I've been using the reX entity system terminology only --
>> hoping that it is clear to discuss that way and not mix terms (like
>> I've tried to do in some other threads).
>>
>> There "scene" does not refer to a visual / graphical or any other
>> type of scene. It does not refer to e.g. something like what
>> xml3d.js and three.js, or ogre, have as their Scene objects.
>>
>> It simply means the collection of all entities. There it is
>> perfectly valid to have any kind of data which does not end up in
>> e.g. the visual scene -- many components are like that.
>>
>> So in the above, "only use the data to manipulate the scene using
>> other non-POI components" was referring to, for example, the
>> creation of Mesh components if some POI is to be visualised that
>> way. The mapping that you were discussing.
>>
>> But my point was not about that, but about the POI data itself --
>> and the example about some end-user GUI with a widget that
>> manipulates it. So it then gets automatically synchronised along
>> with all the other data in the application in a collaborative
>> setting etc.
>>
>> Stepping out of the previous terminology, we could perhaps
>> translate: "scene" -> "application state" and "scene server" ->
>> "synchronization server".
>>
>> I hope this clarifies something -- my apologies if not..
>>
>> Cheers,
>> ~Toni
>>
>> P.S.
>> I sent the previous post from a foreign device and accidentally with
>> my gmail address as sender, so it didn't make it to the list -- so
>> thank you for quoting it in full; I don't think we need to repost
>> that :)
>>
>> This is essentially the Mapping stage of the well-known
>> Visualization pipeline
>> (http://www.infovis-wiki.net/index.php/Visualization_Pipeline),
>> except that here we also map interaction aspects to an abstract
>> scene description (XML3D) first, which then performs the rendering
>> and interaction. So you can think of this as an additional "Scene"
>> stage between "Mapping" and "Rendering".
>>
>> I think this is a different topic, but also with real-virtual
>> interaction, for example, how to facilitate nice simple authoring of
>> e.g. the real-virtual object mappings seems a fruitful enough angle
>> to think about a bit, perhaps as a case to help in understanding the
>> entity system & the different servers etc. For example, if there's a
>> component type 'real world link', the Interface Designer GUI shows
>> it automatically in the list of components; ppl can just add them to
>> their scenes and somehow then the system just works..
>>
>> I am not sure what you are getting at. But it would be great if the
>> Interface Designer would allow choosing such POI mappings from a
>> predefined catalog. It seems that Xflow can be used nicely for
>> generating the mapped scene elements from some input data, e.g.
>> using the same approach we use to provide basic primitives like
>> cubes or spheres in XML3D. Here they are not fixed, built-in tags as
>> in X3D but can actually be added by the developer as best fits.
>>
>> For generating more complex subgraphs we may have to extend the
>> current Xflow implementation. But it's at least a great starting
>> point to experiment with it.
>> Experiments and feedback would be very welcome here.
>>
>> I don't think these discussions are now hurt by us (currently)
>> having alternative renderers - the entity system, formats, sync and
>> the overall architecture are the same anyway.
>>
>> Well, some things only work in one branch and others only in the
>> other. So the above mechanism could not be used to visualize POIs in
>> the three.js branch, but we do not have all the features to
>> visualize Oulu (or whatever city) in the XML3D.js branch. This
>> definitely IS greatly limiting how we can combine the GEs into more
>> complex applications -- the ultimate goal of the orthogonal design
>> of this chapter.
>>
>> And it does not even work within the same chapter. It will be hard
>> to explain to Juanjo and others from FI-WARE (or the commission for
>> that matter).
>>
>> BTW, I just learned today that there is a smaller FI-WARE review
>> coming up soon. Let's see if we already have to present things
>> there. So far they have not explicitly asked us.
>>
>> Best,
>>
>> Philipp
>>
>> -Toni
>>
>> From an XML3D POV things could actually be quite "easy". It should
>> be rather simple to directly interface to the IoT GEs of FI-WARE
>> through REST via a new Xflow element. This would then make the data
>> available through <data> elements. Then you can use all the features
>> of Xflow to manipulate the scene based on the data. For example, we
>> are discussing building a set of visualization nodes that implement
>> common visualization metaphors, such as scatter plots, animations,
>> you name it. A new member of the lab, starting soon, wants to look
>> into this area.
>>
>> For acting on objects we have always used Web services attached to
>> the XML3D objects via DOM events.
>> Eventually, I >> believe we want a >> higher level input handling and processing >> framework >> but no one >> knows so far, how this should look like >> (we have >> some ideas but they >> are not well baked, any inpu is highly welcome >> here). This might or >> might not reuse some of the Xflow mechanisms. >> >> But how to implement RealVirtual >> Interaction is >> indeed an intersting >> discussion. Getting us all on the same >> page and >> sharing ideas and >> implementations is very helpful. Doing >> this on the >> same SW platform >> (without the fork that we currently have) >> would >> facilitate a >> powerful implementation even more. >> >> >> Thanks >> >> Philipp >> >> Am 23.10.2013 08:02, schrieb Tomi Sarni: >> >> ->Philipp >> /I did not get the idea why POIs are >> similar to >> ECA. At a very high >> level I see it, but I am not sure what >> it buys >> us. Can someone sketch >> that picture in some more detail?/ >> >> Well I suppose it becomes relevant at >> point when >> we are combining our >> GEs together. If the model can be >> applied in >> level of scene then >> down to >> POI in a scene and further down in >> sensor level, >> things can be >> more easily visualized. Not just in >> terms of >> painting 3D models but in >> terms of handling big data as well, more >> specifically handling >> relationships/inheritance. It also >> makes it easier >> to design a RESTful API as we have a >> common >> structure which to follow >> and also provides more opportunities >> for 3rd >> party developers to make >> use of the data for their own purposes. >> >> For instance >> >> ->Toni >> >> From point of sensors, the >> entity-component becomes >> device-sensors/actuators. A device may >> have an >> unique identifier and IP >> by which to access it, but it may also >> contain >> several actuators and >> sensors >> that are components of that device >> entity. >> Sensors/actuators >> themselves >> are not aware to whom they are >> interesting to. 
>> One client may use the >> sensor information differently to >> other client. >> Sensor/actuator service >> allows any other service to query using >> request/response method either >> by geo-coordinates (circle,square or >> complex >> shape queries) or perhaps >> through type+maxresults and service >> will return >> entities and their >> components >> from which the reqester can form logical >> groups(array of entity uuids) >> and query more detailed information >> based on >> that logical group. >> >> I guess there needs to be similar >> thinking done >> on POI level. I guess >> POI does not know which scene it >> belongs to. It >> is up to scene >> server to >> form a logical group of POIs (e.g. >> restaurants >> of oulu 3d city >> model). Then >> again the problem is that scene needs >> to wait >> for POI to query for >> sensors and form its logical groups >> before it >> can pass information to >> scene. This can lead to long wait >> times. But >> this sequencing problem is >> also something >> that could be thought. Anyways this is >> a common >> problem with everything >> in web at the moment in my opinnion. >> Services >> become intertwined. >> When a >> client loads a web page there can be >> queries to >> 20 different services >> for advertisment and other stuff. Web page >> handles it by painting stuff >> to the client on receive basis. I >> think this >> could be applied in Scene >> as well. >> >> >> >> >> >> On Wed, Oct 23, 2013 at 8:00 AM, >> Philipp Slusallek >> > >> > > >> > __df__ki.de >> > >> >> > __df__ki.de >> >> > >>> wrote: >> >> Hi, >> >> First of all, its certainly a good >> thing to >> also meet locally. I was >> just a bit confused whether that >> meeting >> somehow would involve us as >> well. Summarizing the results >> briefly for >> the others would >> definitely be interesting. >> >> I did not get the idea why POIs >> are similar >> to ECA. At a very high >> level I see it, but I am not sure >> what it >> buys us. 
>> Can someone sketch that picture in some more detail?
>>
>> BTW, what is the status of the rendering discussion (Three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided.
>>
>> BTW, as part of our shading work (which is shaping up nicely), Felix has lately been looking at a way to describe rendering stages (passes) essentially through Xflow. It is still very experimental, but he is using it to implement shadow maps right now.
>>
>> @Felix: Once this has converged into a bit more stable idea, it would be good to post it here to get feedback. The way we discussed it, this approach could form a nice basis for a modular design of advanced rasterization techniques (reflection maps, adv. face rendering, SSAO, lens flare, tone mapping, etc.), and (later) maybe also describe global illumination settings (similar to our work on LightingNetworks some years ago).
>>
>> Best,
>>
>> Philipp
>>
>> Am 22.10.2013 23:03, schrieb toni at playsign.net:
>>
>> Just a brief note: we had some interesting preliminary discussion triggered by how the data schema that Ari O. presented for the POI system seemed at least partly similar to what the Real-Virtual interaction work had resulted in too -- and in fact about how the proposed POI schema was basically a version of the entity-component model which we've already been using for scenes in realXtend (it is inspired by / modeled after it, Ari told). So it can be much related to the Scene API work in the Synchronization GE too. As the action point we agreed that Ari will organize a specific work session on that.
>> I was now thinking that it perhaps at least partly leads back to the question: how do we define (and implement) component types? I.e. what was mentioned in that entity-system post a few weeks back (with links to reX IComponent etc.). I mean: if functionality such as POIs and real-world interaction makes sense as somehow resulting in custom data component types, does it mean that a key part of the framework is a way for those systems to declare their types .. so that it integrates nicely into the whole we want? I'm not sure, too tired to think it through now, but anyhow just wanted to mention that this was one topic that came up.
>>
>> I think Web Components is again something to check - as in XML terms reX Components are xml(3d) elements .. just ones that are usually in a group (according to the reX entity <-> xml3d group mapping). And Web Components are about defining & implementing new elements (as Erno pointed out in a different discussion about xml-html authoring in the session).
>>
>> BTW, thanks Kristian for the great comments in that entity-system thread - it was really good to learn about the alternative attribute access syntax and the validation in XML3D(.js).
>>
>> ~Toni
>>
>> P.S. For (Christof &) the DFKI folks: I'm sure you understand the rationale of these Oulu meets -- the idea is ofc not to exclude you from the talks, but it just makes sense for us to meet live too as we are in the same city after all etc. -- naturally with the DFKI team you also talk there locally. Perhaps it is a good idea that we make notes so that we can post them e.g. here (I'm not volunteering though!).
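The Web Components point above - systems declaring their own component types so the framework can integrate them - could be sketched roughly like this. All names are hypothetical; in a browser, `customElements.define()` would play the role of the registry below:

```typescript
// Minimal component-type registry in the spirit of customElements.define():
// a system (POI, real-virtual interaction, ...) declares its type once, and
// the framework can then instantiate components of that type by name.

type ComponentFactory = (attrs: Record<string, string>) => Record<string, unknown>;

class ComponentRegistry {
  private factories = new Map<string, ComponentFactory>();

  // Analogous to customElements.define("rex-poi", PoiElement) in the browser.
  define(typeName: string, factory: ComponentFactory): void {
    if (this.factories.has(typeName)) {
      throw new Error(`component type already declared: ${typeName}`);
    }
    this.factories.set(typeName, factory);
  }

  create(typeName: string, attrs: Record<string, string>): Record<string, unknown> {
    const factory = this.factories.get(typeName);
    if (!factory) throw new Error(`unknown component type: ${typeName}`);
    return factory(attrs);
  }
}

// The POI system declares its type; entities (xml3d <group> elements in the
// reX mapping) could then carry instances of it.
const registry = new ComponentRegistry();
registry.define("rex-poi", (attrs) => ({
  name: attrs["name"] ?? "",
  category: attrs["category"] ?? "unspecified",
}));

const poi = registry.create("rex-poi", { name: "Cafe X", category: "restaurant" });
```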
>> Also, the now agreed bi-weekly setup on Tuesdays luckily works so that we can then summarize fresh in the global Wed meetings and continue the talks etc.
>>
>> *From:* Erno Kuusela
>> *Sent:* Tuesday, October 22, 2013 9:57 AM
>> *To:* Fiware-miwi
>>
>> Kari from CIE offered to host it this time, so see you there at 13:00.
>>
>> Erno
>>
>> _______________________________________________
>> Fiware-miwi mailing list
>> Fiware-miwi at lists.fi-ware.eu
>> https://lists.fi-ware.eu/listinfo/fiware-miwi
>>
>> --
>> -------------------------------------------------------------------------
>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
>> Trippstadter Strasse 122, D-67663 Kaiserslautern
>>
>> Geschäftsführung:
>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>> Dr. Walter Olthoff
>> Vorsitzender des Aufsichtsrats:
>> Prof. Dr. h.c. Hans A. Aukes
>>
>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
>> -------------------------------------------------------------------------
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi

--
-------------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern

Geschäftsführung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes

Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
-------------------------------------------------------------------------

-------------- next part --------------
A non-text attachment was scrubbed...
Name: slusallek.vcf
Type: text/x-vcard
Size: 456 bytes
Desc: not available
URL:

From Philipp.Slusallek at dfki.de Fri Oct 25 10:51:13 2013
From: Philipp.Slusallek at dfki.de (Philipp Slusallek)
Date: Fri, 25 Oct 2013 10:51:13 +0200
Subject: [Fiware-miwi] 13:00 meeting location: CIE (Re: Oulu meet today 13:00)
In-Reply-To: <526A2BC4.7080005@dfki.de>
References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <52692BAC.8050600@dfki.de> <526949CA.5080402@dfki.de> <52696BD5.8070007@dfki.de> <526A0FB1.9000709@dfki.de> <526A2BC4.7080005@dfki.de>
Message-ID: <526A3101.3000300@dfki.de>

Hi,

I would love to get very concrete and clear feedback on the NGI GEs from FI-WARE. We are having an Architecture Board meeting next week, and I could still put this on the agenda. As you know, a lot of people see 3D worlds and IoT as a perfect combination and want to see things happening. So we should tell them how to improve things to make it possible/easier. Maybe you can set up a Google doc where we summarize this?

Best,

Philipp

Am 25.10.2013 10:34, schrieb Tomi Sarni:
> Yes, I agree in general. Just a thought that in some use-cases this could be considered as an option. It has been difficult to design the API in a way that is highly dynamic, in the sense that it would suit a wide variety of application development needs. The NGSI 9/10 development in the earlier GE seemed difficult to adapt and, in my personal opinion, does not allow passing the interaction interface clearly enough to the application developer.
>
> On Fri, Oct 25, 2013 at 11:28 AM, Philipp Slusallek wrote:
>
> Hi,
>
> With interaction I mean the user interaction. Yes, it eventually gets mapped to REST (or similar) calls to the device. But how you map the device functionality to user interaction is a big step, where different applications will have very different assumptions and interaction metaphors.
> Mapping them all to a generic sensor model seems very difficult.
>
> Using a semantic annotation avoids having to create such a mapping when you design the sensor, avoids having to store the model on each sensor, and pushes the mapping to the software/application side, which is (in my opinion) in a much better position to decide on that mapping. A fallback mapping may still be provided by the sensor for the most basic cases.
>
> Best,
>
> Philipp
>
> Am 25.10.2013 09:05, schrieb Tomi Sarni:
>
> /It becomes more of an issue when we talk about interactivity, when the visual representation needs to react to user input in a way that is consistent with the application and calls functionality in the application. In other words, you have to do a mapping from the sensor to the application at some point along the pipeline (and back for actions to be performed by an actuator)./
>
> Currently, when a client polls a device (containing sensors and/or actuators), it will receive all interaction options that are available for the particular sensor or actuator. These options can then be accessed by an HTTP POST method from the service. So there is the logical mapping. I can see your point though: in a way it would seem logical to have that XML3D model contain states (e.g. button-up and button-down 3D model states), and I have no idea whether this is supported by XML3D, as I have been busy on the server/sensor side. This way, when a sensor is accessed by an HTTP POST call to change state to either on or off, for instance, the XML3D model could contain transition logic to change its appearance from one state to another. Alternatively, there can be two models for two states. When the actuator is queried, it will return the model that corresponds to its current state.
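Tomi's "two models for two states" alternative might look like this in outline. The class and method names below are stand-ins for the device's HTTP endpoints, not a real XML3D or sensor API:

```typescript
// An actuator holds one visual model per state; a POST-style call flips the
// state, and a GET-style query returns the model matching the current state,
// so clients always see a representation consistent with the device.

type ActuatorState = "on" | "off";

class ActuatorResource {
  private state: ActuatorState = "off";

  constructor(
    private models: Record<ActuatorState, string> // e.g. URLs of XML3D models
  ) {}

  // Stand-in for "HTTP POST /state" on the device.
  post(newState: ActuatorState): void {
    this.state = newState;
  }

  // Stand-in for "HTTP GET /model": representation for the current state.
  getModel(): string {
    return this.models[this.state];
  }
}

const lamp = new ActuatorResource({
  off: "models/button-up.xml",
  on: "models/button-down.xml",
});

const before = lamp.getModel(); // model for the initial "off" state
lamp.post("on");
const after = lamp.getModel();  // model for the "on" state
```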
> On Fri, Oct 25, 2013 at 9:29 AM, Philipp Slusallek wrote:
>
> Hi Tomi,
>
> Yes, this is definitely an interesting option, and when sensors offer RESTful interfaces it should be almost trivial to add (once a suitable and standardized way of finding that data is specified). At least it would provide a kind of default visualization in case no other is available.
>
> It becomes more of an issue when we talk about interactivity, when the visual representation needs to react to user input in a way that is consistent with the application and calls functionality in the application. In other words, you have to do a mapping from the sensor to the application at some point along the pipeline (and back for actions to be performed by an actuator).
>
> Either we specify the sensor type through some semantic means (a simple tag in the simplest case, a full RDF/a graph in the best case) and let the application choose how to represent it, or we need to find a way to map generic behavior of a default object to application functionality. The first seems much easier to me, as application functionality is likely to vary much more than sensor functionality. And semantic sensor descriptions have been worked on for a long time and are available on the market.
>
> Of course, there are hybrid methods as well: a simple one would be to include a URI/URL to a default model in the semantic sensor description, which then gets loaded either from the sensor through REST (given some namespace there) or via the Web (again using some namespace or search strategy). Then the application can always inject its own mapping to what it thinks is the best mapping.
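The hybrid method described above - a default model URL carried in the semantic sensor description, always overridable by the application - could be sketched as follows. The field names and URLs are invented for illustration:

```typescript
// The sensor description carries a semantic tag (or richer RDF) plus a URI
// of a default visual model; the application's own mapping catalog wins,
// and the sensor's default is only a fallback.

interface SensorDescription {
  semanticType: string;     // simplest case: a tag like "temperature"
  defaultModelUrl: string;  // fallback visual, servable by the sensor or the Web
}

type MappingCatalog = Map<string, string>; // semantic type -> app-chosen model

function resolveModel(desc: SensorDescription, appMappings: MappingCatalog): string {
  // Application mapping first; sensor-provided default only as fallback.
  return appMappings.get(desc.semanticType) ?? desc.defaultModelUrl;
}

const desc: SensorDescription = {
  semanticType: "temperature",
  defaultModelUrl: "http://example.org/models/thermometer.xml",
};

const appMappings: MappingCatalog = new Map([["temperature", "app/heatmap.xml"]]);

const chosen = resolveModel(desc, appMappings); // application override wins
const fallback = resolveModel(desc, new Map()); // sensor's default model
```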
> Best,
>
> Philipp
>
> Am 25.10.2013 07:52, schrieb Tomi Sarni:
>
> *The following is completely on a theoretical level:*
>
> To mix things up a little further, I've been thinking about a possibility to store the visual representation of sensors within the sensors themselves. Many sensor types allow HTTP POST/GET or even PUT/DELETE methods (wrapped in SNMP/CoAP communication protocols, for instance), which in theory would allow sensor subscribers to also publish information to sensors (e.g. upload an XML3D model). This approach could be useful in cases where these sensors have different purposes of use. But the sensor may have very little space to use for the model, perhaps only 8-18 KB. Also, the web service can attach the models to IDs through the use of a database. This is really just a pointer; perhaps there would be use-cases where the sensor visualization could be stored within the sensor itself. I think specifically some AR solutions could benefit from this. But do not let this mix things up: this perhaps reinforces the fact that there need to be overlaying middleware services that attach a visual representation based on their own needs. One service could use a different 3D representation for a temperature sensor than another one.
>
> On Thu, Oct 24, 2013 at 9:49 PM, Philipp Slusallek wrote:
>
> Hi,
>
> OK, now I get it. This does make sense -- at least in a local scenario, where the POI data (in this example) needs to be stored somewhere anyway, and storing it in a component and then generating the appropriate visual component does make sense. Using web components or a similar mechanism we could actually do the same via the DOM (as discussed for the general ECA sync before).
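The storage-on-the-sensor idea, with the small budget Tomi mentions (8-18 KB), might be sketched like this. This is a stand-in for a PUT over CoAP/HTTP, not a real protocol binding:

```typescript
// A constrained sensor that accepts an uploaded visual model only if it
// fits its tiny storage budget; oversized uploads are rejected, which is
// one reason a middleware service holding models by ID remains necessary.

class ConstrainedSensorStore {
  private model: Uint8Array | null = null;

  constructor(private capacityBytes: number) {}

  // Stand-in for "HTTP PUT /model" (possibly wrapped in CoAP).
  put(model: Uint8Array): boolean {
    if (model.byteLength > this.capacityBytes) return false; // too big: reject
    this.model = model;
    return true;
  }

  // Stand-in for "HTTP GET /model".
  get(): Uint8Array | null {
    return this.model;
  }
}

const sensor = new ConstrainedSensorStore(8 * 1024); // pessimistic 8 KB budget
const tiny = new TextEncoder().encode("<model>...</model>");
const ok = sensor.put(tiny);      // fits within the budget
const big = new Uint8Array(32 * 1024);
const rejected = sensor.put(big); // exceeds the budget, not stored
```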
> But even then you might actually not want to store all the POI data, but only the part that really matters to the application (there may be much more data -- maybe not for POIs, but potentially for other things).
>
> Also, in a distributed scenario I am not so sure. In that case you might want to do that mapping on the server and only sync the resulting data, maybe with a reference back so you can still interact with the original data through a service call. That is the main reason why I in general think of POI data and POI representation as separate entities.
>
> Regarding terminology, I think it does make sense to differentiate between the 3D scene and the application state (which is not directly influencing the 3D rendering and interaction). While you store them within the same data entity (but in different components), they still refer to quite different things and are operated on by different parts of your program (e.g. the renderer only ever touches the "scene" data). We do the same within the XML3D core, where we attach renderer-specific data to DOM nodes, and I believe three.js also does something similar within its data structures. In the end, you have to store these things somewhere, and there are only so many ways to implement it. The differences are not really that big.
>
> Best,
>
> Philipp
>
> Am 24.10.2013 19:24, schrieb Toni Alatalo:
>
> On 24 Oct 2013, at 19:24, Philipp Slusallek wrote:
>
>> Good discussion!
>
> I find so too - thanks for the questions and comments and all!
> Now briefly about just one point:
>
>> Am 24.10.2013 17:37, schrieb Toni Alatalo:
>>
>>> integrates to the scene system too - for example, if a scene server queries POI services, does it then only use the data to manipulate the scene using other non-POI components, or does it often make sense also to include POI components in the scene, so that the clients get it automatically with the scene sync too and can for example provide POI-specific GUI tools? Ofc clients can query POI services directly too, but this server-centric setup is also one scenario, and there the scene integration might make sense.
>>
>> But I would say that there is a clear distinction between the POI data (which you query from some service) and the visualization or representation of the POI data. Maybe you are more talking about the latter here. However, there really is an application-dependent mapping from the POI data to its representation. Each application may choose to present the same POI data in a very different way, and it is only this resulting representation that becomes part of the scene.
>
> No, I was not talking about visualization or representations here, but about the POI data.
>
> "non-POI" in the above tried to refer to the whole, which covers visualisations etc. :)
>
> Your last sentence may help to understand the confusion: in these posts I've been using the reX entity-system terminology only - hoping that it is clear to discuss that way and not mix terms (like I've tried to do in some other threads).
>
> There, "scene" does not refer to a visual / graphical or any other type of scene. It does not refer to e.g. something like what xml3d.js and three.js, or Ogre, have as their Scene objects.
>
> It simply means the collection of all entities. There it is perfectly valid to have any kind of data which does not end up in e.g. the visual scene -
> many components are like that.
>
> So in the above, "only use the data to manipulate the scene using other non-POI components" was referring to, for example, the creation of Mesh components if some POI is to be visualised that way - the mapping that you were discussing.
>
> But my point was not about that, but about the POI data itself - and the example about some end-user GUI with a widget that manipulates it. So it then gets automatically synchronised along with all the other data in the application, in a collaborative setting etc.
>
> Stepping out of the previous terminology, we could perhaps translate: "scene" -> "application state" and "scene server" -> "synchronization server".
>
> I hope this clarifies something - my apologies if not..
>
> Cheers,
> ~Toni
>
> P.S. I sent the previous post from a foreign device and accidentally with my gmail address as sender, so it didn't make it to the list - so thank you for quoting it in full, so I don't think we need to repost it :)
>
>> This is essentially the Mapping stage of the well-known Visualization pipeline (http://www.infovis-wiki.net/index.php/Visualization_Pipeline), except that here we also map interaction aspects to an abstract scene description (XML3D) first, which then performs the rendering and interaction. So you can think of this as an additional "Scene" stage between "Mapping" and "Rendering".
>
> I think this is a different topic, but also with real-virtual interaction, for example how to facilitate nice, simple authoring of e.g. the real-virtual object mappings seems a fruitful enough angle to think about a bit, perhaps as a case to help in understanding the entity system & the different servers etc.
> For example, if there's a component type 'real world link', the Interface Designer GUI shows it automatically in the list of components; ppl can just add them to their scenes and somehow then the system just works..
>
>> I am not sure what you are getting at. But it would be great if the Interface Designer would allow choosing such POI mappings from a predefined catalog. It seems that Xflow can be used nicely for generating the mapped scene elements from some input data, e.g. using the same approach we use to provide basic primitives like cubes or spheres in XML3D. Here they are not fixed, built-in tags as in X3D, but can actually be added by the developer as best fits.
>>
>> For generating more complex subgraphs we may have to extend the current Xflow implementation. But it's at least a great starting point to experiment with. Experiments and feedback would be very welcome here.
>
> I don't think these discussions are now hurt by us (currently) having alternative renderers - the entity system, formats, sync and the overall architecture are the same anyway.
>
>> Well, some things only work in one branch and others only in the other. So the above mechanism could not be used to visualize POIs in the three.js branch, but we do not have all the features to visualize Oulu (or whatever city) in the XML3D.js branch. This definitely IS greatly limiting how we can combine the GEs into more complex applications -- the ultimate goal of the orthogonal design of this chapter.
>>
>> And it does not even work within the same chapter. It will be hard to explain to Juanjo and others from FI-WARE (or the commission, for that matter).
>>
>> BTW, I just learned today that there is a smaller FI-WARE review coming up soon. Let's see if we already have to present things there. So far they have not explicitly asked us.
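The 'real world link' idea above - a system declares a component type with a little editor metadata, and the Interface Designer lists it automatically - might look like this in outline. Every name here is hypothetical, not an actual Interface Designer API:

```typescript
// Component-type declarations carry just enough metadata for an editor GUI
// to list them; users then add instances to entities without the GUI
// knowing anything system-specific.

interface ComponentTypeDecl {
  typeName: string;
  editorLabel: string;                  // what the designer GUI shows in its list
  defaultAttrs: Record<string, string>; // initial values for a fresh instance
}

const componentCatalog: ComponentTypeDecl[] = [];

function declareComponentType(decl: ComponentTypeDecl): void {
  componentCatalog.push(decl);
}

// The real-virtual interaction system declares its type once...
declareComponentType({
  typeName: "real-world-link",
  editorLabel: "Real world link",
  defaultAttrs: { deviceUri: "" }, // left empty until the user fills it in
});

// ...and the designer GUI simply enumerates the catalog:
const labels = componentCatalog.map((d) => d.editorLabel);
```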
>> Best,
>>
>> Philipp
>
> -Toni
>
>> From an XML3D POV things could actually be quite "easy". It should be rather simple to directly interface to the IoT GEs of FI-WARE through REST via a new Xflow element. This would then make the data available through elements. Then you can use all the features of Xflow to manipulate the scene based on the data. For example, we are discussing building a set of visualization nodes that implement common visualization metaphors, such as scatter plots, animations, you name it. A new member of the lab, starting soon, wants to look into this area.
>>
>> For acting on objects we have always used Web services attached to the XML3D objects via DOM events.
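The "new Xflow element interfacing the IoT GEs through REST" idea could be sketched as a fetch step feeding a pure Xflow-style operator. The REST call is stubbed and the operator name and data shapes are invented for illustration:

```typescript
// A REST-backed data source produces raw sensor records; a pure operator
// (in the spirit of an Xflow dataflow node) transforms them into per-entity
// scene data, e.g. a normalized heat value for a temperature scatter plot.

interface SensorRecord {
  id: string;
  celsius: number;
}

// In a real element this would be the REST call to the IoT GE; stubbed here.
function fetchSensorData(): SensorRecord[] {
  return [
    { id: "s1", celsius: -5 },
    { id: "s2", celsius: 25 },
  ];
}

// Xflow-like pure operator: map temperature onto a normalized [0, 1] range,
// clamped at the ends, ready to drive e.g. a color ramp in the scene.
function mapHeat(records: SensorRecord[], min = -20, max = 40): number[] {
  return records.map((r) =>
    Math.min(1, Math.max(0, (r.celsius - min) / (max - min)))
  );
}

const heat = mapHeat(fetchSensorData());
// heat[0] = (-5 + 20) / 60 = 0.25; heat[1] = (25 + 20) / 60 = 0.75
```

Because the operator is pure, it composes with further visualization nodes the same way built-in Xflow operators chain.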
Not > just in > terms of > painting 3D models but in > terms of handling big data as > well, more > specifically handling > relationships/inheritance. It also > makes it easier > to design a RESTful API as we > have a common > structure which to follow > and also provides more > opportunities > for 3rd > party developers to make > use of the data for their own > purposes. > > For instance > > ->Toni > > From point of sensors, the > entity-component becomes > device-sensors/actuators. A > device may > have an > unique identifier and IP > by which to access it, but it > may also > contain > several actuators and > sensors > that are components of that > device entity. > Sensors/actuators > themselves > are not aware to whom they are > interesting to. > One client may use the > sensor information differently > to other > client. > Sensor/actuator service > allows any other service to > query using > request/response method either > by geo-coordinates > (circle,square or > complex > shape queries) or perhaps > through type+maxresults and > service > will return > entities and their > components > from which the reqester can > form logical > groups(array of entity uuids) > and query more detailed > information > based on > that logical group. > > I guess there needs to be similar > thinking done > on POI level. I guess > POI does not know which scene it > belongs to. It > is up to scene > server to > form a logical group of POIs (e.g. > restaurants > of oulu 3d city > model). Then > again the problem is that > scene needs > to wait > for POI to query for > sensors and form its logical > groups > before it > can pass information to > scene. This can lead to long wait > times. But > this sequencing problem is > also something > that could be thought. Anyways > this is > a common > problem with everything > in web at the moment in my > opinnion. > Services > become intertwined. 
> When a > client loads a web page there > can be > queries to > 20 different services > for advertisment and other > stuff. Web page > handles it by painting stuff > to the client on receive > basis. I think > this > could be applied in Scene > as well. > > > > > > On Wed, Oct 23, 2013 at 8:00 AM, > Philipp Slusallek > > > > __df__ki.de > >> > > >__d__f__ki.de > > > __df__ki.de > >>> > > >__d__f__ki.de > > > > __df__ki.de > >>>> wrote: > > Hi, > > First of all, its > certainly a good > thing to > also meet locally. I was > just a bit confused > whether that > meeting > somehow would involve us as > well. Summarizing the results > briefly for > the others would > definitely be interesting. > > I did not get the idea why > POIs are > similar > to ECA. At a very high > level I see it, but I am > not sure > what it > buys us. Can someone > sketch that picture in > some more > detail? > > BTW, what is the status > with the > Rendering > discussion (Three.js vs. > xml3d.js)? I still have > the feeling > that we > are doing parallel work > here that should probably > be avoided. > > BTW, as part of our > shading work > (which is > shaping up nicely) Felix > has been looking lately at > a way to > describe > rendering stages > (passes) essentially > through Xflow. > It is > still very experimental > but he is using it to > implement > shadow maps > right now. > > @Felix: Once this has > converged > into a bit > more stable idea, it > would be good to post this > here to get > feedback. The way we > discussed it, this > approach could > form a > nice basis for a modular > design of advanced > rasterization > techniques > (reflection maps, adv. > face rendering, SSAO, lens > flare, tone > mapping, etc.), and (later) > maybe also describe global > illumination > settings (similar to our > work on LightingNetworks > some years > ago). 
> > > Best, > > Philipp > > Am 22.10.2013 23:03, schrieb > toni at playsign.net > > > > >> > > > > > >>> > > > > > > >>>: > > Just a brief note: we > had some > interesting preliminary > discussion > triggered by how the data > schema that > Ari O. presented for > the POI > system seemed at least > partly > similar to > what the Real-Virtual > interaction work had > resulted > in too -- > and in fact about > how the > proposed POI schema > was basically a > version of the > entity-component > model which we?ve > already been > using for > scenes in realXtend > (it is > inspired by / modeled > after it, Ari > told). So it can be much > related to > the Scene API work in the > Synchronization GE too. As the > action > point we > agreed that Ari will > organize a > specific > work session on that. > I was now thinking that it > perhaps at > least partly leads > back to the > question: how do we > define (and > implement) component types. I.e. > what > was mentioned in that > entity-system post > a few weeks back (with > links > to reX IComponent > etc.). I mean: if > functionality such as > POIs and > realworld interaction > make sense as > somehow resulting in > custom data > component types, does > it mean > that a key > part of the framework > is a way > for those systems to > declare > their types > .. so that it > integrates nicely > for the whole we want? > I?m not > sure, too > tired to think it > through now, > but anyhow just wanted to > mention that > this was one topic that > came up. > I think Web Components > is again > something to check - as in XML > terms reX > Components are xml(3d) > elements > .. just > ones that are usually in > a group > (according to the reX > entity > <-> xml3d > group mapping). And Web > Components are about > defining & > implementing new elements > (as Erno > pointed out in a different > discussion > about xml-html authoring > in the > session). 
> BTW Thanks Kristian > for the great > comments in that entity system > thread - was really > good to > learn about > the alternative > attribute access > syntax and the > validation in > XML3D(.js). > ~Toni > P.S. for (Christof &) > the DFKI > folks: > I?m sure you > understand the > rationale of these > Oulu meets > -- idea is > ofc not to exclude you > from the > talks but just makes > sense for > us to > meet live too as we are in > the same > city afterall etc -- > naturally > with the > DFKI team you also talk > there > locally. Perhaps is a > good idea > that we > make notes so that can > post e.g. > here then (I?m not > volunteering > though! > ?) . Also, the now > agreed > bi-weekly setup on > Tuesdays luckily > works so that we can then > summarize > fresh in the global Wed > meetings and > continue the talks etc. > *From:* Erno Kuusela > *Sent:* ?Tuesday?, > ?October? > ?22?, ?2013 > ?9?:?57? ?AM > *To:* Fiware-miwi > > > Kari from CIE offered > to host > it this > time, so see you there at > 13:00. > > Erno > > > _______________________________________________________ > > > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > > > > > > >> > > > > > > > >>> > > > > > > > >>> > https://lists.fi-ware.eu/________listinfo/fiware-miwi > > > > > > >> > > > > > > > >>> > > > > > _______________________________________________________ > > > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > > > > > > >> > > > > > > > >>> > > > > > > > >>> > https://lists.fi-ware.eu/________listinfo/fiware-miwi > > > > > >> > > > > > > > > >>> > > > > -- > > > > > ------------------------------________------------------------__--__--__--__------------- > > > Deutsches > Forschungszentrum f?r > K?nstliche > Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, > D-67663 > Kaiserslautern > > Gesch?ftsf?hrung: > Prof. Dr. Dr. h.c. > mult. Wolfgang > Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des > Aufsichtsrats: > Prof. Dr. h.c. Hans A. 
Aukes
>
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
> ---------------------------------------------------------------------------
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi

-- 
-------------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern

Geschäftsführung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr.
Walter Olthoff
Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes

Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
---------------------------------------------------------------------------
-------------- next part --------------
A non-text attachment was scrubbed...
Name: slusallek.vcf
Type: text/x-vcard
Size: 456 bytes
Desc: not available
URL: 

From Philipp.Slusallek at dfki.de  Fri Oct 25 11:35:48 2013
From: Philipp.Slusallek at dfki.de (Philipp Slusallek)
Date: Fri, 25 Oct 2013 11:35:48 +0200
Subject: [Fiware-miwi] Google Web Designer
Message-ID: <526A3B74.8040204@dfki.de>

Hi,

Have you seen Google's "Web Designer" tool for Web-based Web authoring? It even allows downloading of the resulting Web page in a zip file for easy deployment.

It seems that this is exactly what we are looking for for our "Interface Designer" GE -- just for 3D scenes. We can certainly reuse many of their ideas but maybe also some of their code. Ideally, we could just use their SW and add 3D to it, but I guess that will not be possible for technical and legal reasons. But still.

I am not sure what the current state of things is with respect to this GE. But maybe we can start with some simple examples, get them to work, and then incrementally add new features as needed (very agile :-). What do you think?

One other thing: Would it make sense to have a compact list of features (backlog) somewhere where we can easily see what is planned next and what has been done already? I do not necessarily mean Forge but just a simple Web page with a table or so, as we would only use it internally.

Best,

Philipp

-- 
-------------------------------------------------------------------------
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern

Geschäftsführung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c.
Hans A. Aukes

Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
---------------------------------------------------------------------------
-------------- next part --------------
A non-text attachment was scrubbed...
Name: slusallek.vcf
Type: text/x-vcard
Size: 441 bytes
Desc: not available
URL: 

From lasse.oorni at ludocraft.com  Fri Oct 25 16:48:35 2013
From: lasse.oorni at ludocraft.com ("Lasse Öörni")
Date: Fri, 25 Oct 2013 17:48:35 +0300
Subject: [Fiware-miwi] Initial Synchronization GE code
Message-ID: 

Hi,
the initial code for both the server (realXtend Tundra plugin) and client (JavaScript) parts of the Synchronization GE has been pushed to GitHub. It will naturally be subject to heavy refactoring.

Server:
https://github.com/realXtend/naali/tree/tundra2/src/Application/WebSocketServerModule

Client:
https://github.com/realXtend/WebTundraNetworking

The WebSocket server's initial code was contributed to open source by Adminotech, thanks to them!

The SceneAPI part of the GE will likely become another Tundra plugin. For that, HTTP server functionality is required. This can be realized with the same websocketpp library that the WebSocket server already uses.

-- 
Lasse Öörni
Game Programmer
LudoCraft Ltd.

From mach at zhaw.ch  Sat Oct 26 14:42:48 2013
From: mach at zhaw.ch (Marti Christof (mach))
Date: Sat, 26 Oct 2013 12:42:48 +0000
Subject: [Fiware-miwi] Properly joining FI-WARE and WP13
Message-ID: <6306BE90-B79D-4537-9466-8E912EBAB77F@zhaw.ch>

Hi everybody

This week we got some requests to join the WP13 Forge project, but without joining FI-WARE completely.

To join FI-WARE properly, new members should also request access to some other required projects and tools:
- At least join the main FI-WARE project and join the fiware at lists.fi-ware.eu mailing-list
- They should also join "FI-WARE PPP"
to access infos shared with the Use Case projects and the EC
- If you are involved in other FI-WARE WPs, you should also apply for those Forge projects
- If you work with the testbed/fi-lab (e.g. setting up VMs, blueprints), you should also join the "FI-WARE testbed" project

If somebody requests access to the FI-WARE MiWI forge (and is eligible to join), I will automatically add him/her also to the FI-WARE Private forge project and to the fiware-miwi at lists.fi-ware.eu mailing list.

Please instruct people of your organization to follow these rules.

See the following presentation and/or the following video about the usage of the available tools, projects, etc.
Presentation: https://forge.fi-ware.eu/docman/view.php/27/1982/FI-WARE-Forge+and+tools.pptx
Video: https://dl.dropboxusercontent.com/u/25916180/FI_WARE/Forge_Induction_OpenCall1_partners.mp4

Thanks
- Christof
----
InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch
Institute of Applied Information Technology - InIT
Zurich University of Applied Sciences - ZHAW
School of Engineering
P.O. Box, CH-8401 Winterthur
Office: TD O3.18, Obere Kirchgasse 2
Phone: +41 58 934 70 63
Mail: mach at zhaw.ch
Skype: christof-marti

From mach at zhaw.ch  Sat Oct 26 16:01:16 2013
From: mach at zhaw.ch (Marti Christof (mach))
Date: Sat, 26 Oct 2013 14:01:16 +0000
Subject: [Fiware-miwi] Properly joining FI-WARE and WP13
In-Reply-To: <663f3b65646f4fa4abc318f39273a9b1@SRV-MAIL-001.zhaw.ch>
References: <663f3b65646f4fa4abc318f39273a9b1@SRV-MAIL-001.zhaw.ch>
Message-ID: <8B8FA047-1C84-47CA-9D19-4F97114562C1@zhaw.ch>

Hi

A small extension to my previous mail regarding joining mailing-lists.

To subscribe to a mailing-list you have to:
- contact the owner (e.g.
WPL for chapter mailing-lists)
- or go to http://lists.fi-ware.eu, select the list and fill in the form (not all mailing-lists are listed here)
- or send an email from your email address to <listname>-join at lists.fi-ware.eu or <listname>-subscribe at lists.fi-ware.eu (subject and body will be ignored)

Details on the mailman interface see here: http://www.list.org/mailman-member/node8.html

Cheers,
- Christof

Am 26.10.2013 um 14:42 schrieb Marti Christof (mach):

> Hi everybody
>
> This week we got some requests to join the WP13 Forge project, but without joining FI-WARE completely.
>
> To join FI-WARE properly, new members should also request access to some other required projects and tools:
> - At least join the main FI-WARE project and join the fiware at lists.fi-ware.eu mailing-list
> - They should also join "FI-WARE PPP" to access infos shared with the Use Case projects and the EC
> - If you are involved in other FI-WARE WPs, you should also apply for these Forge projects
> - If you work with the testbed/fi-lab (e.g. setting up VMs, blueprints), you should also join the "FI-WARE testbed" project
>
> If somebody requests access to the FI-WARE MiWI forge (and is eligible to join), I will automatically add him/her also to the FI-WARE Private forge project and to the fiware-miwi at lists.fi-ware.eu mailing list.
>
> Please instruct people of your organization to follow these rules.
>
> See the following presentation and/or the following video about the usage of the available tools, projects, etc.
> Presentation: https://forge.fi-ware.eu/docman/view.php/27/1982/FI-WARE-Forge+and+tools.pptx
> Video: https://dl.dropboxusercontent.com/u/25916180/FI_WARE/Forge_Induction_OpenCall1_partners.mp4
>
> Thanks
> -
Christof
> ----
> InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch
> Institute of Applied Information Technology - InIT
> Zurich University of Applied Sciences - ZHAW
> School of Engineering
> P.O. Box, CH-8401 Winterthur
> Office: TD O3.18, Obere Kirchgasse 2
> Phone: +41 58 934 70 63
> Mail: mach at zhaw.ch
> Skype: christof-marti
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi

From jonne at adminotech.com  Sat Oct 26 16:13:07 2013
From: jonne at adminotech.com (Jonne Nauha)
Date: Sat, 26 Oct 2013 17:13:07 +0300
Subject: [Fiware-miwi] Initial Synchronization GE code
In-Reply-To: 
References: 
Message-ID: 

I would recommend making a HttpServerPlugin that can be used from code by other modules to start listening to specific ports and register handlers to those server instances. This will be much more flexible and probably more performant than doing it via websocketpp, which is not intended for heavy HTTP traffic. We should let that library and port focus on processing the WebSocket networking.

As the Tundra core heavily utilizes Qt, QObject and the signal/slot mechanism, I recommend using this library: https://github.com/nikhilm/qhttpserver. I've found it to be the best thing out there currently made with Qt. I've used it to prototype Cloud Rendering GE stuff and I even contributed some code to fix some bugs :) Using some low-level lib and implementing the HTTP stuff on top of it would not be worth it: huge overhead to implement it and then wrap it all inside a Qt-style API for Tundra. Another option is to pick up a C/C++ library for an HTTP server like https://code.google.com/p/mongoose/. I tried that also and it was a mess to integrate cleanly with the Qt system. IMO the best option is to use Qt networking (QTcpSocket), and that's what the Qt library there does.
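[Editor's illustration] The HttpServerPlugin idea above -- one shared HTTP server that other modules register path handlers with -- can be sketched as follows. This is an illustrative Python sketch of the pattern only, using the standard library; the real Tundra plugin would be C++/Qt (e.g. on top of qhttpserver), and all names here are hypothetical.

```python
# Sketch of the handler-registration pattern: one shared API-style
# HTTP server; modules (e.g. a SceneAPI plugin) register callbacks
# per path instead of each opening its own port.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import threading
import urllib.request

_handlers = {}  # path -> zero-arg callable returning response bytes


def register_handler(path, func):
    """Called by other modules to claim an endpoint on the shared server."""
    _handlers[path] = func


class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        func = _handlers.get(self.path)
        if func is None:
            self.send_error(404)
            return
        body = func()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet


def start_server(port=0):
    """Start the shared server on an ephemeral port; returns the server."""
    server = ThreadingHTTPServer(("127.0.0.1", port), ApiHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


register_handler("/scene", lambda: b'{"entities": []}')
server = start_server()
url = "http://127.0.0.1:%d/scene" % server.server_address[1]
print(urllib.request.urlopen(url).read().decode())  # prints: {"entities": []}
```

The point of the pattern is the single registry: the WebSocket port stays dedicated to sync traffic while any number of plugins share one HTTP listener.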
It may not be the best thing out there in terms of performance and handling huge amounts of HTTP calls (e.g. hundreds/thousands per sec), but it is perfect for implementing an API-style HTTP server that is not intended to serve files from disk; if you need that, then you might want to go with mongoose or similar. Just my 2 cents.

Good work porting the websocketpp to 0.3.x! I'll give it a spin with WebRocket at some point and see if anything critical is broken :)

Best regards,
Jonne Nauha
Meshmoon developer at Adminotech Ltd.
www.meshmoon.com

On Fri, Oct 25, 2013 at 5:48 PM, "Lasse Öörni" wrote:

> Hi,
> the initial code for both the server (realXtend Tundra plugin) and client
> (JavaScript) parts of the Synchronization GE has been pushed to GitHub. It
> will naturally be subject to heavy refactoring.
>
> Server:
> https://github.com/realXtend/naali/tree/tundra2/src/Application/WebSocketServerModule
>
> Client:
> https://github.com/realXtend/WebTundraNetworking
>
> The WebSocket server's initial code was contributed to open source by
> Adminotech, thanks to them!
>
> The SceneAPI part of the GE will likely become another Tundra plugin. For
> that, HTTP server functionality is required. This can be realized with
> the same websocketpp library that the WebSocket server already uses.
>
> --
> Lasse Öörni
> Game Programmer
> LudoCraft Ltd.
>
> _______________________________________________
> Fiware-miwi mailing list
> Fiware-miwi at lists.fi-ware.eu
> https://lists.fi-ware.eu/listinfo/fiware-miwi

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jonne at adminotech.com  Sat Oct 26 16:46:07 2013
From: jonne at adminotech.com (Jonne Nauha)
Date: Sat, 26 Oct 2013 17:46:07 +0300
Subject: [Fiware-miwi] declarative layer & scalability work (Re: XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00)))
In-Reply-To: 
References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de>
Message-ID: 

I'm also a bit confused about the KIARA business. I think we have been pretty clear since applying for this FIWARE work that we are going to use realXtend Tundra as the server, and that imo implies our current protocol and the entity-component system. We already have a perfectly good Tundra protocol for what this project is set out to accomplish (a performant, extendable system). We also have all the experience in the world to implement the whole picture on the Tundra server and the web browser side. We have none of that with KIARA tech.

What would be the benefit exactly now to make the Tundra server talk to desktop clients with the Tundra protocol and then swap that out for KIARA communication when a WebSocket client connects? This would require all sorts of ugly code in our Tundra networking layer and the SyncManager involved with it. Surely this CAN be implemented, I just wonder why we should basically re-invent the wheel when we have things in place for the Tundra protocol.

I'm not involved in the actual GEs so I haven't participated in the talks either. I don't know enough about what is intended to be sent via the KIARA networking. But if it's the node types etc. that XML3D seems to support today, we are going to drop most of the nice components from the Tundra EC model to the floor and basically only communicate the ones that can be found from XML3D to the web clients. Is this the intent?
Or was the intent to send the Tundra protocol just via KIARA? If so, I still don't see the benefits. In my opinion we should send everything Tundra offers to the clients, and on the client side decide what is going to be pushed to the renderer (be it xml3d or threejs). Again, as Toni said, the networking is separated from the DOM interaction and rendering. We can send the Tundra protocol to the client and map the parts that we can and should from our Entity-Component system into the XML3D nodes etc.

P.S. We have documented a long time ago what should be found from XML3D so we can map things into it: https://forge.fi-ware.eu/plugins/mediawiki/wiki/miwi/index.php/RealXtend_Scene_and_EC_model. Granted, things like the water and sky components are probably not primitive types that need to be there, but things like these are important for 3D worlds. There is an easy and fast way in Tundra to say "I want water", and it's there and can be configured to your needs. On the web client this can map to creating an XML3D primitive and setting a nice shader and textures on it. But the end user should not have to describe his scene at such a low level.

P.P.S. These are my personal two cents / thoughts on the things discussed here, as I understand them. I'd like to keep the discussion going so we can all get to the same page.

Best regards,
Jonne Nauha
Meshmoon developer at Adminotech Ltd.
www.meshmoon.com

On Fri, Oct 25, 2013 at 6:02 AM, Toni Alatalo wrote:

> On 24 Oct 2013, at 16:28, Philipp Slusallek wrote:
>
> Continuing the, so far apparently successful, technique of clarifying a
> single point at a time, a note about scene declarations and a description of
> the scalability work:
>
> > I am not too happy that we are investing the FI-WARE resources into
> > circumventing the declarative layer completely.
>
> We are not doing that. realXtend has had a declarative layer for the past
> 4-5 years(*) and we totally depend on it -- that's not going away.
> The situation is totally the opposite: it is assumed to always be there.
> There's absolutely no work done anywhere to circumvent it somehow. [insert
> favourite 7th way of saying this].
>
> In my view the case with the current work on scene rendering scalability
> is this: We already have all the basics implemented and tested in some form
> - realXtend web client implementations (e.g. "WebTundra" in the form of
> Chiru-Webclient on GitHub, and other works) have complete entity systems
> integrated with networking and rendering. XML3D.js is the reference
> implementation for XML3D parsing, rendering etc. But one of the identified
> key parts missing was managing larger complex scenes. And that is a pretty
> hard requirement from the Intelligent City use case, which has been the
> candidate for the main integrated larger use case. IIRC scalability was
> also among the original requirements and proposals. Also Kristian stated
> here that he finds it a good area to work on now, so the basic motivation
> for the work seemed clear.
>
> So we tackled this straight on by first testing the behaviour of loading &
> unloading scene parts and then proceeded to implement a simple but
> effective scene manager. We're documenting that separately so I won't go
> into details here. So far it works even surprisingly well, which has been a
> huge relief during the past couple of days -- not only for us on the
> platform dev side but also for the modelling and application companies
> working with the city model here (I demoed the first version in a live meet
> on Wed). We'll post demo links soon (within days) as soon as we can confirm
> a bit more that the results seem conclusive. Now in general for the whole 3D
> UI and nearby GEs I think we have most of the parts (and the rest are
> coming) and "just" need to integrate..
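[Editor's illustration] The load/unload scene-part manager described in the quoted paragraphs above could look roughly like this. An illustrative Python sketch only: the actual WebTundra code is JavaScript and is not shown here, and the class name, radii, and hysteresis scheme are assumptions.

```python
import math

class ScenePartManager:
    """Loads scene parts near the camera and unloads distant ones.

    Two radii give hysteresis: a part loads when closer than load_radius
    and only unloads once farther than unload_radius, so a camera
    hovering at the boundary does not thrash load/unload."""

    def __init__(self, load_radius, unload_radius):
        assert unload_radius > load_radius
        self.load_radius = load_radius
        self.unload_radius = unload_radius
        self.parts = {}      # part id -> (x, z) ground-plane center
        self.loaded = set()

    def add_part(self, part_id, center):
        self.parts[part_id] = center

    def update(self, cam_x, cam_z):
        """Call on camera movement; returns (parts_to_load, parts_to_unload)."""
        to_load, to_unload = [], []
        for pid, (x, z) in self.parts.items():
            d = math.hypot(x - cam_x, z - cam_z)
            if pid not in self.loaded and d < self.load_radius:
                self.loaded.add(pid)
                to_load.append(pid)
            elif pid in self.loaded and d > self.unload_radius:
                self.loaded.remove(pid)
                to_unload.append(pid)
        return to_load, to_unload

mgr = ScenePartManager(load_radius=100, unload_radius=150)
mgr.add_part("city-block-1", (0.0, 0.0))
print(mgr.update(0.0, 0.0))    # -> (['city-block-1'], [])
print(mgr.update(120.0, 0.0))  # -> ([], []); between the radii, no thrash
```

The returned lists are where geometry/texture loading and freeing would be triggered, which is exactly the memory-management focus described in the following paragraph.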
> The point here is that in that work the focus is on the memory management
> of the rendering and the efficiency & non-blockingness of loading geometry
> data and textures for display. In my understanding that is orthogonal to
> scene declaration formats -- or networking, for that matter. In any case we
> get geometry and texture data to load and manage. An analogue (just to
> illustrate, not a real case): When someone works on improving the CPU
> process scheduler in the Linux kernel, he/she does not touch file system
> code. That does not mean that the improved scheduler proposes to remove
> file system support from Linux. Also, it is not investing resources into
> circumventing (your term) file systems -- even if in the scheduler dev it is
> practical to just create competing processes from code, and not load
> applications to execute from the file system. It is absolutely clear for
> the scheduler developer how filesystems are a part of the big picture but
> they are just not relevant to the task at hand.
>
> Again I hope this clarifies what's going on. Please note that I'm /not/
> addressing renderer alternatives and selection here *at all* -- only the
> relationship of the declarative layer and of the scalability work that you
> seemed to bring up in the sentence quoted in the beginning.
>
> > I suggest that we start to work on the shared communication layer using
> > the KIARA API (part of a FI-WARE GE) and add the code to make the relevant
> > components work in XML3D. Can someone put together a plan for this? We are
> > happy to help where necessary -- but from my point of view we need to do
> > this as part of the Open Call.
>
> I'm sorry, I don't get how this is related. Then again I was not in the
> KIARA session that one Wed morning -- Erno and Lasse were, so I can talk
> with them to get an understanding. Now I can't find a thought-path from
> renderer to networking here yet.. :o
>
> Also, I do need to (re-)read all these posts --
> so far have had mostly
> little timeslots to quickly clarify some basic miscommunications (like the
> poi data vs. poi data derived visualisations topic in the other thread, and
> the case with the declarative layer & scalability work in this one). I'm
> mostly not working at all this Friday though (am with kids) and also in
> general only work on fi-ware 50% of my work time (though I don't mind when
> both the share and the total times are more, this is business development!)
> so it can take a while from my part.
>
> > Philipp
>
> Cheers,
> ~Toni
>
> (*) "realXtend has had a declarative layer for the past 4-5 years": in
> the very beginning in 2007-2008 we didn't have it in the same way, due to
> how the first prototype was based on Opensimulator and the Second Life (tm)
> viewer. The only way to create a scene was, in technical terms, to send
> object creation commands over UDP to the server. Or write code to run in
> the server. That is how Second Life was originally built: people use the
> GUI client to build the worlds one object at a time and there was no
> support for importing nor exporting objects or scenes (people did write
> scripts to generate objects etc.). For us that was a terrible nightmare
> (ask anyone from Ludocraft who worked on the Beneath the Waves demo scene
> for reX 0.3 -- I was fortunate enough to not be involved in that period). As
> a remedy to that insanity I first implemented importing from Ogre's very
> simple .scene ("dotScene") format in the new Naali viewer (which later
> became the Tundra codebase). Then we could finally bring full scenes from
> Blender and Max. We were still using Opensimulator as the server then, and
> after my client-side prototype Mikko Pallari implemented dotScene import on
> the server side and we got an ok production solution. Nowadays
> Opensimulator has OAR files and likewise the community totally depends on
> those.
> On the reX side, Jukka Jylänki & Lasse wrote Tundra and we switched to
> it and the TXML & TBIN support there, which still seem ok as machine
> authored formats. We do support Ogre dotScene import in current Tundra too.
> And even Linden (the Second Life company) has gotten to support COLLADA
> import, I think mostly meant for single objects but IIRC it works for
> scenes too.
>
> Now XML3D seems like a good next step to get a human friendly (and perhaps
> just a more sane way to use XML in general) declarative format. It actually
> addresses an issue I created in our tracker 2 years ago, "xmlifying txml"
> https://github.com/realXtend/naali/issues/215 .. the draft in the gist
> linked from there is a bit more like xml3d than txml. I'm very happy that
> you've already made xml3d so we didn't have to try to invent it :)
>
> > Am 23.10.2013 09:51, schrieb Toni Alatalo:
> >> On Oct 23, 2013, at 8:00 AM, Philipp Slusallek <Philipp.Slusallek at dfki.de> wrote:
> >>
> >>> BTW, what is the status with the Rendering discussion (Three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided.
> >>
> >> I'm not aware of any overlapping work so far -- then again I'm not fully aware what all is up with xml3d.js.
> >>
> >> For the rendering for 3D UI, my conclusion from the discussion on this list was that it is best to use three.js now for the case of big complex fully featured scenes, i.e. typical realXtend worlds (e.g. Ludocraft's Circus demo or the Chesapeake Bay from the LVM project, a creative commons realXtend example scene). And in miwi in particular for the city model / app now. Basically because that's where we already got the basics working in the August-September work (and had earlier in realXtend web client codebases). That is why we implemented the scalability system on top of that too now -- scalability was the only thing missing.
> >> Until yesterday I thought the question was still open regarding XFlow
> >> integration. The latest information I got was that there was no hardware
> >> acceleration support for XFlow in XML3D.js either, so it seemed worth a
> >> check whether it's better to implement it for xml3d.js or for three.
> >>
> >> Yesterday, however, we learned from Cyberlightning that work on XFlow
> >> hardware acceleration was already on-going in xml3d.js (I think mostly by
> >> DFKI so far?). And that it was decided that work within fi-ware now is
> >> limited to that (and we also understood that the functionality will be
> >> quite limited by April, or?).
> >>
> >> This obviously affects the overall situation.
> >>
> >> At least in an intermediate stage this means that we have two renderers
> >> for different purposes: three.js for some apps, without XFlow support, and
> >> xml3d.js for others, with XFlow but other things missing. This is certain
> >> because that is the case today and probably in the coming weeks at least.
> >>
> >> For a good final goal I think we can be clever and make an effective
> >> roadmap. I don't know yet what it is, though -- certainly to be discussed.
> >> The requirements doc -- perhaps by continuing work on it -- hopefully helps.
> >>
> >>> Philipp
> >>
> >> ~Toni
> >>
> >>> Am 22.10.2013 23:03, schrieb toni at playsign.net:
> >>>> Just a brief note: we had some interesting preliminary discussion
> >>>> triggered by how the data schema that Ari O. presented for the POI
> >>>> system seemed at least partly similar to what the Real-Virtual
> >>>> interaction work had resulted in too -- and in fact about how the
> >>>> proposed POI schema was basically a version of the entity-component
> >>>> model which we've already been using for scenes in realXtend (it is
> >>>> inspired by / modeled after it, Ari told). So it can be much related to
> >>>> the Scene API work in the Synchronization GE too.
> >>>> As the action point we
> >>>> agreed that Ari will organize a specific work session on that.
> >>>> I was now thinking that it perhaps at least partly leads back to the
> >>>> question: how do we define (and implement) component types. I.e. what
> >>>> was mentioned in that entity-system post a few weeks back (with links
> >>>> to reX IComponent etc.). I mean: if functionality such as POIs and
> >>>> realworld interaction make sense as somehow resulting in custom data
> >>>> component types, does it mean that a key part of the framework is a way
> >>>> for those systems to declare their types .. so that it integrates nicely
> >>>> for the whole we want? I'm not sure, too tired to think it through now,
> >>>> but anyhow just wanted to mention that this was one topic that came up.
> >>>> I think Web Components is again something to check - as in XML terms reX
> >>>> Components are xml(3d) elements .. just ones that are usually in a group
> >>>> (according to the reX entity <-> xml3d group mapping). And Web
> >>>> Components are about defining & implementing new elements (as Erno
> >>>> pointed out in a different discussion about xml-html authoring in the
> >>>> session).
> >>>> BTW Thanks Kristian for the great comments in that entity system
> >>>> thread - was really good to learn about the alternative attribute access
> >>>> syntax and the validation in XML3D(.js).
> >>>> ~Toni
> >>>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the
> >>>> rationale of these Oulu meets -- idea is ofc not to exclude you from the
> >>>> talks but it just makes sense for us to meet live too as we are in the
> >>>> same city after all etc -- naturally with the DFKI team you also talk
> >>>> there locally. Perhaps it is a good idea that we make notes so that we
> >>>> can post e.g. here then (I'm not volunteering though! :) ).
> >>>> Also, the now agreed
> >>>> bi-weekly setup on Tuesdays luckily works so that we can then summarize
> >>>> fresh in the global Wed meetings and continue the talks etc.
> >>>> *From:* Erno Kuusela
> >>>> *Sent:* Tuesday, October 22, 2013 9:57 AM
> >>>> *To:* Fiware-miwi
> >>>>
> >>>> Kari from CIE offered to host it this time, so see you there at 13:00.
> >>>>
> >>>> Erno
> >>>> _______________________________________________
> >>>> Fiware-miwi mailing list
> >>>> Fiware-miwi at lists.fi-ware.eu
> >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi
> >>>
> >>> --
> >>> -------------------------------------------------------------------------
> >>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
> >>> Trippstadter Strasse 122, D-67663 Kaiserslautern
> >>>
> >>> Geschäftsführung:
> >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
> >>> Dr. Walter Olthoff
> >>> Vorsitzender des Aufsichtsrats:
> >>> Prof. Dr. h.c. Hans A. Aukes
> >>>
> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3
> >>> ---------------------------------------------------------------------------
> >>
> >> _______________________________________________
> >> Fiware-miwi mailing list
> >> Fiware-miwi at lists.fi-ware.eu
> >> https://lists.fi-ware.eu/listinfo/fiware-miwi
>
> --
> -------------------------------------------------------------------------
> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
> Trippstadter Strasse 122, D-67663 Kaiserslautern
>
> Geschäftsführung:
> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
> Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats:
> Prof. Dr. h.c. Hans A.
Aukes > > > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > > > --------------------------------------------------------------------------- > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni at playsign.net Sat Oct 26 19:04:18 2013 From: toni at playsign.net (Toni Alatalo) Date: Sat, 26 Oct 2013 20:04:18 +0300 Subject: [Fiware-miwi] declarative layer & scalability work (Re: XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00))) In-Reply-To: References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> Message-ID: Again just a brief note about one point: On 26 Oct 2013, at 17:46, Jonne Nauha wrote: > I don't know enough about what is intended to be sent via the KIARA networking. But if it's the node types etc. that XML3D seems to support today. We are going to drop most of the nice components from the Tundra EC model to the floor and basically only communicate the ones that can be found from XML3D to the web clients. Is this the intent? Or was the I think I've said this to you many times already :) No we are not dropping any components and we are definitely keeping the support to have whatever components in the application data. The xml3d set is just a set of components, like Tundra core also has a set, and some of those are the same (like mesh). Having those components around does not mean that use of other components would be harmed in any way. XML by itself supports this easily, as in any XML you can just add whatever elements or components you may want.
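[Editor's note: the point above -- the xml3d set being just one component set among others, with arbitrary application-defined components living alongside the core ones -- can be sketched as a generic type registry. All names below are illustrative only, not the actual Tundra, FiVES, or XML3D APIs.]

```javascript
// Minimal entity-component sketch: any system can declare new component
// types without touching the core set. Illustrative names only.
const componentTypes = new Map();

function declareComponentType(name, defaults) {
  // Core types (e.g. "mesh") and app types (e.g. "poi") register the same way.
  componentTypes.set(name, defaults);
}

function createComponent(name, attrs) {
  const defaults = componentTypes.get(name);
  if (!defaults) throw new Error("unknown component type: " + name);
  return Object.assign({ type: name }, defaults, attrs);
}

// A core component, analogous to what both Tundra and XML3D ship with:
declareComponentType("mesh", { href: null });
// An application-defined component, e.g. from a POI system:
declareComponentType("poi", { label: "", lat: 0, lon: 0 });

// An entity simply holds a mix of both -- neither harms the other:
const entity = {
  components: [
    createComponent("mesh", { href: "city.xml#block1" }),
    createComponent("poi", { label: "Town hall", lat: 65.01, lon: 25.47 }),
  ],
};
```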
The net protocol must remain generic to work for those in the future as well (like it does in Tundra now) -- and afaik the outcome from the miwi-kiara session (where I didn't participate) was that for miwi the tundra protocol is kept (at least for now). That does not stop us from further considering possible benefits of Kiara, but in any case I think everyone is for keeping the support for arbitrary components. > Jonne Nauha ~Toni > On Fri, Oct 25, 2013 at 6:02 AM, Toni Alatalo wrote: > On 24 Oct 2013, at 16:28, Philipp Slusallek wrote: > > Continuing the, so far apparently successful, technique of clarifying a single point at a time a note about scene declarations and description of the scalability work: > > > I am not too happy that we are investing the FI-WARE resources into circumventing the declarative layer completely. > > We are not doing that. realXtend has had a declarative layer for the past 4-5 years(*) and we totally depend on it -- that's not going away. The situation is totally the opposite: it is assumed to always be there. There's absolutely no work done anywhere to circumvent it somehow. [insert favourite 7th way of saying this]. > > In my view the case with the current work on scene rendering scalability is this: We already have all the basics implemented and tested in some form - realXtend web client implementations (e.g. 'WebTundra' in form of Chiru-Webclient on github, and other works) have complete entity systems integrated with networking and rendering. XML3d.js is the reference implementation for XML3d parsing, rendering etc. But one of the identified key parts missing was managing larger complex scenes. And that is a pretty hard requirement from the Intelligent City use case which has been the candidate for the main integrated larger use case. IIRC scalability was also among the original requirements and proposals. Also Kristian stated here that he finds it a good area to work on now so the basic motivation for the work seemed clear.
> > So we tackled this straight on by first testing the behaviour of loading & unloading scene parts and then proceeded to implement a simple but effective scene manager. We're documenting that separately so I won't go into details here. So far it works even surprisingly well which has been a huge relief during the past couple of days -- not only for us on the platform dev side but also for the modelling and application companies working with the city model here (I demoed the first version in a live meet on Wed), we'll post demo links soon (within days) as soon as can confirm a bit more that the results seem conclusive. Now in general for the whole 3D UI and nearby GEs I think we have most of the parts (and the rest are coming) and 'just' need to integrate.. > > The point here is that in that work the focus is on the memory management of the rendering and the efficiency & non-blockingness of loading geometry data and textures for display. In my understanding that is orthogonal to scene declaration formats -- or networking for that matter. In any case we get geometry and texture data to load and manage. An analogue (just to illustrate, not a real case): When someone works on improving the CPU process scheduler in Linux kernel he/she does not touch file system code. That does not mean that the improved scheduler proposes to remove file system support from Linux. Also, it is not investing resources into circumventing (your term) file systems -- even if in the scheduler dev it is practical to just create competing processes from code, and not load applications to execute from the file system. > > Again I hope this clarifies what's going on. Please note that I'm /not/ addressing renderer alternatives and selection here *at all* --
only the relationship of the declarative layer and of the scalability work that you seemed to bring up in the sentence quoted in the beginning. > > > I suggest that we start to work on the shared communication layer using the KIARA API (part of a FI-WARE GE) and add the code to make the relevant components work in XML3D. Can someone put together a plan for this? We are happy to help where necessary -- but from my point of view we need to do this as part of the Open Call. > > I'm sorry I don't get how this is related. Then again I was not in the KIARA session that one Wed morning -- Erno and Lasse were so I can talk with them to get an understanding. Now I can't find a thought-path from renderer to networking here yet.. :o > > Also, I do need to (re-)read all these posts -- so far have had mostly little timeslots to quickly clarify some basic miscommunications (like the poi data vs. poi data derived visualisations topic in the other thread, and the case with the declarative layer & scalability work in this one). I'm mostly not working at all this Friday though (am with kids) and also in general only work on fi-ware 50% of my work time (though I don't mind when both the share and the total times are more, this is business development!) so it can take a while from my part. > > > Philipp > > Cheers, > ~Toni > > (*) "realXtend has had a declarative layer for the past 4-5 years(*)": in the very beginning in 2007-2008 we didn't have it in the same way, due to how the first prototype was based on Opensimulator and Second Life (tm) viewer. The only way to create a scene was, in technical terms, to send object creation commands over UDP to the server. Or write code to run in the server. That is how Second Life was originally built: people use the GUI client to build the worlds one object at a time and there was no support for importing nor exporting objects or scenes (people did write scripts to generate objects etc.).
For us that was a terrible nightmare (ask anyone from Ludocraft who worked on the Beneath the Waves demo scene for reX 0.3 -- I was fortunate enough to not be involved in that period). As a remedy to that insanity I first implemented importing from Ogre's very simple .scene ('dotScene') format in the new Naali viewer (which later became the Tundra codebase). Then we could finally bring full scenes from Blender and Max. We were still using Opensimulator as the server then and after my client-side prototype Mikko Pallari implemented dotScene import to the server side and we got an ok production solution. Nowadays Opensimulator has OAR files and likewise the community totally depends on those. On reX side, Jukka Jylänki & Lasse wrote Tundra and we switched to it and the TXML & TBIN support there which still seem ok as machine authored formats. We do support Ogre dotScene import in current Tundra too. And even Linden (the Second Life company) has gotten to support COLLADA import, I think mostly meant for single objects but IIRC works for scenes too. > > Now XML3d seems like a good next step to get a human friendly (and perhaps just a more sane way to use xml in general) declarative format. It actually addresses an issue I created in our tracker 2 years ago, "xmlifying txml" https://github.com/realXtend/naali/issues/215 .. the draft in the gist linked from there is a bit more like xml3d than txml. I'm very happy that you've already made xml3d so we didn't have to try to invent it :) > > > Am 23.10.2013 09:51, schrieb Toni Alatalo: > >> On Oct 23, 2013, at 8:00 AM, Philipp Slusallek wrote: > >> > >>> BTW, what is the status with the Rendering discussion (Three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided. > >> > >> I'm not aware of any overlapping work so far -- then again I'm not fully aware what all is up with xml3d.js.
> >> > >> For the rendering for 3D UI, my conclusion from the discussion on this list was that it is best to use three.js now for the case of big complex fully featured scenes, i.e. typical realXtend worlds (e.g. Ludocraft's Circus demo or the Chesapeake Bay from the LVM project, a creative commons realXtend example scene). And in miwi in particular for the city model / app now. Basically because that's where we already got the basics working in the August-September work (and had earlier in realXtend web client codebases). That is why we implemented the scalability system on top of that too now -- scalability was the only thing missing. > >> > >> Until yesterday I thought the question was still open regarding XFlow integration. Latest information I got was that there was no hardware acceleration support for XFlow in XML3d.js either so it seemed worth a check whether it's better to implement it for xml3d.js or for three. > >> > >> Yesterday, however, we learned from Cyberlightning that work on XFlow hardware acceleration was already on-going in xml3d.js (I think mostly by DFKI so far?). And that it was decided that work within fi-ware now is limited to that (and we also understood that the functionality will be quite limited by April, or?). > >> > >> This obviously affects the overall situation. > >> > >> At least in an intermediate stage this means that we have two renderers for different purposes: three.js for some apps, without XFlow support, and xml3d.js for others, with XFlow but other things missing. This is certain because that is the case today and probably in the coming weeks at least. > >> > >> For a good final goal I think we can be clever and make an effective roadmap. I don't know yet what it is, though -- certainly to be discussed. The requirements doc -- perhaps by continuing work on it -- hopefully helps. 
> >> > >>> Philipp > >> > >> ~Toni > >> > >>> > >>> Am 22.10.2013 23:03, schrieb toni at playsign.net: > >>>> Just a brief note: we had some interesting preliminary discussion > >>>> triggered by how the data schema that Ari O. presented for the POI > >>>> system seemed at least partly similar to what the Real-Virtual > >>>> interaction work had resulted in too -- and in fact about how the > >>>> proposed POI schema was basically a version of the entity-component > >>>> model which we've already been using for scenes in realXtend (it is > >>>> inspired by / modeled after it, Ari told). So it can be much related to > >>>> the Scene API work in the Synchronization GE too. As the action point we > >>>> agreed that Ari will organize a specific work session on that. > >>>> I was now thinking that it perhaps at least partly leads back to the > >>>> question: how do we define (and implement) component types. I.e. what > >>>> was mentioned in that entity-system post a few weeks back (with links > >>>> to reX IComponent etc.). I mean: if functionality such as POIs and > >>>> realworld interaction make sense as somehow resulting in custom data > >>>> component types, does it mean that a key part of the framework is a way > >>>> for those systems to declare their types .. so that it integrates nicely > >>>> for the whole we want? I'm not sure, too tired to think it through now, > >>>> but anyhow just wanted to mention that this was one topic that came up. > >>>> I think Web Components is again something to check - as in XML terms reX > >>>> Components are xml(3d) elements .. just ones that are usually in a group > >>>> (according to the reX entity <-> xml3d group mapping). And Web > >>>> Components are about defining & implementing new elements (as Erno > >>>> pointed out in a different discussion about xml-html authoring in the > >>>> session).
> >>>> BTW Thanks Kristian for the great comments in that entity system > >>>> thread - was really good to learn about the alternative attribute access > >>>> syntax and the validation in XML3D(.js). > >>>> ~Toni > >>>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the > >>>> rationale of these Oulu meets -- idea is ofc not to exclude you from the > >>>> talks but just makes sense for us to meet live too as we are in the same > >>>> city after all etc -- naturally with the DFKI team you also talk there > >>>> locally. Perhaps is a good idea that we make notes so that can post e.g. > >>>> here then (I'm not volunteering though! :)) . Also, the now agreed > >>>> bi-weekly setup on Tuesdays luckily works so that we can then summarize > >>>> fresh in the global Wed meetings and continue the talks etc. > >>>> *From:* Erno Kuusela > >>>> *Sent:* Tuesday, October 22, 2013 9:57 AM > >>>> *To:* Fiware-miwi > >>>> > >>>> Kari from CIE offered to host it this time, so see you there at 13:00. > >>>> > >>>> Erno > >>>> _______________________________________________ > >>>> Fiware-miwi mailing list > >>>> Fiware-miwi at lists.fi-ware.eu > >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi > >>>> > >>>> > >>>> _______________________________________________ > >>>> Fiware-miwi mailing list > >>>> Fiware-miwi at lists.fi-ware.eu > >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi > >>>> > >>> > >>> > >>> -- > >>> > >>> ------------------------------------------------------------------------- > >>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > >>> Trippstadter Strasse 122, D-67663 Kaiserslautern > >>> > >>> Geschäftsführung: > >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > >>> Dr. Walter Olthoff > >>> Vorsitzender des Aufsichtsrats: > >>> Prof. Dr. h.c. Hans A.
Aukes > >>> > >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > >>> --------------------------------------------------------------------------- > >>> > >> > >> _______________________________________________ > >> Fiware-miwi mailing list > >> Fiware-miwi at lists.fi-ware.eu > >> https://lists.fi-ware.eu/listinfo/fiware-miwi > >> > > > > > > -- > > > > ------------------------------------------------------------------------- > > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > > Trippstadter Strasse 122, D-67663 Kaiserslautern > > > > Geschäftsführung: > > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > > Dr. Walter Olthoff > > Vorsitzender des Aufsichtsrats: > > Prof. Dr. h.c. Hans A. Aukes > > > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > > --------------------------------------------------------------------------- > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Philipp.Slusallek at dfki.de Sun Oct 27 10:42:08 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sun, 27 Oct 2013 10:42:08 +0100 Subject: [Fiware-miwi] WP13 Architecture Page Message-ID: <526CDFF0.2000004@dfki.de> Hi all, I have finally added the private architecture page for the MiWi chapter (WP13). Sorry for the very late completion of this. http://forge.fi-ware.eu/plugins/mediawiki/wiki/fi-ware-private/index.php/Advanced_Middleware_and_Web_UI_Architecture @All: Please look over it and provide feedback for things that could be improved. Note that this page also covers the Middleware GE.
I have also added many new entries to the Glossary, so it looks at least a bit more impressive -- but I am sure that many entries that should be there are still missing. @All: Can everyone, please, spend 15 minutes and go through their GE and add any terminology that normal people not from our field might not be familiar with. I have also edited the overview/completion table to reflect those changes. Except for the cover page things should be done now. @Christof: I believe we can submit this to FI-WARE now (but it still needs a cover page). Furthermore, I did briefly read over many of the GE descriptions and I believe we can still do a better job. I will try to allocate some time to go through them in more detail and send suggestions. But in the meantime it would be worth refining them some more. @All: Look into improving the GE documentation. Of course, make sure that the changes get approved before submitting. Best, Philipp -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed...
Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Sun Oct 27 16:12:58 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sun, 27 Oct 2013 16:12:58 +0100 Subject: [Fiware-miwi] declarative layer & scalability work (Re: XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00))) In-Reply-To: References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> Message-ID: <526D2D7A.8020509@dfki.de> Hi Toni, I understand that the work you have done on the model scalability side will be ported to XML3D within WP13 and the current three.js version is just an intermediate step. Is this understanding correct? If so, I have no problem with that. Maybe I missed this part, but maybe we need to communicate things like this a bit more. See my suggestion a few emails ago of setting up an Agile backlog where we record the tasks to be done and their schedule. But also let me make sure there is no misunderstanding regarding "declarative" either: While realXtend may have a declarative approach, declarative in the Web context refers to HTML and its in-memory representation in the DOM. Let me try to better explain my point about using the KIARA API: KIARA offers a nice (at least I would argue :-) and even declarative API. As we talked about using a common component model and a common translation layer between ECA and XML3D, it would make sense to hide the realXtend protocol adaptation behind this KIARA layer.
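[Editor's note: a rough sketch of what such a common client-side layer could look like -- two interchangeable transports behind one component-delivery API, so the application always receives components in one agreed shape. All names are hypothetical; the real KIARA and kNet APIs are not shown here.]

```javascript
// Illustrative only: a common connection interface with two
// protocol adapters behind it. The app callback always receives
// components in one agreed format, regardless of the wire protocol.
class SceneConnection {
  constructor(transport, onComponent) {
    this.transport = transport;       // e.g. a kNet or KIARA adapter
    this.onComponent = onComponent;   // app callback: ({entityId, type, attrs})
    transport.subscribe(msg => this.onComponent(transport.decode(msg)));
  }
}

// Two hypothetical transports whose decoders normalize to the same shape:
const knetTransport = {
  handlers: [],
  subscribe(h) { this.handlers.push(h); },
  decode(msg) { return { entityId: msg.id, type: msg.tc, attrs: msg.a }; },
  push(msg) { this.handlers.forEach(h => h(msg)); },  // simulates the wire
};
const kiaraTransport = {
  handlers: [],
  subscribe(h) { this.handlers.push(h); },
  decode(msg) { return { entityId: msg.entity, type: msg.component, attrs: msg.attributes }; },
  push(msg) { this.handlers.forEach(h => h(msg)); },  // simulates the wire
};
```

The design point is that only `decode` differs per protocol; everything above the `SceneConnection` line is shared between the realXtend and KIARA paths.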
This is one of the two versions of arriving at a common model by doing the unification on the client side with a common API and data model but having two different protocols behind it: one using your knet approach for realXtend and one using KIARA all the way across the network for our server. This would make it so much easier to port from the direct driving of three.js to XML3D as discussed above. Essentially, in both cases the JS application gets delivered the components that are coming in over the wire. We just hide the knet code behind the KIARA API. Since all the network code is already there and we can probably easily agree on a common format for the delivered components in JS (should be pretty obvious), this might not be a big effort anyway. What do you think? Thanks, Philipp Am 25.10.2013 05:02, schrieb Toni Alatalo: > On 24 Oct 2013, at 16:28, Philipp Slusallek wrote: > > Continuing the, so far apparently successful, technique of clarifying a single point at a time a note about scene declarations and description of the scalability work: > >> I am not too happy that we are investing the FI-WARE resources into circumventing the declarative layer completely. > > We are not doing that. realXtend has had a declarative layer for the past 4-5 years(*) and we totally depend on it -- that's not going away. The situation is totally the opposite: it is assumed to always be there. There's absolutely no work done anywhere to circumvent it somehow. [insert favourite 7th way of saying this]. > > In my view the case with the current work on scene rendering scalability is this: We already have all the basics implemented and tested in some form - realXtend web client implementations (e.g. 'WebTundra' in form of Chiru-Webclient on github, and other works) have complete entity systems integrated with networking and rendering. XML3d.js is the reference implementation for XML3d parsing, rendering etc. But one of the identified key parts missing was managing larger complex scenes.
And that is a pretty hard requirement from the Intelligent City use case which has been the candidate for the main integrated larger use case. IIRC scalability was also among the original requirements and proposals. Also Kristian stated here that he finds it a good area to work on now so the basic motivation for the work seemed clear. > > So we tackled this straight on by first testing the behaviour of loading & unloading scene parts and then proceeded to implement a simple but effective scene manager. We're documenting that separately so I won't go into details here. So far it works even surprisingly well which has been a huge relief during the past couple of days -- not only for us on the platform dev side but also for the modelling and application companies working with the city model here (I demoed the first version in a live meet on Wed), we'll post demo links soon (within days) as soon as can confirm a bit more that the results seem conclusive. Now in general for the whole 3D UI and nearby GEs I think we have most of the parts (and the rest are coming) and 'just' need to integrate.. > > The point here is that in that work the focus is on the memory management of the rendering and the efficiency & non-blockingness of loading geometry data and textures for display. In my understanding that is orthogonal to scene declaration formats -- or networking for that matter. In any case we get geometry and texture data to load and manage. An analogue (just to illustrate, not a real case): When someone works on improving the CPU process scheduler in Linux kernel he/she does not touch file system code. That does not mean that the improved scheduler proposes to remove file system support from Linux. Also, it is not investing resources into circumventing (your term) file systems -- even if in the scheduler dev it is practical to just create competing processes from code, and not load applications to execute from the file system.
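[Editor's note: the loading/unloading approach described in the quoted message can be sketched roughly as a distance-based scene part manager. This is purely illustrative; it is not the actual WebTundra scene manager, whose details are documented separately.]

```javascript
// Hypothetical sketch of a distance-based scene part manager, in the
// spirit of the load/unload testing described above. Not the actual
// WebTundra implementation.
class ScenePartManager {
  constructor(loadRadius, load, unload) {
    this.loadRadius = loadRadius; // distance within which parts stay loaded
    this.load = load;             // callback: fetch geometry/textures
    this.unload = unload;         // callback: free renderer memory
    this.loaded = new Set();
  }

  // Call on camera movement with the camera position and all scene parts.
  update(camera, parts) {
    for (const part of parts) {
      const dx = part.x - camera.x, dz = part.z - camera.z;
      const near = Math.hypot(dx, dz) <= this.loadRadius;
      if (near && !this.loaded.has(part.id)) {
        this.loaded.add(part.id);
        this.load(part);
      } else if (!near && this.loaded.has(part.id)) {
        this.loaded.delete(part.id);
        this.unload(part);
      }
    }
  }
}
```

In a real client the `load`/`unload` callbacks would create and dispose renderer resources; here they are kept abstract so the manager itself stays renderer-agnostic, matching the point that this work is orthogonal to declaration formats and networking.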
It is absolutely clear for the scheduler developer how filesystems are a part of the big picture but they are just not relevant to the task at hand. > > Again I hope this clarifies what's going on. Please note that I'm /not/ addressing renderer alternatives and selection here *at all* -- only the relationship of the declarative layer and of the scalability work that you seemed to bring up in the sentence quoted in the beginning. > >> I suggest that we start to work on the shared communication layer using the KIARA API (part of a FI-WARE GE) and add the code to make the relevant components work in XML3D. Can someone put together a plan for this? We are happy to help where necessary -- but from my point of view we need to do this as part of the Open Call. > > I'm sorry I don't get how this is related. Then again I was not in the KIARA session that one Wed morning -- Erno and Lasse were so I can talk with them to get an understanding. Now I can't find a thought-path from renderer to networking here yet.. :o > > Also, I do need to (re-)read all these posts -- so far have had mostly little timeslots to quickly clarify some basic miscommunications (like the poi data vs. poi data derived visualisations topic in the other thread, and the case with the declarative layer & scalability work in this one). I'm mostly not working at all this Friday though (am with kids) and also in general only work on fi-ware 50% of my work time (though I don't mind when both the share and the total times are more, this is business development!) so it can take a while from my part. > >> Philipp > > Cheers, > ~Toni > > (*) "realXtend has had a declarative layer for the past 4-5 years(*)": in the very beginning in 2007-2008 we didn't have it in the same way, due to how the first prototype was based on Opensimulator and Second Life (tm) viewer. The only way to create a scene was, in technical terms, to send object creation commands over UDP to the server. Or write code to run in the server.
That is how Second Life was originally built: people use the GUI client to build the worlds one object at a time and there was no support for importing nor exporting objects or scenes (people did write scripts to generate objects etc.). For us that was a terrible nightmare (ask anyone from Ludocraft who worked on the Beneath the Waves demo scene for reX 0.3 -- I was fortunate enough to not be involved in that period). As a remedy to that insanity I first implemented importing from Ogre's very simple .scene ('dotScene') format in the new Naali viewer (which later became the Tundra codebase). Then we could finally bring full scenes from Blender and Max. We were still using Opensimulator as the server then and after my client-side prototype Mikko Pallari implemented dotScene import to the server side and we got an ok production solution. Nowadays Opensimulator has OAR files and likewise the community totally depends on those. On reX side, Jukka Jylänki & Lasse wrote Tundra and we switched to it and the TXML & TBIN support there which still seem ok as machine authored formats. We do support Ogre dotScene import in current Tundra too. And even Linden (the Second Life company) has gotten to support COLLADA import, I think mostly meant for single objects but IIRC works for scenes too. > > Now XML3d seems like a good next step to get a human friendly (and perhaps just a more sane way to use xml in general) declarative format. It actually addresses an issue I created in our tracker 2 years ago, "xmlifying txml" https://github.com/realXtend/naali/issues/215 .. the draft in the gist linked from there is a bit more like xml3d than txml. I'm very happy that you've already made xml3d so we didn't have to try to invent it :) > >> Am 23.10.2013 09:51, schrieb Toni Alatalo: >>> On Oct 23, 2013, at 8:00 AM, Philipp Slusallek wrote: >>> >>>> BTW, what is the status with the Rendering discussion (Three.js vs. xml3d.js)?
I still have the feeling that we are doing parallel work here that should probably be avoided. >>> >>> I'm not aware of any overlapping work so far -- then again I'm not fully aware what all is up with xml3d.js. >>> >>> For the rendering for 3D UI, my conclusion from the discussion on this list was that it is best to use three.js now for the case of big complex fully featured scenes, i.e. typical realXtend worlds (e.g. Ludocraft's Circus demo or the Chesapeake Bay from the LVM project, a creative commons realXtend example scene). And in miwi in particular for the city model / app now. Basically because that's where we already got the basics working in the August-September work (and had earlier in realXtend web client codebases). That is why we implemented the scalability system on top of that too now -- scalability was the only thing missing. >>> >>> Until yesterday I thought the question was still open regarding XFlow integration. Latest information I got was that there was no hardware acceleration support for XFlow in XML3d.js either so it seemed worth a check whether it's better to implement it for xml3d.js or for three. >>> >>> Yesterday, however, we learned from Cyberlightning that work on XFlow hardware acceleration was already on-going in xml3d.js (I think mostly by DFKI so far?). And that it was decided that work within fi-ware now is limited to that (and we also understood that the functionality will be quite limited by April, or?). >>> >>> This obviously affects the overall situation. >>> >>> At least in an intermediate stage this means that we have two renderers for different purposes: three.js for some apps, without XFlow support, and xml3d.js for others, with XFlow but other things missing. This is certain because that is the case today and probably in the coming weeks at least. >>> >>> For a good final goal I think we can be clever and make an effective roadmap. I don't know yet what it is, though -- certainly to be discussed. 
The requirements doc -- perhaps by continuing work on it -- hopefully helps. >>> >>>> Philipp >>> >>> ~Toni >>> >>>> >>>> Am 22.10.2013 23:03, schrieb toni at playsign.net: >>>>> Just a brief note: we had some interesting preliminary discussion >>>>> triggered by how the data schema that Ari O. presented for the POI >>>>> system seemed at least partly similar to what the Real-Virtual >>>>> interaction work had resulted in too -- and in fact about how the >>>>> proposed POI schema was basically a version of the entity-component >>>>> model which we've already been using for scenes in realXtend (it is >>>>> inspired by / modeled after it, Ari told). So it can be much related to >>>>> the Scene API work in the Synchronization GE too. As the action point we >>>>> agreed that Ari will organize a specific work session on that. >>>>> I was now thinking that it perhaps at least partly leads back to the >>>>> question: how do we define (and implement) component types. I.e. what >>>>> was mentioned in that entity-system post a few weeks back (with links >>>>> to reX IComponent etc.). I mean: if functionality such as POIs and >>>>> realworld interaction make sense as somehow resulting in custom data >>>>> component types, does it mean that a key part of the framework is a way >>>>> for those systems to declare their types .. so that it integrates nicely >>>>> for the whole we want? I'm not sure, too tired to think it through now, >>>>> but anyhow just wanted to mention that this was one topic that came up. >>>>> I think Web Components is again something to check - as in XML terms reX >>>>> Components are xml(3d) elements .. just ones that are usually in a group >>>>> (according to the reX entity <-> xml3d group mapping). And Web >>>>> Components are about defining & implementing new elements (as Erno >>>>> pointed out in a different discussion about xml-html authoring in the >>>>> session).
>>>>> BTW Thanks Kristian for the great comments in that entity system >>>>> thread - was really good to learn about the alternative attribute access >>>>> syntax and the validation in XML3D(.js). >>>>> ~Toni >>>>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >>>>> rationale of these Oulu meets -- idea is ofc not to exclude you from the >>>>> talks but just makes sense for us to meet live too as we are in the same >>>>> city afterall etc -- naturally with the DFKI team you also talk there >>>>> locally. Perhaps is a good idea that we make notes so that can post e.g. >>>>> here then (I'm not volunteering though! :)) . Also, the now agreed >>>>> bi-weekly setup on Tuesdays luckily works so that we can then summarize >>>>> fresh in the global Wed meetings and continue the talks etc. >>>>> *From:* Erno Kuusela >>>>> *Sent:* Tuesday, October 22, 2013 9:57 AM >>>>> *To:* Fiware-miwi >>>>> >>>>> Kari from CIE offered to host it this time, so see you there at 13:00. >>>>> >>>>> Erno >>>>> _______________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>> >>>>> >>>>> _______________________________________________ >>>>> Fiware-miwi mailing list >>>>> Fiware-miwi at lists.fi-ware.eu >>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>> >>>> >>>> >>>> -- >>>> >>>> ------------------------------------------------------------------------- >>>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>> >>>> Geschäftsführung: >>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>> Dr. Walter Olthoff >>>> Vorsitzender des Aufsichtsrats: >>>> Prof. Dr. h.c. Hans A.
Aukes >>>> >>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>> --------------------------------------------------------------------------- >>>> >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>> >> >> >> -- >> >> ------------------------------------------------------------------------- >> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >> Trippstadter Strasse 122, D-67663 Kaiserslautern >> >> Geschäftsführung: >> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> Dr. Walter Olthoff >> Vorsitzender des Aufsichtsrats: >> Prof. Dr. h.c. Hans A. Aukes >> >> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >> --------------------------------------------------------------------------- >> > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed...
Name: slusallek.vcf Type: text/x-vcard Size: 456 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Sun Oct 27 18:14:35 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sun, 27 Oct 2013 18:14:35 +0100 Subject: [Fiware-miwi] declarative layer & scalability work (Re: XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00))) In-Reply-To: References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> Message-ID: <526D49FB.9060901@dfki.de> Hi Toni, Jonne, Let me support this statement from Toni. There is no need to change your realXtend server or even its network code in any way. Since we need to create the mapping code to XML3D on the browser side anyway, my point was to use the KIARA API on the Web/JS side as the interface to the networking code and share the mapping code that comes after it (seen from the network). The KIARA API would then simply deliver the (jointly agreed upon) JS representation of the ECA components (or use them for sending). For connecting to realXtend, KIARA would internally reuse the JS networking code that you already have. Also see my email from earlier today. If you later like KIARA (it should have a few advantages over knet) then there is the option of switching to it even on the realXtend (server and/or client) side. But that is your decision! Our job is to deliver a version of KIARA that will make you want to switch :-). Best, Philipp On 26.10.2013 19:04, Toni Alatalo wrote: > Again just a brief note about one point: > > On 26 Oct 2013, at 17:46, Jonne Nauha > wrote: > >> I don't know enough about what is intended to be sent via the KIARA >> networking. But if it's the node types etc. that XML3D seems to support >> today.
We are going to drop most of the nice components from the >> Tundra EC model to the floor and basically only communicate the ones >> that can be found from XML3D to the web clients. Is this the intent? >> Or was the > > I think I've said this to you many times already :) > > No we are not dropping any components and we are definitely keeping the > support to have whatever components in the application data. > > The xml3d set is just a set of components, like Tundra core also has a > set, and some of those are the same (like mesh). > > Having those components around does not mean that use of other > components would be harmed in any way. > > Xml by itself supports this easily as in any xml you can just add > whatever component elements you may want. > > The net protocol must remain generic to work for those in the future as > well (like it does in Tundra now) -- and afaik the outcome from the > miwi-kiara session (where I didn't participate) was that for miwi tundra > protocol is kept (at least for now). Does not stop us from further > considering possible benefits of Kiara but in any case I think everyone > is for keeping the support for arbitrary components > >> Jonne Nauha > > ~Toni > >> On Fri, Oct 25, 2013 at 6:02 AM, Toni Alatalo > > wrote: >> >> On 24 Oct 2013, at 16:28, Philipp Slusallek >> > wrote: >> >> Continuing the, so far apparently successful, technique of >> clarifying a single point at a time a note about scene >> declarations and description of the scalability work: >> >> > I am not too happy that we are investing the FI-WARE resources >> into circumventing the declarative layer completely. >> >> We are not doing that. realXtend has had a declarative layer for >> the past 4-5 years(*) and we totally depend on it -- that's not >> going away. The situation is totally the opposite: it is assumed >> to always be there. There's absolutely no work done anywhere to >> circumvent it somehow. [insert favourite 7th way of saying this].
>> >> In my view the case with the current work on scene rendering >> scalability is this: We already have all the basics implemented >> and tested in some form - realXtend web client implementations >> (e.g. "WebTundra" in form of Chiru-Webclient on github, and other >> works) have complete entity systems integrated with networking and >> rendering. XML3d.js is the reference implementation for XML3d >> parsing, rendering etc. But one of the identified key parts >> missing was managing larger complex scenes. And that is a pretty >> hard requirement from the Intelligent City use case which has been >> the candidate for the main integrated larger use case. IIRC >> scalability was also among the original requirements and >> proposals. Also Kristian stated here that he finds it a good area >> to work on now so the basic motivation for the work seemed clear. >> >> So we tackled this straight on by first testing the behaviour of >> loading & unloading scene parts and then proceeded to implement a >> simple but effective scene manager. We're documenting that >> separately so I won't go into details here. So far it works even >> surprisingly well which has been a huge relief during the past >> couple of days -- not only for us on the platform dev side but also >> for the modelling and application companies working with the city >> model here (I demoed the first version in a live meet on Wed), >> we'll post demo links soon (within days) as soon as can confirm a >> bit more that the results seem conclusive. Now in general for the >> whole 3D UI and nearby GEs I think we have most of the parts (and >> the rest are coming) and "just" need to integrate.. >> >> The point here is that in that work the focus is on the memory >> management of the rendering and the efficiency & non-blockingness >> of loading geometry data and textures for display. In my >> understanding that is orthogonal to scene declaration formats -- or >> networking for that matter.
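The grid-based load/unload scene manager described above is documented separately, so here is only an illustrative reconstruction of the idea: partition the world into square cells and keep loaded exactly the cells near the camera. All names (GridSceneManager, the load/unload callbacks) are hypothetical, not taken from the actual implementation discussed in this thread.

```python
import math

class GridSceneManager:
    """Sketch of a grid scene manager: only cells within `load_radius`
    (in cells) of the camera stay loaded; everything else is unloaded
    so geometry/texture memory stays bounded."""

    def __init__(self, cell_size, load_radius, load_cell, unload_cell):
        self.cell_size = cell_size      # world units per grid cell
        self.load_radius = load_radius  # neighbourhood radius in cells
        self.load_cell = load_cell      # callback: fetch geometry & textures
        self.unload_cell = unload_cell  # callback: free the cell's memory
        self.loaded = set()             # currently loaded (cx, cz) cells

    def cell_of(self, x, z):
        """Map a world position to its grid cell."""
        return (math.floor(x / self.cell_size), math.floor(z / self.cell_size))

    def update(self, cam_x, cam_z):
        """Call on camera movement: load cells that came into range,
        unload cells that dropped out of range."""
        cx, cz = self.cell_of(cam_x, cam_z)
        r = self.load_radius
        wanted = {(cx + dx, cz + dz)
                  for dx in range(-r, r + 1)
                  for dz in range(-r, r + 1)}
        for cell in wanted - self.loaded:
            self.load_cell(cell)
        for cell in self.loaded - wanted:
            self.unload_cell(cell)
        self.loaded = wanted

# Example: 10x10-unit cells, keep a 3x3 neighbourhood around the camera.
loads, unloads = [], []
manager = GridSceneManager(cell_size=10, load_radius=1,
                           load_cell=loads.append, unload_cell=unloads.append)
manager.update(5, 5)   # camera in cell (0, 0): its 3x3 neighbourhood loads
```

The set difference makes the update idempotent: calling it again without camera movement issues no load or unload calls, which matches the "non-blockingness" goal of only touching assets when the visible region actually changes.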
In any case we get geometry and >> texture data to load and manage. An analogue (just to illustrate, >> not a real case): When someone works on improving the CPU process >> scheduler in Linux kernel he/she does not touch file system code. >> That does not mean that the improved scheduler proposes to remove >> file system support from Linux. Also, it is not investing >> resources into circumventing (your term) file systems -- even if in >> the scheduler dev it is practical to just create competing >> processes from code, and not load applications to execute from the >> file system. It is absolutely clear for the scheduler developer >> how filesystems are a part of the big picture but they are just >> not relevant to the task at hand. >> >> Again I hope this clarifies what's going on. Please note that I'm >> /not/ addressing renderer alternatives and selection here *at all* >> -- only the relationship of the declarative layer and of the >> scalability work that you seemed to bring up in the sentence >> quoted in the beginning. >> >> > I suggest that we start to work on the shared communication >> layer using the KIARA API (part of a FI-WARE GE) and add the code >> to make the relevant components work in XML3D. Can someone put >> together a plan for this. We are happy to help where necessary -- >> but from my point of view we need to do this as part of the Open Call. >> >> I'm sorry I don't get how this is related. Then again I was not in >> the KIARA session that one Wed morning -- Erno and Lasse were so I >> can talk with them to get an understanding. Now I can't find a >> thought-path from renderer to networking here yet.. :o >> >> Also, I do need to (re-)read all these posts -- so far have had >> mostly little timeslots to quickly clarify some basic >> miscommunications (like the poi data vs. poi data derived >> visualisations topic in the other thread, and the case with the >> declarative layer & scalability work in this one).
I'm mostly not >> working at all this Friday though (am with kids) and also in >> general only work on fi-ware 50% of my work time (though I don't >> mind when both the share and the total times are more, this is >> business development!) so it can take a while from my part. >> >> > Philipp >> >> Cheers, >> ~Toni >> >> (*) "realXtend has had a declarative layer for the past 4-5 >> years(*)": in the very beginning in 2007-2008 we didn't have it in >> the same way, due to how the first prototype was based on >> Opensimulator and Second Life (tm) viewer. Only way to create a >> scene was, in technical terms, to send object creation commands >> over UDP to the server. Or write code to run in the server. That >> is how Second Life was originally built: people use the GUI client >> to build the worlds one object at a time and there was no support >> for importing nor exporting objects or scenes (people did write >> scripts to generate objects etc.). For us that was a terrible >> nightmare (ask anyone from Ludocraft who worked on the Beneath the >> Waves demo scene for reX 0.3 -- I was fortunate enough to not be >> involved in that period). As a remedy to that insanity I first >> implemented importing from Ogre's very simple .scene ("dotScene") >> format in the new Naali viewer (which later became the Tundra >> codebase). Then we could finally bring full scenes from Blender >> and Max. We were still using Opensimulator as the server then and >> after my client-side prototype Mikko Pallari implemented dotScene >> import to the server side and we got an ok production solution. >> Nowadays Opensimulator has OAR files and likewise the community >> totally depends on those. On reX side, Jukka Jylänki & Lasse wrote >> Tundra and we switched to it and the TXML & TBIN support there >> which still seem ok as machine authored formats. We do support >> Ogre dotScene import in current Tundra too.
And even Linden (the >> Second Life company) has gotten to support COLLADA import, I think >> mostly meant for single objects but IIRC works for scenes too. >> >> Now XML3d seems like a good next step to get a human friendly (and >> perhaps just a more sane way to use xml in general) declarative >> format. It actually addresses an issue I created in our tracker 2 >> years ago, "xmlifying txml" >> https://github.com/realXtend/naali/issues/215 .. the draft in the >> gist linked from there is a bit more like xml3d than txml. I'm >> very happy that you've already made xml3d so we didn't have to try >> to invent it :) >> >> > On 23.10.2013 09:51, Toni Alatalo wrote: >> >> On Oct 23, 2013, at 8:00 AM, Philipp Slusallek >> > wrote: >> >> >> >>> BTW, what is the status with the Rendering discussion >> (Three.js vs. xml3d.js)? I still have the feeling that we are >> doing parallel work here that should probably be avoided. >> >> >> >> I'm not aware of any overlapping work so far -- then again I'm >> not fully aware what all is up with xml3d.js. >> >> >> >> For the rendering for 3D UI, my conclusion from the discussion >> on this list was that it is best to use three.js now for the case >> of big complex fully featured scenes, i.e. typical realXtend >> worlds (e.g. Ludocraft's Circus demo or the Chesapeake Bay from >> the LVM project, a creative commons realXtend example scene). And >> in miwi in particular for the city model / app now. Basically >> because that's where we already got the basics working in the >> August-September work (and had earlier in realXtend web client >> codebases). That is why we implemented the scalability system on >> top of that too now -- scalability was the only thing missing. >> >> >> >> Until yesterday I thought the question was still open regarding >> XFlow integration.
Latest information I got was that there was no >> hardware acceleration support for XFlow in XML3d.js either so it >> seemed worth a check whether it's better to implement it for >> xml3d.js or for three. >> >> >> >> Yesterday, however, we learned from Cyberlightning that work on >> XFlow hardware acceleration was already on-going in xml3d.js (I >> think mostly by DFKI so far?). And that it was decided that work >> within fi-ware now is limited to that (and we also understood that >> the functionality will be quite limited by April, or?). >> >> >> >> This obviously affects the overall situation. >> >> >> >> At least in an intermediate stage this means that we have two >> renderers for different purposes: three.js for some apps, without >> XFlow support, and xml3d.js for others, with XFlow but other >> things missing. This is certain because that is the case today and >> probably in the coming weeks at least. >> >> >> >> For a good final goal I think we can be clever and make an >> effective roadmap. I don't know yet what it is, though -- >> certainly to be discussed. The requirements doc -- perhaps by >> continuing work on it -- hopefully helps. >> >> >> >>> Philipp >> >> >> >> ~Toni >> >> >> >>> >> >>> On 22.10.2013 23:03, toni at playsign.net wrote >> : >> >>>> Just a brief note: we had some interesting preliminary discussion >> >>>> triggered by how the data schema that Ari O. presented for >> the POI >> >>>> system seemed at least partly similar to what the Real-Virtual >> >>>> interaction work had resulted in too -- and in fact about how the >> >>>> proposed POI schema was basically a version of the >> entity-component >> >>>> model which we've already been using for scenes in realXtend >> (it is >> >>>> inspired by / modeled after it, Ari told). So it can be much >> related to >> >>>> the Scene API work in the Synchronization GE too. As the >> action point we >> >>>> agreed that Ari will organize a specific work session on that.
>> >>>> I was now thinking that it perhaps at least partly leads back >> to the >> >>>> question: how do we define (and implement) component types. >> I.e. what >> >>>> was mentioned in that entity-system post a few weeks back >> (with links >> >>>> to reX IComponent etc.). I mean: if functionality such as >> POIs and >> >>>> realworld interaction make sense as somehow resulting in >> custom data >> >>>> component types, does it mean that a key part of the >> framework is a way >> >>>> for those systems to declare their types .. so that it >> integrates nicely >> >>>> for the whole we want? I'm not sure, too tired to think it >> through now, >> >>>> but anyhow just wanted to mention that this was one topic >> that came up. >> >>>> I think Web Components is again something to check - as in >> XML terms reX >> >>>> Components are xml(3d) elements .. just ones that are usually >> in a group >> >>>> (according to the reX entity <-> xml3d group mapping). And Web >> >>>> Components are about defining & implementing new elements (as >> Erno >> >>>> pointed out in a different discussion about xml-html >> authoring in the >> >>>> session). >> >>>> BTW Thanks Kristian for the great comments in that entity system >> >>>> thread - was really good to learn about the alternative >> attribute access >> >>>> syntax and the validation in XML3D(.js). >> >>>> ~Toni >> >>>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >> >>>> rationale of these Oulu meets -- idea is ofc not to exclude >> you from the >> >>>> talks but just makes sense for us to meet live too as we are >> in the same >> >>>> city afterall etc -- naturally with the DFKI team you also >> talk there >> >>>> locally. Perhaps is a good idea that we make notes so that >> can post e.g. >> >>>> here then (I'm not volunteering though! :)) .
Also, the now >> agreed >> >>>> bi-weekly setup on Tuesdays luckily works so that we can >> then summarize >> >>>> fresh in the global Wed meetings and continue the talks etc. >> >>>> *From:* Erno Kuusela >> >>>> *Sent:* Tuesday, October 22, 2013 9:57 AM >> >>>> *To:* Fiware-miwi >> >>>> >> >>>> Kari from CIE offered to host it this time, so see you there >> at 13:00. >> >>>> >> >>>> Erno >> >>>> _______________________________________________ >> >>>> Fiware-miwi mailing list >> >>>> Fiware-miwi at lists.fi-ware.eu >> >> >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >>>> >> >>>> >> >>>> _______________________________________________ >> >>>> Fiware-miwi mailing list >> >>>> Fiware-miwi at lists.fi-ware.eu >> >> >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >>>> >> >>> >> >>> >> >>> -- >> >>> >> >>> >> ------------------------------------------------------------------------- >> >>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >> >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >> >>> >> >>> Geschäftsführung: >> >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> >>> Dr. Walter Olthoff >> >>> Vorsitzender des Aufsichtsrats: >> >>> Prof. Dr. h.c. Hans A. Aukes >> >>> >> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >> >>> >> --------------------------------------------------------------------------- >> >>> >> >> >> >> _______________________________________________ >> >> Fiware-miwi mailing list >> >> Fiware-miwi at lists.fi-ware.eu >> >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> >> > >> > >> > -- >> > >> > >> ------------------------------------------------------------------------- >> > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >> > Trippstadter Strasse 122, D-67663 Kaiserslautern >> > >> > Geschäftsführung: >> > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >> > Dr.
Walter Olthoff >> > Vorsitzender des Aufsichtsrats: >> > Prof. Dr. h.c. Hans A. Aukes >> > >> > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >> > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >> > >> --------------------------------------------------------------------------- >> > >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 456 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Sun Oct 27 18:20:40 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Sun, 27 Oct 2013 18:20:40 +0100 Subject: [Fiware-miwi] Fwd: Undelivered Mail Returned to Sender In-Reply-To: <20131027171452.0148C9EB6B_26D4A0CB@sea-mail.dfki.de> References: <20131027171452.0148C9EB6B_26D4A0CB@sea-mail.dfki.de> Message-ID: <526D4B68.7050208@dfki.de> Hi Toni, I hope you got the previous email through the MiWi mailing list already, as there seems to be a problem with your email address right now. See below.
Best, Philipp -------- Original Message -------- Subject: Undelivered Mail Returned to Sender Date: Sun, 27 Oct 2013 17:14:52 +0000 (GMT) From: MAILER-DAEMON at sea-mail.dfki.de (Mail Delivery System) To: Philipp.Slusallek at dfki.de This is the mail system at host sea-mail.dfki.de. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to postmaster. If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system : host mail.joker.com[194.245.148.6] said: 554 spam rejected, score 98 (in reply to end of DATA command) -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- An embedded message was scrubbed... From: Philipp Slusallek Subject: Re: [Fiware-miwi] declarative layer & scalability work (Re: XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00))) Date: Sun, 27 Oct 2013 18:14:35 +0100 Size: 22638 URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: slusallek.vcf Type: text/x-vcard Size: 456 bytes Desc: not available URL: From toni at playsign.net Sun Oct 27 22:39:54 2013 From: toni at playsign.net (toni at playsign.net) Date: Sun, 27 Oct 2013 21:39:54 +0000 Subject: [Fiware-miwi] txml to xml3d converter test Message-ID: <20131027220450.BF0A918003E@dionysos.netplaza.fi> Hi again, a note about a little beginning of a software tool for change: I started an experimental txml (<)-> xml3d converter: https://github.com/playsign/txml-xml3d An example conversion is shown on the webpage (readme) there. This was inspired by the earlier discussion about what the mapping could look like, and by the repeated attempts to illustrate how the reX custom components we are used to can be preserved in a html & xml3d style human friendly xml. NOTE: this does not have anything to do with renderer selection or anything like that but is an exploration of formalizing the data model mapping. To make talks about it more concrete. Inspired by the POI & real-world mapping & custom components discussion and obviously the door example which is used as the minimal test data here now. Is very much work-in-progress, only little partial functionality and some of the code is crappy etc. I just wrote it in the evening for fun now. Did help a lot in understanding better what it would take to actually complete it if we end up finding this useful (we'd need a mapping of attribute names, "Mesh ref" <-> "src" etc). Might be fun to run some reX scenes like the bundled Tundra examples through this to see what they look like.. Those live in https://github.com/realXtend/naali/tree/tundra2/bin/scenes It might be interesting vice versa too, from xml3d to txml .. am not sure if that'd really teach anything though. Is ofc a possible way to add xml3d support to Tundra (if we need it on the server). For real usage might be better to write a xml3d loader there (either in js or c++).
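To make the mapping idea concrete, here is a minimal sketch of such an entity-to-group conversion with Python's ElementTree. This is not the code in the txml-xml3d repo; the attribute mapping (only "Mesh ref" <-> "src", as mentioned above) and the way custom components are preserved as extra child elements are illustrative guesses:

```python
import xml.etree.ElementTree as ET

# Hypothetical attribute-name mapping; the real one was still being
# worked out in the converter repo.
ATTR_MAP = {("EC_Mesh", "Mesh ref"): "src"}

def txml_entity_to_xml3d_group(entity):
    """Convert one TXML <entity> element to an xml3d-style <group>,
    following the reX entity <-> xml3d group mapping. Sketch only."""
    group = ET.Element("group")
    if entity.get("id"):
        group.set("id", entity.get("id"))
    for comp in entity.findall("component"):
        ctype = comp.get("type")
        if ctype == "EC_Mesh":
            mesh = ET.SubElement(group, "mesh")
            for attr in comp.findall("attribute"):
                key = (ctype, attr.get("name"))
                if key in ATTR_MAP:
                    mesh.set(ATTR_MAP[key], attr.get("value", ""))
        else:
            # Preserve arbitrary custom components as child elements so
            # no application data is dropped (spaces in attribute names
            # are not legal XML, so replace them).
            custom = ET.SubElement(group, ctype)
            for attr in comp.findall("attribute"):
                custom.set(attr.get("name", "").replace(" ", "_"),
                           attr.get("value", ""))
    return group

# The door example used as minimal test data in the repo (reconstructed).
door = ET.fromstring(
    '<entity id="1">'
    '<component type="EC_Mesh">'
    '<attribute name="Mesh ref" value="door.mesh"/>'
    '</component>'
    '</entity>')
print(ET.tostring(txml_entity_to_xml3d_group(door), encoding="unicode"))
```

Going the other direction (xml3d to txml) would just invert ATTR_MAP, which is one reason to keep the mapping in a data table rather than in code.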
This Python converter is mainly just exploration to understand the mapping better and hopefully make it more concrete for others as well - I already had code and experience from converting TXML with Python's ElementTree so this was easy to do. Can be completed to production quality if needed (for example to use as a conversion web service .. http post txml/xml3d to it and get the other back). Cheers, ~Toni P.S. had a brief look at the arch desc, seems interesting but didn't get much yet, too tired now -- will read properly tomorrow. -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni at playsign.net Mon Oct 28 08:29:35 2013 From: toni at playsign.net (Toni Alatalo) Date: Mon, 28 Oct 2013 09:29:35 +0200 Subject: [Fiware-miwi] declarative layer & scalability work (Re: XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00))) In-Reply-To: <526D2D7A.8020509@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> <526D2D7A.8020509@dfki.de> Message-ID: On 27 Oct 2013, at 17:12, Philipp Slusallek wrote: > I understand that the work you have done on the model scalability side will be ported to XML3D within WP13 and the current three.js version is just an intermediate step. Is this understanding correct? If so, I have no problem with that. I'm sorry but I can't parse this. XML3D is an xml schema, right? How would you expect a scene manager to be ported to it? If I rephrase your statement: "within WP13 a system will be created where both a) scalable rendering and b) declarative authoring with xml3d work" -- then the answer is yes. One way to reach that goal is to have xml3d work with three.js -- in that case that grid scene manager does not need porting as it'd already work.
> Maybe I missed this part, but maybe we need to communicate things like this a bit more. See my suggestion a few emails ago of setting up an Agile backlog where we record the tasks to be done and their schedule. Agreed, and I think that's a good idea. We used agile practices in the earlier Tekes-funded reX project. One of those that I've been thinking of for MIWI is sprint emails - short descs about what tasks a party is starting. I don't know how the different companies do the scheduling etc. now -- back then we worked so that the sprint master (first Jani Pirkola and then Antti Ilomäki) coordinated the whole project and all the tasks for all the companies were in one backlog, sprints synchronised in the same schedule, and we made releases from the common codebase as part of that 3-week cycle. Now the GEs are more separate (not all part of one codebase) so I suppose it is fine for internal cycles to differ but that wouldn't prevent e.g. those emails from being used, they'd just come with different intervals from the different parties. As an old example here's Mikko Pallari's beginning-of-sprint email from 2010 .. working on adding arbitrary EC storage support to Opensimulator in ModRex :) https://groups.google.com/forum/#!topic/realxtend-dev/7_SvYPzi3VI For the backlog we just had a google spreadsheet. In Playsign's game project we've now used the Pivotal tracker web service (Admino used that at some point too, I don't know what they use nowadays). > But also let me make sure there is no misunderstanding regarding "declarative" either: While realXtend may have a declarative approach, declarative in the Web context refers to HTML and its in-memory representation in the DOM. Yes as described in the plans and in this attempt at a more detailed arch diagram about the client internals the plan to have the DOM integration has been firm all along, https://rawgithub.com/realXtend/doc/master/dom/rexdom-dom_as_ui.svg --
that's why we made the DOM tests with mutation observers etc. to begin with in July. > Let me try to better explain my point about using the KIARA API: KIARA offers a nice (at least I would argue :-), and even declarative API. As we talked about using a common component model and a common translation layer between ECA and XML3D, it would make sense to hide the realXtend protocol adaptation behind this KIARA layer. This is one of the two versions of arriving at a common model by doing the unification on the client side with a common API and data model but having two different protocols behind it: one using your knet approach for realXtend and one using KIARA all the way across the network for our server. This would make it so much easier to port from the direct driving of three.js to XML3D as discussed above. Ok this seems helpful. Another angle to this would be how I've been thinking and we've been using the entity system. In realXtend, applications work by using the entity system API to create entities and components, set their attributes etc. That is abstracted away from networking -- the ECA model is protocol independent. When you say myent.mycomponent.someattr = 1, the fact that in Tundra then kNet is used for the synchronisation is completely hidden, there isn't even any way for application code to touch it (from javascript - in c++ you could probably hack something). Perhaps in this way Kiara and the ECA model are similar? And about porting or interoperability between xml3d.js and three.js: again the entity system is also renderer independent -- you just create an entity with e.g. mesh and placeable components, add it to your scene, and it shows automatically. When authoring a scene and creating an application even with native Tundra when doing the basics Ogre is abstracted out the same way as the networking is.
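The protocol independence described above (application code writes myent.mycomponent.someattr = 1 and never touches the transport) can be sketched as a tiny observer-style entity system. This is not Tundra or WebTundra code; all class and callback names are hypothetical, and the pluggable on_change callback merely stands in for whatever sync layer (kNet, KIARA, ...) sits behind the API:

```python
class Component:
    """A component whose attribute writes notify a hidden sync layer."""

    def __init__(self, entity, name):
        # Bypass our own __setattr__ while wiring internal fields.
        object.__setattr__(self, "_entity", entity)
        object.__setattr__(self, "_name", name)

    def __setattr__(self, attr, value):
        object.__setattr__(self, attr, value)
        # The transport behind this callback is invisible to app code.
        self._entity.scene.on_change(self._entity.id, self._name, attr, value)


class Entity:
    def __init__(self, scene, eid):
        self.__dict__.update(scene=scene, id=eid)

    def add_component(self, name):
        comp = Component(self, name)
        self.__dict__[name] = comp   # exposes e.g. ent.placeable
        return comp


class Scene:
    def __init__(self, on_change):
        self.on_change = on_change   # pluggable sync layer (kNet, KIARA, ...)
        self.entities = {}

    def create_entity(self, eid):
        ent = Entity(self, eid)
        self.entities[eid] = ent
        return ent


# Usage: the application only sees entities, components and attributes.
changes = []
scene = Scene(lambda *change: changes.append(change))
ent = scene.create_entity(1)
ent.add_component("placeable")
ent.placeable.x = 10    # triggers the hidden sync callback
```

Swapping the lambda for a kNet- or KIARA-backed callback would change the wire protocol without touching any application code, which is the point being made in the paragraph above.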
In our (Playsign) use case / test driven dev style, we earlier created the multiplayer networked pong game and later the Oulu city car driving with the idea that their codebases will become *much* simpler and cleaner, but the functionality should become better too, while the platform gets there so that we can gradually port those applications to an entity system which integrates visuals, networking and physics. That is explained in the last paragraph of the intro to chapter 4. Use cases and tests in the requirements doc draft, https://docs.google.com/document/d/1P03BgfEG1Ly2dI2Cs9ODVDmBhH4A438Ynlaxc4vXn1o/edit#heading=h.qflk54puc3rr It is possible to look at the GRID.Manager made for the city model in the same way. It only needs very little from the renderer: ability to load and unload objects and assets. In Tundra that would work via the entity system. This is visible in the old first texture unload -> memory freeing test that I made for Tundra some 2-3 years ago .. that only uses the material reference attribute of the Mesh component to load textures and then the Asset API to unload them. It does not access Ogre API directly. https://github.com/realXtend/naali/blob/tundra2/bin/scenes/TextureMemoryManagement/texmanager.js So if we go via a similar entity (& asset) system in the web client it becomes easy to port, perhaps similar to what you say above. It also becomes possible to support multiple renderers if that's ever needed. To return to DOM integration, one way would be to use the DOM as the API in which case that grid scene manager would add/remove/configure DOM nodes. But I understood earlier that the idea is to use the DOM as the UI only, not as the way for the rendering internals to work with itself (for performance reasons). > Essentially, in both cases the JS application gets delivered the components that are coming in over the wire. We just hide the knet code behind the KIARA API.
Since all the network code is already there and we can probably easily agree on a common format for the delivered components in JS (should be pretty obvious), this might not be a big effort anyway. > What do you think? I think we are very close, if not already in the exact same thoughts. I don't know KIARA well enough to understand this suggestion completely. If KIARA is similar to the ECA model and provides a nice way, for example, for the custom component type definitions we need etc., then it can suit well. In any case I think we've all assumed that the kNet code in the web client will be hidden from apps the same way as it is in native Tundra. OTOH networking is not on our (Playsign) agenda anyway (we just added the WebRTC part to the Pong use case to get a networked test quickly too, as it seemed to be far away otherwise back in August). But certainly how it works is an essential part of the client architecture and important for our businesses etc., so I'll try to understand more, and will talk with Erno and perhaps Lasse about that KIARA session they were in (mentioned this earlier too but didn't get a chance yet). Thanks! And again, hopefully this clarified something.. > Philipp ~Toni > On 25.10.2013 05:02, Toni Alatalo wrote: >> On 24 Oct 2013, at 16:28, Philipp Slusallek wrote: >> >> Continuing the (so far apparently successful) technique of clarifying a single point at a time, a note about scene declarations and a description of the scalability work: >> >>> I am not too happy that we are investing the FI-WARE resources into circumventing the declarative layer completely. >> >> We are not doing that. realXtend has had a declarative layer for the past 4-5 years(*) and we totally depend on it - that's not going away. The situation is totally the opposite: it is assumed to always be there. There's absolutely no work done anywhere to circumvent it somehow. [insert favourite 7th way of saying this].
>> >> In my view the case with the current work on scene rendering scalability is this: we already have all the basics implemented and tested in some form - realXtend web client implementations (e.g. "WebTundra" in the form of Chiru-Webclient on github, and other works) have complete entity systems integrated with networking and rendering. XML3d.js is the reference implementation for XML3d parsing, rendering etc. But one of the identified key parts missing was managing larger complex scenes. And that is a pretty hard requirement from the Intelligent City use case, which has been the candidate for the main integrated larger use case. IIRC scalability was also among the original requirements and proposals. Also Kristian stated here that he finds it a good area to work on now, so the basic motivation for the work seemed clear. >> >> So we tackled this straight on by first testing the behaviour of loading & unloading scene parts and then proceeded to implement a simple but effective scene manager. We're documenting that separately so I won't go into details here. So far it works even surprisingly well, which has been a huge relief during the past couple of days - not only for us on the platform dev side but also for the modelling and application companies working with the city model here (I demoed the first version in a live meet on Wed). We'll post demo links soon (within days), as soon as we can confirm a bit more that the results seem conclusive. Now in general for the whole 3D UI and nearby GEs I think we have most of the parts (and the rest are coming) and "just" need to integrate.. >> >> The point here is that in that work the focus is on the memory management of the rendering and the efficiency & non-blockingness of loading geometry data and textures for display. In my understanding that is orthogonal to scene declaration formats - or networking for that matter. In any case we get geometry and texture data to load and manage.
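The load/unload scheme of a simple paging scene manager like the one described above can be sketched as a grid of cells kept loaded around the camera. This is an illustrative sketch only, not the actual GRID.Manager code; the `loader` callbacks and cell-key scheme are assumptions.

```javascript
// Rough sketch of a grid-based paging scene manager (illustrative; not
// the real implementation). The world is divided into fixed-size cells;
// cells within `radius` cells of the camera are loaded, all others are
// unloaded so their geometry/texture memory can be freed.
class GridManager {
  constructor(cellSize, radius, loader) {
    this.cellSize = cellSize;
    this.radius = radius;        // load radius, in cells
    this.loader = loader;        // { load(key), unload(key) } callbacks
    this.loaded = new Set();
  }
  cellOf(x, z) {
    return [Math.floor(x / this.cellSize), Math.floor(z / this.cellSize)];
  }
  update(camX, camZ) {
    const [cx, cz] = this.cellOf(camX, camZ);
    const wanted = new Set();
    for (let i = cx - this.radius; i <= cx + this.radius; i++)
      for (let j = cz - this.radius; j <= cz + this.radius; j++)
        wanted.add(i + "," + j);
    // Load newly visible cells...
    for (const key of wanted)
      if (!this.loaded.has(key)) { this.loader.load(key); this.loaded.add(key); }
    // ...and unload cells that fell out of range.
    for (const key of [...this.loaded])
      if (!wanted.has(key)) { this.loader.unload(key); this.loaded.delete(key); }
  }
}

// Usage: with radius 1, a 3x3 block of cells stays loaded around the camera.
const events = [];
const grid = new GridManager(100, 1, {
  load: key => events.push("load " + key),
  unload: key => events.push("unload " + key),
});
grid.update(50, 50);    // camera in cell (0,0): loads the 3x3 block
grid.update(150, 50);   // camera moves one cell: 3 loads, 3 unloads
```

Calling `update` from the camera movement handler is enough to keep memory bounded regardless of total scene size.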
An analogue (just to illustrate, not a real case): when someone works on improving the CPU process scheduler in the Linux kernel, he/she does not touch file system code. That does not mean that the improved scheduler proposes to remove file system support from Linux. Also, it is not investing resources into circumventing (your term) file systems - even if in the scheduler dev it is practical to just create competing processes from code, and not load applications to execute from the file system. It is absolutely clear to the scheduler developer how filesystems are a part of the big picture; they are just not relevant to the task at hand. >> >> Again I hope this clarifies what's going on. Please note that I'm /not/ addressing renderer alternatives and selection here *at all* - only the relationship of the declarative layer and the scalability work that you seemed to bring up in the sentence quoted in the beginning. >> >>> I suggest that we start to work on the shared communication layer using the KIARA API (part of a FI-WARE GE) and add the code to make the relevant components work in XML3D. Can someone put together a plan for this? We are happy to help where necessary -- but from my point of view we need to do this as part of the Open Call. >> >> I'm sorry, I don't get how this is related. Then again I was not in the KIARA session that one Wed morning - Erno and Lasse were, so I can talk with them to get an understanding. Right now I can't find a thought-path from renderer to networking here yet.. :o >> >> Also, I do need to (re-)read all these posts - so far I have had mostly little timeslots to quickly clarify some basic miscommunications (like the POI data vs. POI-data-derived visualisations topic in the other thread, and the case with the declarative layer & scalability work in this one).
I'm mostly not working at all this Friday though (am with kids) and also in general only work on fi-ware 50% of my work time (though I don't mind when both the share and the total times are more, this is business development!) so it can take a while from my part. >> >>> Philipp >> >> Cheers, >> ~Toni >> >> (*) "realXtend has had a declarative layer for the past 4-5 years": in the very beginning in 2007-2008 we didn't have it in the same way, due to how the first prototype was based on Opensimulator and the Second Life (tm) viewer. The only way to create a scene was, in technical terms, to send object creation commands over UDP to the server. Or write code to run in the server. That is how Second Life was originally built: people use the GUI client to build the worlds one object at a time and there was no support for importing nor exporting objects or scenes (people did write scripts to generate objects etc.). For us that was a terrible nightmare (ask anyone from Ludocraft who worked on the Beneath the Waves demo scene for reX 0.3 - I was fortunate enough not to be involved in that period). As a remedy to that insanity I first implemented importing from Ogre's very simple .scene ("dotScene") format in the new Naali viewer (which later became the Tundra codebase). Then we could finally bring full scenes from Blender and Max. We were still using Opensimulator as the server then, and after my client-side prototype Mikko Pallari implemented dotScene import on the server side and we got an ok production solution. Nowadays Opensimulator has OAR files and likewise the community totally depends on those. On the reX side, Jukka Jylänki & Lasse wrote Tundra and we switched to it and the TXML & TBIN support there, which still seem ok as machine-authored formats. We do support Ogre dotScene import in current Tundra too. And even Linden (the Second Life company) has gotten to support COLLADA import, I think mostly meant for single objects but IIRC it works for scenes too.
>> >> Now XML3d seems like a good next step to get a human-friendly (and perhaps just a more sane way to use XML in general) declarative format. It actually addresses an issue I created in our tracker 2 years ago, "xmlifying txml" https://github.com/realXtend/naali/issues/215 .. the draft in the gist linked from there is a bit more like xml3d than txml. I'm very happy that you've already made xml3d so we didn't have to try to invent it :) >> >>> On 23.10.2013 09:51, Toni Alatalo wrote: >>>> On Oct 23, 2013, at 8:00 AM, Philipp Slusallek wrote: >>>> >>>>> BTW, what is the status with the Rendering discussion (Three.js vs. xml3d.js)? I still have the feeling that we are doing parallel work here that should probably be avoided. >>>> >>>> I'm not aware of any overlapping work so far -- then again I'm not fully aware of all that is up with xml3d.js. >>>> >>>> For the rendering for 3D UI, my conclusion from the discussion on this list was that it is best to use three.js now for the case of big complex fully featured scenes, i.e. typical realXtend worlds (e.g. Ludocraft's Circus demo or the Chesapeake Bay from the LVM project, a creative commons realXtend example scene). And in miwi in particular for the city model / app now. Basically because that's where we already got the basics working in the August-September work (and had earlier in realXtend web client codebases). That is why we implemented the scalability system on top of that too now -- scalability was the only thing missing. >>>> >>>> Until yesterday I thought the question was still open regarding XFlow integration. The latest information I got was that there was no hardware acceleration support for XFlow in XML3d.js either, so it seemed worth a check whether it's better to implement it for xml3d.js or for three. >>>> >>>> Yesterday, however, we learned from Cyberlightning that work on XFlow hardware acceleration was already on-going in xml3d.js (I think mostly by DFKI so far?).
And that it was decided that work within fi-ware now is limited to that (and we also understood that the functionality will be quite limited by April, or?). >>>> >>>> This obviously affects the overall situation. >>>> >>>> At least in an intermediate stage this means that we have two renderers for different purposes: three.js for some apps, without XFlow support, and xml3d.js for others, with XFlow but other things missing. This is certain because that is the case today and probably in the coming weeks at least. >>>> >>>> For a good final goal I think we can be clever and make an effective roadmap. I don't know yet what it is, though -- certainly to be discussed. The requirements doc -- perhaps by continuing work on it -- hopefully helps. >>>>> Philipp >>>> >>>> ~Toni >>>> >>>>> >>>>> On 22.10.2013 23:03, toni at playsign.net wrote: >>>>>> Just a brief note: we had some interesting preliminary discussion >>>>>> triggered by how the data schema that Ari O. presented for the POI >>>>>> system seemed at least partly similar to what the Real-Virtual >>>>>> interaction work had resulted in too -- and in fact about how the >>>>>> proposed POI schema was basically a version of the entity-component >>>>>> model which we've already been using for scenes in realXtend (it is >>>>>> inspired by / modeled after it, Ari told). So it can be much related to >>>>>> the Scene API work in the Synchronization GE too. As the action point we >>>>>> agreed that Ari will organize a specific work session on that. >>>>>> I was now thinking that it perhaps at least partly leads back to the >>>>>> question: how do we define (and implement) component types. I.e. what >>>>>> was mentioned in that entity-system post a few weeks back (with links >>>>>> to reX IComponent etc.).
I mean: if functionality such as POIs and >>>>>> realworld interaction make sense as somehow resulting in custom data >>>>>> component types, does it mean that a key part of the framework is a way >>>>>> for those systems to declare their types .. so that it integrates nicely >>>>>> for the whole we want? I'm not sure, too tired to think it through now, >>>>>> but anyhow just wanted to mention that this was one topic that came up. >>>>>> I think Web Components is again something to check - as in XML terms reX >>>>>> Components are xml(3d) elements .. just ones that are usually in a group >>>>>> (according to the reX entity <-> xml3d group mapping). And Web >>>>>> Components are about defining & implementing new elements (as Erno >>>>>> pointed out in a different discussion about xml-html authoring in the >>>>>> session). >>>>>> BTW Thanks Kristian for the great comments in that entity system >>>>>> thread - was really good to learn about the alternative attribute access >>>>>> syntax and the validation in XML3D(.js). >>>>>> ~Toni >>>>>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >>>>>> rationale of these Oulu meets -- idea is ofc not to exclude you from the >>>>>> talks but just makes sense for us to meet live too as we are in the same >>>>>> city after all etc -- naturally with the DFKI team you also talk there >>>>>> locally. Perhaps it is a good idea that we make notes so that we can post e.g. >>>>>> here then (I'm not volunteering though!). Also, the now agreed >>>>>> bi-weekly setup on Tuesdays luckily works so that we can then summarize >>>>>> fresh in the global Wed meetings and continue the talks etc. >>>>>> *From:* Erno Kuusela >>>>>> *Sent:* Tuesday, October 22, 2013 9:57 AM >>>>>> *To:* Fiware-miwi >>>>>> >>>>>> Kari from CIE offered to host it this time, so see you there at 13:00.
>>>>>> >>>>>> Erno >>>>>> _______________________________________________ >>>>>> Fiware-miwi mailing list >>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Fiware-miwi mailing list >>>>>> Fiware-miwi at lists.fi-ware.eu >>>>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>>>> >>>>> >>>>> >>>>> -- >>>>> >>>>> ------------------------------------------------------------------------- >>>>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>>>> >>>>> Geschäftsführung: >>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>>>> Dr. Walter Olthoff >>>>> Vorsitzender des Aufsichtsrats: >>>>> Prof. Dr. h.c. Hans A. Aukes >>>>> >>>>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>>>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>>>> --------------------------------------------------------------------------- >>>>> >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>> >>> >>> -- >>> >>> ------------------------------------------------------------------------- >>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>> >>> Geschäftsführung: >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>> Dr. Walter Olthoff >>> Vorsitzender des Aufsichtsrats: >>> Prof. Dr. h.c. Hans A.
Aukes >>> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>> --------------------------------------------------------------------------- >>> >> > > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Geschäftsführung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > From jarkko at cyberlightning.com Mon Oct 28 09:44:30 2013 From: jarkko at cyberlightning.com (Jarkko Vatjus-Anttila) Date: Mon, 28 Oct 2013 10:44:30 +0200 Subject: [Fiware-miwi] Meeting on asset exporter pipeline In-Reply-To: <692737413.30843.1382442022344.JavaMail.open-xchange@ox6.dfki.de> References: <525F973F.4010100@dfki.de> <14dd09f4c1101fd46c683eeb5aad3415.squirrel@urho.ludocraft.com> <692737413.30843.1382442022344.JavaMail.open-xchange@ox6.dfki.de> Message-ID: Hello all, Did we plan to set up a Google Hangout for this meeting? On Tue, Oct 22, 2013 at 2:40 PM, Torsten Spieldenner < torsten.spieldenner at dfki.de> wrote: > ** > Hello, > > let's fix October 28th, 10 am for the meeting then. > > Torsten > > > Toni Alatalo wrote on 22 October 2013 at 08:26: > > Both these are ok for our team too.
> > > After ok results with a visibility / memory management scheme (a kind of > a simple paging scene manager, a grid manager suitable for city blocks, > adopted from a Unity plugin) which allows a theoretically indefinite scene > (more info about that separately a bit later, there's a demo online > already), > > > > we are currently testing how things work with the supposedly efficient > CTM format from http://openctm.sourceforge.net . Seems to work well so > far and the three.js loader for it uses workers so on-demand loading of > scene parts is pretty fluent. We haven't yet gotten it to load textures > though from our test city block so the current good result is from geometry > only -- we are working on the materials part right now. > > > > ~Toni > > > > On Oct 17, 2013, at 3:35 PM, Lasse Öörni > wrote: > > > > >> This is ok for me and my team as well. I think it would be wise to > peek > > >> into OpenCollada, for example, to understand it more. We can do that > while > > >> preparing to discuss about this topic. > > >> > > >> - j > > >> > > >> > > >> On Thu, Oct 17, 2013 at 3:06 PM, Felix Klein > > >> wrote: > > >> > > >>> 28th and 29th October should be fine by me. > > > > > > Hi, > > > those are fine for me as well. > > > > > > -- > > > Lasse Öörni > > > Game Programmer > > > LudoCraft Ltd. > > > > > > > > > _______________________________________________ > > > Fiware-miwi mailing list > > > Fiware-miwi at lists.fi-ware.eu > > > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > _______________________________________________ > > Fiware-miwi mailing list > > Fiware-miwi at lists.fi-ware.eu > > https://lists.fi-ware.eu/listinfo/fiware-miwi > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -- Jarkko Vatjus-Anttila VP, Technology Cyberlightning Ltd. mobile. +358 405245142 email. jarkko at cyberlightning.com Enrich Your Presentations!
New CyberSlide 2.0 released on February 27th. Get your free evaluation version and buy it now! www.cybersli.de www.cyberlightning.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From torsten.spieldenner at dfki.de Mon Oct 28 09:46:29 2013 From: torsten.spieldenner at dfki.de (Torsten Spieldenner) Date: Mon, 28 Oct 2013 09:46:29 +0100 Subject: [Fiware-miwi] Meeting on Asset Exporter Pipeline Message-ID: <526E2465.1000103@dfki.de> Hello, I've set up a Google Hangout for the meeting we've scheduled for today 10 am. Here is the link: https://plus.google.com/hangouts/_/76cpi3oj4oer73k7rjhutbbd6o See you there! Torsten From erno at playsign.net Mon Oct 28 11:10:50 2013 From: erno at playsign.net (erno at playsign.net) Date: Mon, 28 Oct 2013 12:10:50 +0200 Subject: [Fiware-miwi] Mobile texture formats (was Re: Meeting on Asset Exporter Pipeline) In-Reply-To: <526E2465.1000103@dfki.de> References: <526E2465.1000103@dfki.de> Message-ID: <20131028101050.GA5617@ee.oulu.fi> Seems the common practice on mobile is to just use ETC1 for textures (which is the only format guaranteed by GLES2 / WebGL) and work around the lack of alpha channel by using 2 textures when alpha is needed. On mobile DDS is not too commonly supported and different GPU vendors support mutually incompatible formats (PVRTC for PowerVR chips etc). 
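The two-texture workaround mentioned above can be sketched as follows. This is an illustrative example, not code from any of the engines discussed: the uniform names, the `<name>_alpha` file-naming convention, and the helper function are assumptions. The idea is that since ETC1 stores RGB only, the alpha plane is baked into a second ETC1 texture (as grayscale) and the two are recombined in the fragment shader.

```javascript
// Sketch of the ETC1 alpha workaround: an RGB texture plus a second
// ETC1 texture carrying the alpha plane in its red/luminance channel,
// combined per-fragment. Shader shown as a GLSL ES string (WebGL 1).
const fragmentShader = `
  precision mediump float;
  uniform sampler2D uColorMap;  // ETC1: RGB only, no alpha channel
  uniform sampler2D uAlphaMap;  // ETC1: alpha plane stored as grayscale
  varying vec2 vUv;
  void main() {
    vec3 rgb = texture2D(uColorMap, vUv).rgb;
    float a  = texture2D(uAlphaMap, vUv).r;  // recombine alpha
    gl_FragColor = vec4(rgb, a);
  }
`;

// Helper deciding which compressed textures a material needs, using a
// hypothetical "<name>.etc1" / "<name>_alpha.etc1" naming convention.
function etc1TexturesFor(name, hasAlpha) {
  const textures = [name + ".etc1"];
  if (hasAlpha) textures.push(name + "_alpha.etc1");
  return textures;
}
```

An exporter pipeline would then emit one or two ETC1 files per source texture depending on whether the original had an alpha channel.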
Some links to various developer docs / forums about the ETC1 alpha hack: Unity: http://forum.unity3d.com/threads/73998-DXT-or-PVRTC?s=d35b7e4f1d2c1e389931b6dceb242598&p=474892&viewfull=1#post474892 ARM: http://malideveloper.arm.com/develop-for-mali/sample-code/etcv1-texture-compression-and-alpha-channels/ Flash: http://www.adobe.com/devnet/flashruntimes/articles/introducing-compressed-textures.html From erno at playsign.net Mon Oct 28 11:48:15 2013 From: erno at playsign.net (Erno Kuusela) Date: Mon, 28 Oct 2013 12:48:15 +0200 Subject: [Fiware-miwi] New version of Oulu three.js demo up Message-ID: <20131028104815.GB5617@ee.oulu.fi> Here's the latest version of the Oulu scene experiment with three.js: It has an infinitely repeating city of a single Oulu block that you can drive around. Even though we're using the same block, we're reloading it for each grid square to simulate a big scene consisting of varying geometry. It's using the parallel CTM decoding and streaming loading/unloading of the geometry (not textures). Source can be found at . Erno From toni at playsign.net Wed Oct 30 08:08:45 2013 From: toni at playsign.net (Toni Alatalo) Date: Wed, 30 Oct 2013 09:08:45 +0200 Subject: [Fiware-miwi] XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00)) In-Reply-To: <5269206C.1030207@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> Message-ID: While I don't agree that what you propose would be the correct development process for MIWI -- mostly based on the time-to-market and other business considerations discussed in the earlier thread -- I think it is totally fair for you to ask about the shortcomings in XML3d.js. Unfortunately it is not an easy question for us to answer completely.
This requirements-driven analysis was exactly what I started in that one MIWI reqs doc, which has a subsection for rendering requirements; however, that's mostly a stub still: https://docs.google.com/document/d/1P03BgfEG1Ly2dI2Cs9ODVDmBhH4A438Ynlaxc4vXn1o/edit#heading=h.8wiyysq665rx But in the asset session on Monday, in the great discussion with the xml3d.js developers, we learned 2 new concrete missing parts which the large city model handling / paging scene manager code we have now depends on: 1. Ability to free memory: currently xml3d.js never frees memory 2. Ability to load data in the background (parse meshes in a worker): currently xml3d.js loading can be synchronous only (already known: 3. Support for GPU-supported texture compression (speeds up the loading to mem too)) With three.js, 1. needed a bit of figuring out (there was a demo for it though), 2. was already well implemented in a worker-using CTM loader, and 3. has been there for long and most realXtend usage of three has depended on it (was also discussed within miwi regarding xml3d.js earlier). I am fairly certain that utilizing the already existing and working technology was the correct decision here. The main reason is that it was uncertain whether the whole idea can actually work: can we frequently load & unload city blocks with the UI remaining responsive and without the browsers getting choked? So far it seems even surprisingly good, is totally fluent (though there are some caveats: these are optimized blocks, heavier ones are coming soon, and possibly the textures are reused now (gotta check with Erno whether his texture cache is in use in the on-line version)). This way we learned it quickly -- and have been able to demonstrate it to other companies who work on businesses around the city model.
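Point 1 (the ability to free memory) is commonly implemented with reference counting of shared resources. A minimal sketch, with illustrative names only (this is neither the xml3d.js nor the three.js API):

```javascript
// Minimal reference-counting resource manager sketch for freeing memory.
// Resources (meshes, textures) are shared between scene parts; a resource
// is disposed only when its last user releases it. Illustrative API.
class ResourceManager {
  constructor(disposeFn) {
    this.disposeFn = disposeFn;          // e.g. would free GPU buffers
    this.entries = new Map();            // url -> { resource, refCount }
  }
  acquire(url, loadFn) {
    let entry = this.entries.get(url);
    if (!entry) {
      entry = { resource: loadFn(url), refCount: 0 };
      this.entries.set(url, entry);
    }
    entry.refCount++;
    return entry.resource;
  }
  release(url) {
    const entry = this.entries.get(url);
    if (!entry) return;
    if (--entry.refCount === 0) {
      this.disposeFn(entry.resource);    // last user gone: free the memory
      this.entries.delete(url);
    }
  }
}

// Usage: two city blocks share a texture; it is freed only after both
// blocks have been unloaded by the paging scene manager.
const disposed = [];
const rm = new ResourceManager(res => disposed.push(res.url));
const t1 = rm.acquire("brick.jpg", url => ({ url }));
const t2 = rm.acquire("brick.jpg", url => ({ url }));  // shared, not reloaded
rm.release("brick.jpg");   // still used by the other block
rm.release("brick.jpg");   // refcount hits zero -> disposed
```

Point 2 would then move the `loadFn` work into a Web Worker so parsing does not block the UI thread.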
Those 3 points are simple and easy to add to xml3d.js; the DFKI guys took notes of them and already had ideas how to add support (there is already a resource manager which keeps track of usage counts, so there's a way to free unused resources etc). However, this is not an exhaustive list, only a couple of simple points encountered in the scalability work. And besides technical features there are business and community aspects. But that's another discussion; this post was just to a) remind about the rendering part in the requirements doc and b) inform about those little findings discussed in the asset meet (I added them to the reqs doc too). Cheers, ~Toni On Thu, Oct 24, 2013 at 4:28 PM, Philipp Slusallek < Philipp.Slusallek at dfki.de> wrote: > Hi, > > Well, I think you identified the overlapping quite well :-). The goal of > Miwi always has been to provide the tools for declarative 3D in the Web. > While we agreed that there might be value (to be evaluated) in adding > three.js to XML3D, I am not too happy that we are investing the FI-WARE > resources into circumventing the declarative layer completely. > > When you are saying that there are limitations in XML3D, it would be good > to know what they are explicitly and jointly work on removing them. Only if > that should fail should we be looking at alternatives. > > My suggestion of adding a wrapper around the communication is exactly such > that we can evaluate XML3D against any three.js version that might be > there. There is a lot of novel stuff coming from our side that we will not > be able to integrate across this "fork" in our code base, which is a pity. > And again, we would like to know where limitations are in XML3D -- please > tell us straight away. > > I suggest that we start to work on the shared communication layer using > the KIARA API (part of a FI-WARE GE) and add the code to make the relevant > components work in XML3D. Can someone put together a plan for this?
We are > happy to help where necessary -- but from my point of view we need to do > this as part of the Open Call. > > > Best, > > Philipp > > > On 23.10.2013 09:51, Toni Alatalo wrote: > >> On Oct 23, 2013, at 8:00 AM, Philipp Slusallek >> wrote: >> >> BTW, what is the status with the Rendering discussion (Three.js vs. >>> xml3d.js)? I still have the feeling that we are doing parallel work here >>> that should probably be avoided. >>> >> >> I'm not aware of any overlapping work so far -- then again I'm not fully >> aware what all is up with xml3d.js. >> >> For the rendering for 3D UI, my conclusion from the discussion on this >> list was that it is best to use three.js now for the case of big complex >> fully featured scenes, i.e. typical realXtend worlds (e.g. Ludocraft's >> Circus demo or the Chesapeake Bay from the LVM project, a creative commons >> realXtend example scene). And in miwi in particular for the city model / >> app now. Basically because that's where we already got the basics working >> in the August-September work (and had earlier in realXtend web client >> codebases). That is why we implemented the scalability system on top of >> that too now -- scalability was the only thing missing. >> >> Until yesterday I thought the question was still open regarding XFlow >> integration. Latest information I got was that there was no hardware >> acceleration support for XFlow in XML3d.js either so it seemed worth a >> check whether it's better to implement it for xml3d.js or for three. >> >> Yesterday, however, we learned from Cyberlightning that work on XFlow >> hardware acceleration was already on-going in xml3d.js (I think mostly by >> DFKI so far?). And that it was decided that work within fi-ware now is >> limited to that (and we also understood that the functionality will be >> quite limited by April, or?). >> >> This obviously affects the overall situation.
>> >> At least in an intermediate stage this means that we have two renderers >> for different purposes: three.js for some apps, without XFlow support, and >> xml3d.js for others, with XFlow but other things missing. This is certain >> because that is the case today and probably in the coming weeks at least. >> >> For a good final goal I think we can be clever and make an effective >> roadmap. I don't know yet what it is, though -- certainly to be discussed. >> The requirements doc -- perhaps by continuing work on it -- hopefully helps. >> >> Philipp >>> >> >> ~Toni >> >> >>> On 22.10.2013 23:03, toni at playsign.net wrote: >>> >>>> Just a brief note: we had some interesting preliminary discussion >>>> triggered by how the data schema that Ari O. presented for the POI >>>> system seemed at least partly similar to what the Real-Virtual >>>> interaction work had resulted in too -- and in fact about how the >>>> proposed POI schema was basically a version of the entity-component >>>> model which we've already been using for scenes in realXtend (it is >>>> inspired by / modeled after it, Ari told). So it can be much related to >>>> the Scene API work in the Synchronization GE too. As the action point we >>>> agreed that Ari will organize a specific work session on that. >>>> I was now thinking that it perhaps at least partly leads back to the >>>> question: how do we define (and implement) component types. I.e. what >>>> was mentioned in that entity-system post a few weeks back (with links >>>> to reX IComponent etc.). I mean: if functionality such as POIs and >>>> realworld interaction make sense as somehow resulting in custom data >>>> component types, does it mean that a key part of the framework is a way >>>> for those systems to declare their types .. so that it integrates nicely >>>> for the whole we want? I'm not sure, too tired to think it through now, >>>> but anyhow just wanted to mention that this was one topic that came up.
>>>> I think Web Components is again something to check - as in XML terms reX >>>> Components are xml(3d) elements .. just ones that are usually in a group >>>> (according to the reX entity <-> xml3d group mapping). And Web >>>> Components are about defining & implementing new elements (as Erno >>>> pointed out in a different discussion about xml-html authoring in the >>>> session). >>>> BTW Thanks Kristian for the great comments in that entity system >>>> thread - was really good to learn about the alternative attribute access >>>> syntax and the validation in XML3D(.js). >>>> ~Toni >>>> P.S. for (Christof &) the DFKI folks: I'm sure you understand the >>>> rationale of these Oulu meets -- idea is ofc not to exclude you from the >>>> talks but just makes sense for us to meet live too as we are in the same >>>> city after all etc -- naturally with the DFKI team you also talk there >>>> locally. Perhaps it is a good idea that we make notes so that we can post e.g. >>>> here then (I'm not volunteering though!). Also, the now agreed >>>> bi-weekly setup on Tuesdays luckily works so that we can then summarize >>>> fresh in the global Wed meetings and continue the talks etc. >>>> *From:* Erno Kuusela >>>> *Sent:* Tuesday, October 22, 2013 9:57 AM >>>> *To:* Fiware-miwi >>>> >>>> Kari from CIE offered to host it this time, so see you there at 13:00.
>>>> >>>> Erno >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>> >>> >>> -- >>> >>> ------------------------------------------------------------------------- >>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>> >>> Geschäftsführung: >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>> Dr. Walter Olthoff >>> Vorsitzender des Aufsichtsrats: >>> Prof. Dr. h.c. Hans A. Aukes >>> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>> --------------------------------------------------------------------------- >>> >>> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi >> >> > > -- > > ------------------------------------------------------------------------- > Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH > Trippstadter Strasse 122, D-67663 Kaiserslautern > > Geschäftsführung: > Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > Vorsitzender des Aufsichtsrats: > Prof. Dr. h.c. Hans A. Aukes > > Sitz der Gesellschaft: Kaiserslautern (HRB 2313) > USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 > --------------------------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From toni at playsign.net Wed Oct 30 08:51:24 2013 From: toni at playsign.net (Toni Alatalo) Date: Wed, 30 Oct 2013 09:51:24 +0200 Subject: [Fiware-miwi] 3D UI Usage from other GEs / epics / apps Message-ID: Hi again, new angle here: calling devs *outside* the 3D UI GE: POIs, real-virtual interaction, interface designer, virtual characters, 3d capture, synchronization etc. I think we need to proceed rapidly with integration now and propose that one next step towards that is to analyze the interfaces between 3D UI and other GEs. This is because it seems to be a central part with which many others interface: that is evident in the old 'arch.png' where we analyzed GE/Epic interdependencies: is embedded in section 2 in the Winterthur arch discussion notes which hopefully works for everyone to see, https://docs.google.com/document/d/1Sr4rg44yGxK8jj6yBsayCwfitZTq5Cdyyb_xC25vhhE/edit I propose a process where we go through the usage patterns case by case. For example so that me & Erno visit the other devs to discuss it. I think a good goal for those sessions is to define and plan the implementation of first tests / minimal use cases where the other GEs are used together with 3D UI to show something. I'd like this first pass to happen quickly so that within 2 weeks from the planning the first case is implemented. So if we get to have the sessions within 2 weeks from now, in a month we'd have demos with all parts. Let's organize this so that those who think this applies to their work contact me with private email (to not spam the list), we meet and collect the notes to the wiki and inform this list about that. One question of particular interest to me here is: can the users of 3D UI do what they need well on the entity system level (for example just add and configure mesh components), or do they need deeper access to the 3d scene and rendering (spatial queries, somehow affect the rendering pipeline etc). 
With Tundra we have the Scene API and the (Ogre)World API(s) to support the latter, and also access to the renderer directly. OTOH the entity system level is renderer independent. Synchronization is a special case which requires good two-way integration with 3D UI. Luckily it's something that we, and especially Lasse himself, know already from how it works in Tundra (and in WebTundras). Definitely to be discussed and planned now too of course. So please, if you agree that this is a good process, do raise hands and let's start working on it! We can discuss this in the weekly too if needed. Cheers, ~Toni -------------- next part -------------- An HTML attachment was scrubbed... URL: From mach at zhaw.ch Wed Oct 30 09:20:56 2013 From: mach at zhaw.ch (Christof Marti) Date: Wed, 30 Oct 2013 09:20:56 +0100 Subject: [Fiware-miwi] todays WP13 meeting Message-ID: <4611E05F-9BA4-4226-ADB3-42CE3D989775@zhaw.ch> Hi everybody I had a very bad and exhausting night and spent most of the night vomiting in the bathroom. I have a doctor's appointment this morning at 10:15 CET and am not able to host the meeting. But there are some points which you can also discuss without me, like the preparation for the Oulu meeting on Nov 11. It would be great if somebody could take over. I already started to prepare a minutes document highlighting some points to be done https://docs.google.com/document/d/1fKl-z3iu8LV1N3zjsvmQhUM3Fh3U4XBOJ8ysm55SKUE/edit# (To host the session, one participant has to open the telco using the host access code instead of the participant code. I added it to the connection details section of the above document.) It would be great if you could check the Specification/Architecture part and report open points. We should also close this ASAP. I still owe you the consolidated version of the report document. I will send it ASAP today. If I have other important points I will contact you by email. Thanks. 
Best regards Christof ---- InIT Cloud Computing Lab - ICCLab http://cloudcomp.ch Institute of Applied Information Technology - InIT Zurich University of Applied Sciences - ZHAW School of Engineering P.O.Box, CH-8401 Winterthur Office: TD O3.18, Obere Kirchgasse 2 Phone: +41 58 934 70 63 Mail: mach at zhaw.ch Skype: christof-marti From kristian.sons at dfki.de Wed Oct 30 10:07:05 2013 From: kristian.sons at dfki.de (Kristian Sons) Date: Wed, 30 Oct 2013 10:07:05 +0100 Subject: [Fiware-miwi] XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00)) In-Reply-To: References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> Message-ID: <5270CC39.1070704@dfki.de> Dear Toni, I'm a bit surprised by your conclusion from what was indeed a good discussion on Monday. > 1. Ability to free memory: currently xml3d.js never frees memory This is wrong. We only talked about externally loaded resources being cached for possible reuse. This is NOT related to any other memory management in xml3d.js. The caching strategy for the raw data can be improved and I filed an issue for that: https://github.com/xml3d/xml3d.js/issues/25 Even without this we can easily load and render scenes of large sizes. > 2. Ability to load data in the background (parse meshes in worker): > currently xml3d.js loading can be synchronous only Doing this is the responsibility of the mesh format handler plug-in and is possible today (though not tested). However, we want to provide better integration with the resource management. https://github.com/xml3d/xml3d.js/issues/24 > (already known: 3. 
Support for GPU-supported texture compression > (speeds up the loading to mem too)) https://github.com/xml3d/xml3d.js/issues/23 Kristian -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realität Campus, Geb. D 3 2, Raum 0.77 66123 Saarbrücken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775-2235 kristian.sons at dfki.de http://www.xml3d.org Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________ From toni at playsign.net Wed Oct 30 10:12:54 2013 From: toni at playsign.net (Toni Alatalo) Date: Wed, 30 Oct 2013 11:12:54 +0200 Subject: [Fiware-miwi] XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00)) In-Reply-To: <5270CC39.1070704@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> <5270CC39.1070704@dfki.de> Message-ID: On 30 Oct 2013, at 11:07, Kristian Sons wrote: > I'm a bit surprised by your conclusion from what was indeed a good discussion on Monday. Sorry, I didn't phrase carefully enough: >> 1. Ability to free memory: currently xml3d.js never frees memory > This is wrong. We only talked about externally loaded resources being cached for possible reuse. This is NOT related to any other memory management in xml3d.js. The caching strategy for the raw data can be improved and I filed an issue for that: > https://github.com/xml3d/xml3d.js/issues/25 Yes, this referred to externally loaded resources only – 
as that's what the city model blocks have, right? > Even without this we can easily load and render scenes of large sizes. But only ones that fit in memory at once, no? The new city model does not. >> 2. Ability to load data in the background (parse meshes in worker): currently xml3d.js loading can be synchronous only > Doing this is the responsibility of the mesh format handler plug-in and is possible today (though not tested). However, we want to provide better integration with the resource management. > https://github.com/xml3d/xml3d.js/issues/24 One of your guys did say that it's not possible now but requires a small change – perhaps I misunderstood. In any case it is simple and certainly solvable, like all these points. Thanks for the clarifications! > Kristian ~Toni > -- > _______________________________________________________________________________ > > Kristian Sons > Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI > Agenten und Simulierte Realität > Campus, Geb. D 3 2, Raum 0.77 > 66123 Saarbrücken, Germany > > Phone: +49 681 85775-3833 > Phone: +49 681 302-3833 > Fax: +49 681 85775-2235 > kristian.sons at dfki.de > http://www.xml3d.org > > Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > > Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. 
Aukes > Amtsgericht Kaiserslautern, HRB 2313 > _______________________________________________________________________________ > From kristian.sons at dfki.de Wed Oct 30 10:35:26 2013 From: kristian.sons at dfki.de (Kristian Sons) Date: Wed, 30 Oct 2013 10:35:26 +0100 Subject: [Fiware-miwi] XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00)) In-Reply-To: References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> <5270CC39.1070704@dfki.de> Message-ID: <5270D2DE.8080608@dfki.de> Am 30.10.2013 10:12, schrieb Toni Alatalo: > Yes, this referred to externally loaded resources only – as that's what the city model blocks have, right? Yes. And only to the raw response data from the XHR. > >> >Even without this we can easily load and render scenes of large sizes. > But only ones that fit in memory at once, no? The new city model does not. How large is your city model, anyway? Kristian -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realität Campus, Geb. D 3 2, Raum 0.77 66123 Saarbrücken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775-2235 kristian.sons at dfki.de http://www.xml3d.org Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. 
Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________ From jonne at adminotech.com Wed Oct 30 14:31:51 2013 From: jonne at adminotech.com (Jonne Nauha) Date: Wed, 30 Oct 2013 15:31:51 +0200 Subject: [Fiware-miwi] 3D UI Usage from other GEs / epics / apps In-Reply-To: References: Message-ID: We are going to need a bit more structure than just a "3D client/UI" object. I think mimicking the Tundra core APIs (the ones we need at least) is a good choice. Something like what I've scribbled below. The rest of the GEs will then interact in some manner with these core APIs, in most cases with renderer, scene and ui.

var client =
{
    network : Object,  // Network sync, connect, disconnect etc. functionality.
                       // Implemented by scene sync GE (Ludocraft).

    renderer : Object, // API for 3D rendering engine access, creating scene nodes,
                       // updating their transforms, raycasting etc.
                       // Implemented by 3D UI (Playsign).

    scene : Object,    // API for accessing the Entity-Component-Attribute model.
                       // Implemented by ???

    asset : Object,    // Not strictly necessary for xml3d as it does asset requests
                       // for us, but for three.js this is pretty much needed.
                       // Implemented by ???

    ui : Object,       // API to add/remove widgets correctly on top of the 3D
                       // rendering canvas element, window resize events etc.
                       // Implemented by 2D/Input GE (Adminotech).

    input : Object     // API to hook to input events occurring on top of the 3D scene.
                       // Implemented by 2D/Input GE (Adminotech).
};

Best regards, Jonne Nauha Meshmoon developer at Adminotech Ltd. www.meshmoon.com On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo wrote: > Hi again, > > new angle here: calling devs *outside* the 3D UI GE: POIs, real-virtual > interaction, interface designer, virtual characters, 3d capture, > synchronization etc. 
> > I think we need to proceed rapidly with integration now and propose > that one next step towards that is to analyze the interfaces between 3D UI > and other GEs. This is because it seems to be a central part with which > many others interface: that is evident in the old 'arch.png' where we > analyzed GE/Epic interdependencies: is embedded in section 2 in the > Winterthur arch discussion notes which hopefully works for everyone to see, > https://docs.google.com/document/d/1Sr4rg44yGxK8jj6yBsayCwfitZTq5Cdyyb_xC25vhhE/edit > > I propose a process where we go through the usage patterns case by case. > For example so that me & Erno visit the other devs to discuss it. I think a > good goal for those sessions is to define and plan the implementation of > first tests / minimal use cases where the other GEs are used together with > 3D UI to show something. I'd like this first pass to happen quickly so that > within 2 weeks from the planning the first case is implemented. So if we > get to have the sessions within 2 weeks from now, in a month we'd have > demos with all parts. > > Let's organize this so that those who think this applies to their work > contact me with private email (to not spam the list), we meet and collect > the notes to the wiki and inform this list about that. > > One question of particular interest to me here is: can the users of 3D UI > do what they need well on the entity system level (for example just add and > configure mesh components), or do they need deeper access to the 3d scene > and rendering (spatial queries, somehow affect the rendering pipeline etc). > With Tundra we have the Scene API and the (Ogre)World API(s) to support the > latter, and also access to the renderer directly. OTOH the entity system > level is renderer independent. > > Synchronization is a special case which requires good two-way integration > with 3D UI. 
Luckily it's something that we, and especially Lasse himself, > know already from how it works in Tundra (and in WebTundras). Definitely > to be discussed and planned now too of course. > > So please if you agree that this is a good process do raise hands and > let's start working on it! We can discuss this in the weekly too if needed. > > Cheers, > ~Toni > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonne at adminotech.com Wed Oct 30 14:45:20 2013 From: jonne at adminotech.com (Jonne Nauha) Date: Wed, 30 Oct 2013 15:45:20 +0200 Subject: [Fiware-miwi] XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00)) In-Reply-To: <5270D2DE.8080608@dfki.de> References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> <5270CC39.1070704@dfki.de> <5270D2DE.8080608@dfki.de> Message-ID: Yea, I understood that the CPU-side mem is not freed, e.g. the JS-side data from the XHR response. Did you have GPU mem unloading when xml3d mesh nodes are removed from the DOM? You had the automatic detection of how many things are still referencing a particular asset, so when the last one is removed it could be unloaded from the GPU. It could also be nice if there were an API to unload assets from code, not by finding all the nodes that use it and removing them from the DOM. The DOM nodes could be left alone but just the mesh under them unloaded; it would then just not render anything until the mesh is loaded back. Does this make any sense in the context of xml3d? Do you have any time estimates for custom loaders supporting async loading? What about compressed textures? 
These are probably the significant things if someone wants to try prototyping something with xml3d + the Tundra scene model. We need custom Ogre asset loaders and we need compressed textures for efficiency in big scenes. I'll assume the CPU-side mem releasing is a non-issue and fixed trivially. Best regards, Jonne Nauha Meshmoon developer at Adminotech Ltd. www.meshmoon.com On Wed, Oct 30, 2013 at 11:35 AM, Kristian Sons wrote: > Am 30.10.2013 10:12, schrieb Toni Alatalo: > > Yes, this referred to externally loaded resources only – as that's what the >> city model blocks have, right? >> > Yes. And only to the raw response data from the XHR. > > > >> >Even without this we can easily load and render scenes of large sizes. >>> >> But only ones that fit in memory at once, no? The new city model does not. >> > > How large is your city model, anyway? > > Kristian > > > -- > _______________________________________________________________________________ > > Kristian Sons > Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI > Agenten und Simulierte Realität > Campus, Geb. D 3 2, Raum 0.77 > 66123 Saarbrücken, Germany > > Phone: +49 681 85775-3833 > Phone: +49 681 302-3833 > Fax: +49 681 85775-2235 > kristian.sons at dfki.de > http://www.xml3d.org > > Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) > Dr. Walter Olthoff > > Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes > Amtsgericht Kaiserslautern, HRB 2313 > _______________________________________________________________________________ > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -------------- next part -------------- An HTML attachment was scrubbed... 
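[Editor's note: the reference-counted asset unloading discussed in this thread could be sketched roughly as below. All names here (AssetCache, acquire, release, unloadFromGpu) are hypothetical illustrations, not the actual xml3d.js API.]

```javascript
// Sketch of a reference-counted asset cache: when the last DOM node
// referencing an external resource is removed, the GPU-side data is freed,
// while the raw data could be re-fetched or re-uploaded on the next acquire.
class AssetCache {
  constructor(unloadFromGpu) {
    this.refs = new Map();              // resource URL -> reference count
    this.unloadFromGpu = unloadFromGpu; // callback that frees GPU buffers
  }

  // Called when a mesh node referencing `url` is added to the DOM.
  acquire(url) {
    this.refs.set(url, (this.refs.get(url) || 0) + 1);
  }

  // Called when such a node is removed from the DOM.
  release(url) {
    if (!this.refs.has(url)) return;    // unknown resource, nothing to do
    const n = this.refs.get(url) - 1;
    if (n > 0) {
      this.refs.set(url, n);            // still referenced elsewhere
    } else {
      this.refs.delete(url);            // last reference gone: free GPU memory
      this.unloadFromGpu(url);
    }
  }
}

// Example: two mesh nodes reference the same external city-block resource.
const cache = new AssetCache(url => console.log('freeing GPU buffers for', url));
cache.acquire('city-block.xml');
cache.acquire('city-block.xml');
cache.release('city-block.xml'); // still referenced, nothing freed
cache.release('city-block.xml'); // last reference gone, GPU memory freed
```

An explicit unload API as asked for above would simply call release (or unloadFromGpu directly) without touching the DOM nodes.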
URL: From Philipp.Slusallek at dfki.de Wed Oct 30 21:46:00 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 30 Oct 2013 21:46:00 +0100 Subject: [Fiware-miwi] XML3D versus three.js (was Re: 13:00 meeting location: CIE (Re: Oulu meet today 13:00)) In-Reply-To: References: <20131018141554.GA62563@ee.oulu.fi> <20131022031202.GB62563@ee.oulu.fi> <20131022065737.GD62563@ee.oulu.fi> <20131022213628.6BD0218003E@dionysos.netplaza.fi> <526757F4.60006@dfki.de> <19841C55-EF46-4D01-AD1F-633812458545@playsign.net> <5269206C.1030207@dfki.de> <5270CC39.1070704@dfki.de> <5270D2DE.8080608@dfki.de> Message-ID: <52717008.1070403@dfki.de> Hi, Good discussion. Please keep this up. Identifying issues people have, or assume they would have, is the best way to make progress in the group as a whole. It sounds like eliminating these issues should be fairly straightforward. Best, Philipp Am 30.10.2013 14:45, schrieb Jonne Nauha: > Yea, I understood that the CPU-side mem is not freed, e.g. the JS-side data > from the XHR response. Did you have GPU mem unloading when xml3d mesh nodes > are removed from the DOM? You had the automatic detection of how many things are > still referencing a particular asset, so when the last one is removed > it could be unloaded from the GPU. > > It could also be nice if there were an API to unload assets from code, not > by finding all the nodes that use it and removing them from the DOM. The > DOM nodes could be left alone but just the mesh under them unloaded; it > would then just not render anything until the mesh is loaded back. Does > this make any sense in the context of xml3d? > > Do you have any time estimates for custom loaders supporting async > loading? What about compressed textures? These are probably the > significant things if someone wants to try prototyping something with > xml3d + the Tundra scene model. We need custom Ogre asset loaders and we > need compressed textures for efficiency in big scenes. 
I'll assume the > CPU side mem releasing is a non issue and fixed trivially. > > Best regards, > Jonne Nauha > Meshmoon developer at Adminotech Ltd. > www.meshmoon.com > > > On Wed, Oct 30, 2013 at 11:35 AM, Kristian Sons > wrote: > > Am 30.10.2013 10:12, schrieb Toni Alatalo: > > Yes this referred to external loaded resources only ? as that?s > what the city model blocks have, right? > > Yes. And only to the raw response data from the XHR. > > > > >Even without this we can easily load and render scenes of > large sizes. > > But only ones that fit in memory at once, no? The new city model > does not. > > > How large is your city model, anyway? > > Kristian > > > -- > ___________________________________________________________________________________ > > Kristian Sons > Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH, DFKI > Agenten und Simulierte Realit?t > Campus, Geb. D 3 2, Raum 0.77 > 66123 Saarbr?cken, Germany > > Phone: +49 681 85775-3833 > Phone: +49 681 302-3833 > Fax: +49 681 85775?2235 > kristian.sons at dfki.de > http://www.xml3d.org > > Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster > (Vorsitzender) > Dr. Walter Olthoff > > Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes > Amtsgericht Kaiserslautern, HRB 2313 > ___________________________________________________________________________________ > > _________________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/__listinfo/fiware-miwi > > > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. 
Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Wed Oct 30 22:34:19 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 30 Oct 2013 22:34:19 +0100 Subject: [Fiware-miwi] 3D UI Usage from other GEs / epics / apps In-Reply-To: References: Message-ID: <52717B5B.80903@dfki.de> Hi, Thanks for taking the initiative here. I fully agree, and the API is the main thing that is still missing from our documentation as well. So defining them soon is obviously important. BTW, have we made the current GE documentation available to FI-WARE yet? Have they sent feedback? I do not remember seeing anything. So if we could come up with good suggestions for interfaces for the GEs, this would be very welcome. We can then jointly discuss them in the call or per email. Best, Philipp Am 30.10.2013 08:51, schrieb Toni Alatalo: > Hi again, > new angle here: calling devs *outside* the 3D UI GE: POIs, real-virtual > interaction, interface designer, virtual characters, 3d capture, > synchronization etc. > I think we need to proceed rapidly with integration now and propose > that one next step towards that is to analyze the interfaces between 3D > UI and other GEs. 
This is because it seems to be a central part with > which many others interface: that is evident in the old 'arch.png' where > we analyzed GE/Epic interdependencies: is embedded in section 2 in the > Winterthur arch discussion notes which hopefully works for everyone to > see, > https://docs.google.com/document/d/1Sr4rg44yGxK8jj6yBsayCwfitZTq5Cdyyb_xC25vhhE/edit > I propose a process where we go through the usage patterns case by case. > For example so that me & Erno visit the other devs to discuss it. I > think a good goal for those sessions is to define and plan the > implementation of first tests / minimal use cases where the other GEs > are used together with 3D UI to show something. I'd like this first pass > to happen quickly so that within 2 weeks from the planning the first > case is implemented. So if we get to have the sessions within 2 weeks > from now, in a month we'd have demos with all parts. > Let's organize this so that those who think this applies to their work > contact me with private email (to not spam the list), we meet and > collect the notes to the wiki and inform this list about that. > One question of particular interest to me here is: can the users of 3D > UI do what they need well on the entity system level (for example just > add and configure mesh components), or do they need deeper access to the > 3d scene and rendering (spatial queries, somehow affect the rendering > pipeline etc). With Tundra we have the Scene API and the (Ogre)World > API(s) to support the latter, and also access to the renderer directly. > OTOH the entity system level is renderer independent. > Synchronization is a special case which requires good two-way > integration with 3D UI. Luckily it's something that we and especially > Lasse himself knows already from how it works in Tundra (and in > WebTundras). Definitely to be discussed and planned now too of course. > So please if you agree that this is a good process do raise hands and > let's start working on it! 
We can discuss this in the weekly too if needed. > Cheers, > ~Toni > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From Philipp.Slusallek at dfki.de Wed Oct 30 22:35:30 2013 From: Philipp.Slusallek at dfki.de (Philipp Slusallek) Date: Wed, 30 Oct 2013 22:35:30 +0100 Subject: [Fiware-miwi] 3D UI Usage from other GEs / epics / apps In-Reply-To: References: Message-ID: <52717BA2.807@dfki.de> Hi Jonne, all, I am not sure that applying the Tundra API in the Web context is really the right approach. One of the key differences is that we already have a central "scene" data structure and it already handles rendering and input (DOM events), and other aspects. Also, an API-oriented approach may not be the best option in this declarative context either (even though I understand that it feels more natural when coming from C++; I had the same issues). So let me be a bit more specific: -- Network: So, yes, we need a network module. It's not something that "lives" in the DOM but rather watches it and sends updates to the server to achieve sync. -- Renderer: Why do we need an object here? It's part of the DOM model. The only aspect is that we may want to set renderer-specific parameters. 
We currently do so through the DOM element, which seems like a good approach. The issue to be discussed here is what the advantages of a three.js-based renderer would be, and to implement it if really needed. -- Scene: This can be done in the DOM nicely, and with Web Components it is even more elegant. The scene objects are simply part of the same DOM, but only some of them get rendered. I am not even sure what we need here in addition to the DOM and suitable mappings for the components. -- Asset: As you say, this is already built into the XML3D DOM. I see it a bit like the network system in that it watches missing resources in the DOM (plus attributes on priority and such?) and implements a sort of scheduler that executes requests in some priority order. A version that only loads missing resources is already available; one that goes even further and deletes unneeded resources could probably be ported from your resource manager. -- UI: That is why we are building on top of HTML, which is a pretty good UI layer in many respects. We have the 2D-UI GE to look into missing functionality. -- Input: This is also already built in, as events traverse the DOM. It is widely used in all web-based UIs and has proven quite useful there. Here we can nicely combine it with the 3D scene model, where events are not only delivered to the 3D graphics elements but can be handled by the elements or components even before that. But maybe I am misunderstanding you here? Best, Philipp Am 30.10.2013 14:31, schrieb Jonne Nauha: > var client = > { > network : Object, // Network sync, connect, disconnect etc. > functionality. > // Implemented by scene sync GE (Ludocraft). > > renderer : Object, // API for 3D rendering engine access, creating > scene nodes, updating their transforms, raycasting etc. > // Implemented by 3D UI (Playsign). > > scene : Object, // API for accessing the > Entity-Component-Attribute model. > // Implemented by ??? 
> > asset : Object, // Not strictly necessary for xml3d as it does > asset requests for us, but for three.js this is pretty much needed. > // Implemented by ??? > > ui : Object, // API to add/remove widgets correctly on top > of the 3D rendering canvas element, window resize events etc. > // Implemented by 2D/Input GE (Adminotech). > > input : Object // API to hook to input events occurring on top > of the 3D scene. > // Implemented by 2D/Input GE (Adminotech). > }; > > > Best regards, > Jonne Nauha > Meshmoon developer at Adminotech Ltd. > www.meshmoon.com > > > On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo > wrote: > > Hi again, > new angle here: calling devs *outside* the 3D UI GE: POIs, > real-virtual interaction, interface designer, virtual characters, 3d > capture, synchronization etc. > I think we need to proceed rapidly with integration now and propose > that one next step towards that is to analyze the interfaces between > 3D UI and other GEs. This is because it seems to be a central part > with which many others interface: that is evident in the old > 'arch.png' where we analyzed GE/Epic interdependencies: is embedded > in section 2 in the Winterthur arch discussion notes which hopefully > works for everyone to see, > https://docs.google.com/document/d/1Sr4rg44yGxK8jj6yBsayCwfitZTq5Cdyyb_xC25vhhE/edit > I propose a process where we go through the usage patterns case by > case. For example so that me & Erno visit the other devs to discuss > it. I think a good goal for those sessions is to define and plan the > implementation of first tests / minimal use cases where the other > GEs are used together with 3D UI to show something. I'd like this > first pass to happen quickly so that within 2 weeks from the > planning the first case is implemented. So if we get to have the > sessions within 2 weeks from now, in a month we'd have demos with > all parts. 
> Let's organize this so that those who think this applies to their > work contact me with private email (to not spam the list), we meet > and collect the notes to the wiki and inform this list about that. > One question of particular interest to me here is: can the users of > 3D UI do what they need well on the entity system level (for example > just add and configure mesh components), or do they need deeper > access to the 3d scene and rendering (spatial queries, somehow > affect the rendering pipeline etc). With Tundra we have the > Scene API and the (Ogre)World API(s) to support the latter, and also > access to the renderer directly. OTOH the entity system level is > renderer independent. > Synchronization is a special case which requires good two-way > integration with 3D UI. Luckily it's something that we and > especially Lasse himself knows already from how it works in Tundra > (and in WebTundras). Definitely to be discussed and planned now too > of course. > So please if you agree that this is a good process do raise hands > and let's start working on it! We can discuss this in the weekly too > if needed. > Cheers, > ~Toni > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > > > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > -- ------------------------------------------------------------------------- Deutsches Forschungszentrum f?r K?nstliche Intelligenz (DFKI) GmbH Trippstadter Strasse 122, D-67663 Kaiserslautern Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. 
Aukes Sitz der Gesellschaft: Kaiserslautern (HRB 2313) USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 --------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: slusallek.vcf Type: text/x-vcard Size: 441 bytes Desc: not available URL: From jonne at adminotech.com Thu Oct 31 00:03:57 2013 From: jonne at adminotech.com (Jonne Nauha) Date: Thu, 31 Oct 2013 01:03:57 +0200 Subject: [Fiware-miwi] 3D UI Usage from other GEs / epics / apps In-Reply-To: <52717BA2.807@dfki.de> References: <52717BA2.807@dfki.de> Message-ID: Well. My understanding is that we are building an easily programmable 3D SDK for the web. What you are suggesting is that we offload everything to the DOM and to XML3D and forget about designing a nice JavaScript API? If we are talking about the raw DOM API, I don't really consider it to be nice at all. I don't know anyone who would make a serious web application in JavaScript without using jQuery or something similar to access and manipulate the DOM. jQuery has a nicer API to do the same things that can be done with the standard functions; it just builds around them so developers have an easier time: they can be more productive and write less, more understandable code. This is ofc only my view, and I am not implementing the 3D GE, but whatever the renderer might be, it should be encapsulated as much as we can behind the "renderer" object. What happens under the hood should be an implementation detail. Using XML3D as the renderer is fine, but imo all code that wants to do rendering in WebTundra should not need to know exactly how to interact with XML3D. You will use our scene-entity-component API with avatarEntity.placeable.setPosition(10,-10,0); in your application code and the right thing happens in the selected renderer. XML3D is also a JavaScript library; I'm assuming that you provide functions and objects that can be used to develop XML3D apps.
Or is it really just auto-monitoring the DOM for XML3D nodes and acting on them? Shouldn't the even more elaborate WebTundra SDK have some kind of structure and an easy-to-understand, convenient API? XML3D is after all a renderer (if I've understood your scope correctly). WebTundra should be a lot more complete SDK, like Tundra is, to develop 3D web applications. If we think that XML3D (or the DOM that XML3D acts on when it is manipulated) is already this perfect API, I'm not sure what we are even trying to accomplish here. If we are not building a nice-to-use 3D SDK, what's the target here? I just would not like to see a path where XML3D is tied to all the aspects of WebTundra, and WebTundra becomes essentially just a minimal network sync extension to XML3D. I think we all from realXtend want to develop a Tundra-style SDK for the web for developers to use. This would also include hiding the rendering implementation behind the renderer object, and if it's done well enough there can even be multiple renderers in the future. This is one of the big problems in the desktop Tundra: we did not abstract the Ogre rendering engine enough. It crept all over the codebase, and now it's impossible to make another renderer implementation without breaking everything while we refactor 50% of the codebase. I'd like for us not to make the same mistake on the web now that we actually have a chance to start from scratch again. Sure, we need to utilize the power of the web technologies and the DOM. But I don't think that means we should not build utilities around the 3D aspect of it to make it nice to use. Best regards, Jonne Nauha Meshmoon developer at Adminotech Ltd. www.meshmoon.com On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < Philipp.Slusallek at dfki.de> wrote: > Hi Jonne, all, > > I am not sure that applying the Tundra API in the Web context is really the > right approach.
One of the key differences is that we already have a > central "scene" data structure and it already handles rendering and input > (DOM events), and other aspects. Also, an API-oriented approach may not be > the best option in this declarative context either (even though I > understand that it feels more natural when coming from C++, I had the same > issues). > > So let me be a bit more specific: > > -- Network: So, yes, we need a network module. It's not something that > "lives" in the DOM but rather watches it and sends updates to the server to > achieve sync. > > -- Renderer: Why do we need an object here? It's part of the DOM model. The > only aspect is that we may want to set renderer-specific parameters. We > currently do so through the DOM element, which seems like a good > approach. The issue to be discussed here is what would be the advantages > of a three.js based renderer; we can implement it if really needed. > > -- Scene: This can be done in the DOM nicely, and with WebComponents it's > even more elegant. The scene objects are simply part of the same DOM, but > only some of them get rendered. I am not even sure what we need here in > addition to the DOM and suitable mappings for the components. > > -- Asset: As you say, this is already built into the XML3D DOM. I see it a > bit like the network system in that it watches for missing resources in the DOM > (plus attributes on priority and such?) and implements a sort of scheduler that > executes requests in some priority order. A version that only loads missing > resources is already available; one that goes even further and deletes > unneeded resources could probably be ported from your resource manager. > > -- UI: That is why we are building on top of HTML, which is a pretty good > UI layer in many respects. We have the 2D-UI GE to look into missing > functionality. > > -- Input: This also is already built in, as events traverse the > DOM.
It is widely used in all web-based UIs and has proven quite useful > there. Here we can nicely combine it with the 3D scene model, where events > are not only delivered to the 3D graphics elements but can be handled by > the elements or components even before that. > > But maybe I am misunderstanding you here? > > > Best, > > Philipp > > > Am 30.10.2013 14:31, schrieb Jonne Nauha: > >> var client = >> { >> network : Object, // Network sync, connect, disconnect etc. >> functionality. >> // Implemented by scene sync GE (Ludocraft). >> >> renderer : Object, // API for 3D rendering engine access, creating >> scene nodes, updating their transforms, raycasting etc. >> // Implemented by 3D UI (Playsign). >> >> scene : Object, // API for accessing the >> Entity-Component-Attribute model. >> // Implemented by ??? >> >> asset : Object, // Not strictly necessary for xml3d as it does >> asset requests for us, but for three.js this is pretty much needed. >> // Implemented by ??? >> >> ui : Object, // API to add/remove widgets correctly on top >> of the 3D rendering canvas element, window resize events etc. >> // Implemented by 2D/Input GE (Adminotech). >> >> input : Object // API to hook to input events occurring on top >> of the 3D scene. >> // Implemented by 2D/Input GE (Adminotech). >> }; >> >> >> Best regards, >> Jonne Nauha >> Meshmoon developer at Adminotech Ltd. >> www.meshmoon.com >> >> >> >> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo > > wrote: >> >> Hi again, >> new angle here: calling devs *outside* the 3D UI GE: POIs, >> real-virtual interaction, interface designer, virtual characters, 3d >> capture, synchronization etc. >> I think we need to proceed rapidly with integration now and propose >> that one next step towards that is to analyze the interfaces between >> 3D UI and other GEs.
>> [...]
>> So please if you agree that this is a good process do raise hands >> and let's start working on it! We can discuss this in the weekly too >> if needed. >> Cheers, >> ~Toni >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni at playsign.net Thu Oct 31 08:16:21 2013 From: toni at playsign.net (Toni Alatalo) Date: Thu, 31 Oct 2013 09:16:21 +0200 Subject: [Fiware-miwi] 3D UI Usage from other GEs / epics / apps In-Reply-To: References: Message-ID: Quick comments to the responsibilities & status of the parts here - the DOM & JS APIs talk in later posts I need to digest still (I think Jonne has misconceptions there but also very important points) - but now to this: On 30 Oct 2013, at 15:31, Jonne Nauha wrote: > We are going to need a bit more structure than just a "3D client/UI" object. Yes, certainly the client is not just the 3D UI - we are just focusing on that in this particular effort (meet with users of 3D UI) as it's our responsibility. (Note: I think it's clear enough to use the term "client"
even though it can be used as standalone without a synchronisation server - will check how this is in the glossary). > I think mimicking the Tundra core APIs (the ones we need at least) is a good choice. It is one of the 3 APIs I've proposed for analysis in the requirements doc, along with xml3d.js and three.js, in "3. Requirements breakdown - 3.1. Requirements for application functionality development - 3.1.1. Existing 3d application APIs" in https://docs.google.com/document/d/1P03BgfEG1Ly2dI2Cs9ODVDmBhH4A438Ynlaxc4vXn1o/edit?pli=1#heading=h.us4ergchk5k0 > Something like what I've scribbled below. The rest of the GEs will then interact in some kind of manner with these core APIs, in most cases with renderer, scene and ui. > renderer : Object, // API for 3D rendering engine access, creating scene nodes, updating their transforms, raycasting etc. > // Implemented by 3D UI (Playsign). I think typically scene nodes are created with the scene api (note: talking about in-browser in-memory stuff here, not to be confused with the server-side REST SceneAPI) - like in Tundra - not directly via the renderer. Same for transforms - also in Tundra you manipulate the placeable component; you don't access the renderer to move an object. I think the renderer API is needed for these kinds of things: 1. Custom drawing, either in a component's implementation or just with direct drawing commands from application code. For example a component that is a procedural tree - or volumetric terrain. Or some custom code to draw aiming helpers in a shooting game or so, for example some kind of curves. 2. Complex scene queries. Typically these might be better done via the scene, though. But perhaps something that goes really deep in the renderer, for example to query which areas are in shadow or so? Or visibility checks? 3. Things that need to hook into the rendering pipeline - perhaps for things like Render-To-Texture or post-process compositing. Perhaps how XFlow integrates with rendering?
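The split described above - ordinary application code talking to the scene/entity API, with the renderer reserved for deep queries such as raycasts - can be sketched roughly as follows. All names here are hypothetical illustrations, not the actual WebTundra API; a real renderer backend (xml3d.js, three.js) would sit behind the `renderer` facade:

```javascript
// Scene-level API: entities with components, renderer-independent.
function makeScene() {
  const entities = [];
  return {
    createEntity(name) {
      const entity = {
        name,
        placeable: {
          position: { x: 0, y: 0, z: 0 },
          setPosition(x, y, z) { this.position = { x, y, z }; },
        },
      };
      entities.push(entity);
      return entity;
    },
    all() { return entities.slice(); },
  };
}

// Renderer facade: only deep 3D queries live here, not scene manipulation.
function makeRenderer(scene) {
  return {
    // Toy raycast along +x from `origin`: returns entities ahead of the
    // origin and within `tolerance` of the ray in y/z, nearest first.
    raycastX(origin, tolerance = 0.5) {
      return scene.all()
        .filter(e => e.placeable.position.x >= origin.x &&
          Math.abs(e.placeable.position.y - origin.y) <= tolerance &&
          Math.abs(e.placeable.position.z - origin.z) <= tolerance)
        .sort((a, b) => a.placeable.position.x - b.placeable.position.x);
    },
  };
}

const scene = makeScene();
const renderer = makeRenderer(scene);
const avatar = scene.createEntity("avatar");
avatar.placeable.setPosition(10, 0, 0);          // app code uses the scene API
const hits = renderer.raycastX({ x: 0, y: 0, z: 0 }); // deep query via renderer
console.log(hits.map(e => e.name));              // names of entities hit
```

The point of the sketch is only the separation of concerns: application code never touches the renderer to move an object, so the renderer behind the facade stays swappable.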
Hm, now I think I actually answered Philipp's later question about this (didn't mean to do that yet :p) > scene : Object, // API for accessing the Entity-Component-Attribute model. > // Implemented by ??? Yes, the unclarity about the responsibility here is why I started the thread on "the entity system" some weeks ago. I think the situation is not catastrophic as we already have 3 implementations of that: 2 in the "WebTundras" (Chiru-WebClient and WebRocket) and also xml3d.js. Playsign can worry about this at least for now (say, the coming month). We are not writing a renderer from scratch as there are many good ones out there already, so we can spend resources on this too. Let's see whether we can just adopt one of those 3 systems for MIWI - and hence probably as realXtend's future official WebTundra - or whether we need to write a 4th from scratch for some reason. There are however many complex issues with rendering itself too, for example the asset pipeline (which that meeting was for) and the material system, which we haven't really addressed at all yet. So we do need to be able to focus in peace on that as well. As we learned again on Monday, though, the DFKI folks are continuously advancing the state of the art with the rendering, for example with the upcoming ability to write custom shaders in Javascript etc. - and they've thought about complex material systems for long - so we have great help there. We are definitely open for participation here, and for someone else taking the lead on this if it fits on their plate. We already started good talks with Lasse yesterday and he'll actually check how Chiru-WebClient's entity system implementation looks from the synchronisation GE's point of view. Again, one particular question here is the "IComponent": how do we define new components, aka XML elements? As mentioned before, WebComponents may be relevant here, so that is to be analysed.
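As a rough illustration of the "how do we define new components" question, an IComponent-style registry could look like the following plain-JS sketch. All names are invented for illustration (not the Tundra or WebTundra API); whether WebComponents or custom XML elements should play this role is exactly the open question in the thread:

```javascript
// Hypothetical component-type registry: a component type is declared as a
// name plus its attributes with default values, much like declaring a new
// XML element with default attribute values.
const componentTypes = new Map();

function registerComponent(typeName, attributeDefaults) {
  componentTypes.set(typeName, attributeDefaults);
}

function createComponent(typeName) {
  const defaults = componentTypes.get(typeName);
  if (!defaults) throw new Error("unknown component type: " + typeName);
  // Each instance gets its own copy of the declared attributes.
  return { typeName, attributes: { ...defaults } };
}

// Declaring a new component type is then pure data:
registerComponent("mesh", { meshRef: "", castShadows: false });

const mesh = createComponent("mesh");
mesh.attributes.meshRef = "assets/avatar.mesh";
console.log(mesh.typeName, mesh.attributes);
```

In a WebComponents-based design the registry call would roughly correspond to registering a custom element, with the attribute defaults expressed as element attributes.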
Adminotech is already using WebComponents in the 2D UI work, so perhaps you could help with understanding this: would the approach used there work for defining reX components? Also Philipp's comments about KIARA, which has a way to define things, are related. > asset : Object, // Not strictly necessary for xml3d as it does asset requests for us, but for three.js this is pretty much needed. > // Implemented by ??? I think this belongs to the 3D UI, so it falls under Playsign's responsibility. Again there are the existing implementations in the WebTundras and xml3d.js - and obviously the browser does much of the work, but I think we still need to track dependencies in the loading and keep track of usages for releasing resources etc. (the reason why you have the asset system in WebRocket and the resource manager in xml3d.js). Thanks for the draft! Code speaks louder than words (that's why I wrote the txml <-> xml3d converter), and at least for me this kind of code-like API def was very clear and helpful to read :) ~Toni > > ui : Object, // API to add/remove widgets correctly on top of the 3D rendering canvas element, window resize events etc. > // Implemented by 2D/Input GE (Adminotech). > > input : Object // API to hook to input events occurring on top of the 3D scene. > // Implemented by 2D/Input GE (Adminotech). > }; > > > Best regards, > Jonne Nauha > Meshmoon developer at Adminotech Ltd. > www.meshmoon.com > > > On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo wrote: > Hi again, > > new angle here: calling devs *outside* the 3D UI GE: POIs, real-virtual interaction, interface designer, virtual characters, 3d capture, synchronization etc. > > I think we need to proceed rapidly with integration now and propose that one next step towards that is to analyze the interfaces between 3D UI and other GEs.
> [...]
We can discuss this in the weekly too if needed. > > Cheers, > ~Toni > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From torsten.spieldenner at dfki.de Thu Oct 31 10:23:23 2013 From: torsten.spieldenner at dfki.de (Torsten Spieldenner) Date: Thu, 31 Oct 2013 10:23:23 +0100 Subject: [Fiware-miwi] 3D UI Usage from other GEs / epics / apps In-Reply-To: References: <52717BA2.807@dfki.de> Message-ID: <5272218B.3040202@dfki.de> Hello, let me quickly comment on the DOM API topic. When developing an XML3D application, you are not limited to the raw API; you are free to use jQuery or any other tool you want that makes writing the application faster and more efficient. XML3D works in a way that changing the DOM by adding or manipulating nodes results in changes in the scene. How you build the DOM, query it or manipulate it is completely up to you. We make extensive use of jQuery to work on our XML3D scenes. We also had very successful experiments with building entire 3D scenes with the Backbone model-view JavaScript framework. The framework does all the job of querying the scene from the database and automatically creates DOM elements as views, which, in our case, were XML3D group nodes that automatically appeared at the right position. On top of the capabilities of the DOM API and the additional powers of sophisticated JavaScript libraries, XML3D introduces an API extension of its own to provide a convenient way to access the DOM elements as XML3D elements, for example retrieving a translation as XML3DVec3 or a rotation as XML3DRotation (for example, to retrieve the rotation part of an XML3D transformation, you can use jQuery to query the transformation node from the DOM and then access the rotation there: var r = $("#my_transformation").rotation).
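The model-view pattern described above - creating a model entity automatically produces a corresponding view node in the scene graph - can be sketched in plain JavaScript. In the real client the view would be an XML3D group element in the DOM; here a plain object stands in for the DOM so the sketch runs anywhere, and all names are illustrative:

```javascript
// Model layer: creating an entity notifies the view layer via a callback,
// roughly what Backbone-style bindings do for the XML3D scene graph.
function makeSceneModel(onEntityCreated) {
  const entities = [];
  return {
    createEntity(id) {
      const entity = { id };
      entities.push(entity);
      onEntityCreated(entity); // view layer reacts automatically
      return entity;
    },
    count() { return entities.length; },
  };
}

// Stand-in for the XML3D scene graph: maps entity ids to view nodes.
// In the real client this would create <group> DOM elements instead.
const viewNodes = new Map();
const model = makeSceneModel(entity => {
  viewNodes.set(entity.id, { tag: "group", id: entity.id });
});

model.createEntity("avatar");
console.log(viewNodes.has("avatar")); // the view node exists without any
                                      // explicit scene-graph call by the app
```

The design point is the one made in the message: application code operates only on the model API, and the scene-graph representation is maintained as a view of it.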
And if you want to do even more complex computations that you don't want to code entirely in JavaScript, you have Xflow on top, which helps to express these complex computations in the DOM as well. So in conclusion, XML3D is far more than just a renderer: it gives you plenty of options to conveniently operate on your scene. > If we think that XML3D (or the DOM and XML3D acts on those manipulations) > is already this perfect API I'm not sure what we are even trying to > accomplish here? If we are not building a nice to use 3D SDK whats the > target here? I totally agree that we still need to build this easily programmable 3D SDK. But XML3D makes it very simple to maintain the 3D scene in the DOM according to the scene state of the application. You may want to have a look at our example web client for our FiVES server (https://github.com/rryk/FiVES). Although I admit that the code needs some refactoring, the example of how entities are created shows this nicely: as soon as you create a new Entity object, the DOM representation of its scene graph and its transformations is created automatically and maintained as the view of the entity model. As a developer, you only need to operate on the client application's API. This could be an example of how an SDK could operate on the XML3D representation of the scene. ~ Torsten > On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < > Philipp.Slusallek at dfki.de> wrote: > >> Hi Jonne, all, >> >> I am not sure that applying the Tundra API in the Web context is really the >> right approach. One of the key differences is that we already have a >> central "scene" data structure and it already handles rendering and input >> (DOM events), and other aspects. Also an API oriented approach may not be >> the best option in this declarative context either (even though I >> understand that it feels more natural when coming from C++, I had the same >> issues).
>> [...]
>>> So please if you agree that this is a good process do raise hands >>> and let's start working on it! We can discuss this in the weekly too >>> if needed. >>> Cheers, >>> ~Toni >>> >>> _______________________________________________ >>> Fiware-miwi mailing list >>> Fiware-miwi at lists.fi-ware.eu >>> https://lists.fi-ware.eu/listinfo/fiware-miwi > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni at playsign.net Thu Oct 31 10:29:01 2013 From: toni at playsign.net (Toni Alatalo) Date: Thu, 31 Oct 2013 11:29:01 +0200 Subject: [Fiware-miwi] 3D UI Usage from other GEs / epics / apps In-Reply-To: <5272218B.3040202@dfki.de> References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> Message-ID: <8061F3FD-814F-4F18-A74C-D82908BAC7B1@playsign.net> Thanks! Just a short echo: On 31 Oct 2013, at 11:23, Torsten Spieldenner wrote: > So in conclusion, XML3D is far more than just a renderer, but gives you plenty of options to conveniently operate on your scene.
Agreed, and that is why I included in the list of existing scene api / entity system / programming api / dom integration / whatever implementations (both in these mails and the reqs doc). Even so that if we don?t end up using the renderer we could use those things (has been in our agenda to study the code you have for the things you, and Kristian earlier, described). ~Toni >> If we think that XML3D (or the DOM and XML3D acts on those manipulations) >> is already this perfect API I'm not sure what we are even trying to >> accomplish here? If we are not building a nice to use 3D SDK whats the >> target here? > I totally agree that we still need to build this easily programmable 3D SDK. But XML3D makes it very simple to maintain the 3D scene in the DOM according to the scene state of the application. > You may want to have a look at our example web client for our FiVES server (https://github.com/rryk/FiVES). Although I admit that the code needs some refactoring, the example of how entities are created shows this nicely : As soon as you create a new Entity object, the DOM representation of its scenegraph and its transformations are created automatically and maintained as View of the entity model. As developer, you only need to operate on the client application's API. > This could be an example, of how an SDK could operate on the XML3D representation of the scene. > > > ~ Torsten > >> On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < >> Philipp.Slusallek at dfki.de> wrote: >> >>> Hi Jonne, all, >>> >>> I am not sure that applying the Tudra API in the Web context is really the >>> right approach. One of the key differences is that we already have a >>> central "scene" data structure and it already handles rendering and input >>> (DOM events), and other aspects. Also an API oriented approach may not be >>> the best option in this declarative context either (even though I >>> understands that it feels more natural when coming from C++, I had the same >>> issues). 
>>> >>> So let me be a bit more specific: >>> >>> -- Network: So, yes we need a network module. It's not something that >>> "lives" in the DOM but rather watches it and sends updates to the server to >>> achieve sync. >>> >>> -- Renderer: Why do we need an object here? It's part of the DOM model. The >>> only aspect is that we may want to set renderer-specific parameters. We >>> currently do so through the DOM element, which seems like a good >>> approach. The issue to be discussed here is what would be the advantages >>> of a three.js based renderer, and implement it if really needed. >>> >>> -- Scene: This can be done in the DOM nicely, and with WebComponents it's >>> even more elegant. The scene objects are simply part of the same DOM but >>> only some of them get rendered. I am not even sure what we need here in >>> addition to the DOM and suitable mappings for the components. >>> >>> -- Asset: As you say, this is already built into the XML3D DOM. I see it a >>> bit like the network system in that it watches missing resources in the DOM >>> (plus attributes on priority and such?) and implements a sort of scheduler that >>> executes requests in some priority order. A version that only loads missing >>> resources is already available; one that goes even further and deletes >>> unneeded resources could probably be ported from your resource manager. >>> >>> -- UI: That is why we are building on top of HTML, which is a pretty good >>> UI layer in many respects. We have the 2D-UI GE to look into missing >>> functionality. >>> >>> -- Input: This is also already built in, as events traverse the >>> DOM. It is widely used in all web-based UIs and has proven quite useful >>> there. Here we can nicely combine it with the 3D scene model, where events >>> are not only delivered to the 3D graphics elements but can be handled by >>> the elements or components even before that. >>> >>> But maybe I am misunderstanding you here?
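To make the asset point above concrete, here is a rough JavaScript sketch of such a priority-ordered request scheduler. All names in it are illustrative assumptions, not an actual XML3D, FiVES, or Tundra API:

```javascript
// Minimal sketch of a priority-ordered asset scheduler as described above:
// it is handed missing resources spotted in the DOM and drains them in
// priority order. Class and method names are hypothetical.
class AssetScheduler {
  constructor() {
    this.queue = [];
  }

  // Register a missing resource; lower number = higher priority
  // (e.g. read from a hypothetical "priority" attribute on the element).
  request(url, priority = 10) {
    this.queue.push({ url, priority });
    // Stable sort keeps insertion order for equal priorities.
    this.queue.sort((a, b) => a.priority - b.priority);
  }

  // Hand out the next URL to load, or null when the queue is empty.
  // A real version would fetch() it, and a further step could also
  // evict resources no longer referenced anywhere in the DOM.
  next() {
    const item = this.queue.shift();
    return item ? item.url : null;
  }
}
```

In this picture the scheduler works like the network module: it only watches the DOM for missing resources and never "lives" in it.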
>>> >>> >>> Best, >>> >>> Philipp >>> >>> >>> On 30.10.2013 14:31, Jonne Nauha wrote: >>> >>>> var client = >>>> { >>>> network : Object, // Network sync, connect, disconnect etc. >>>> functionality. >>>> // Implemented by scene sync GE (Ludocraft). >>>> >>>> renderer : Object, // API for 3D rendering engine access, creating >>>> scene nodes, updating their transforms, raycasting etc. >>>> // Implemented by 3D UI (Playsign). >>>> >>>> scene : Object, // API for accessing the >>>> Entity-Component-Attribute model. >>>> // Implemented by ??? >>>> >>>> asset : Object, // Not strictly necessary for xml3d as it does >>>> asset requests for us, but for three.js this is pretty much needed. >>>> // Implemented by ??? >>>> >>>> ui : Object, // API to add/remove widgets correctly on top >>>> of the 3D rendering canvas element, window resize events etc. >>>> // Implemented by 2D/Input GE (Adminotech). >>>> >>>> input : Object // API to hook to input events occurring on top >>>> of the 3D scene. >>>> // Implemented by 2D/Input GE (Adminotech). >>>> }; >>>> >>>> >>>> Best regards, >>>> Jonne Nauha >>>> Meshmoon developer at Adminotech Ltd. >>>> www.meshmoon.com >>>> >>>> >>>> >>>> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo >>> > wrote: >>>> >>>> Hi again, >>>> new angle here: calling devs *outside* the 3D UI GE: POIs, >>>> real-virtual interaction, interface designer, virtual characters, 3d >>>> capture, synchronization etc. >>>> I think we need to proceed rapidly with integration now and propose >>>> that one next step towards that is to analyze the interfaces between >>>> 3D UI and other GEs.
This is because it seems to be a central part >>>> with which many others interface: that is evident in the old >>>> 'arch.png' where we analyzed GE/Epic interdependencies; it is embedded >>>> in section 2 in the Winterthur arch discussion notes, which hopefully >>>> works for everyone to see, >>>> https://docs.google.com/document/d/1Sr4rg44yGxK8jj6yBsayCwfitZTq5Cdyyb_xC25vhhE/edit >>>> I propose a process where we go through the usage patterns case by >>>> case. For example so that me & Erno visit the other devs to discuss >>>> it. I think a good goal for those sessions is to define and plan the >>>> implementation of first tests / minimal use cases where the other >>>> GEs are used together with 3D UI to show something. I'd like this >>>> first pass to happen quickly so that within 2 weeks from the >>>> planning the first case is implemented. So if we get to have the >>>> sessions within 2 weeks from now, in a month we'd have demos with >>>> all parts. >>>> Let's organize this so that those who think this applies to their >>>> work contact me with private email (to not spam the list), we meet >>>> and collect the notes to the wiki and inform this list about that. >>>> One question of particular interest to me here is: can the users of >>>> 3D UI do what they need well on the entity system level (for example >>>> just add and configure mesh components), or do they need deeper >>>> access to the 3d scene and rendering (spatial queries, somehow >>>> affect the rendering pipeline etc). With Tundra we have the >>>> Scene API and the (Ogre)World API(s) to support the latter, and also >>>> access to the renderer directly. OTOH the entity system level is >>>> renderer independent. >>>> Synchronization is a special case which requires good two-way >>>> integration with 3D UI. Luckily it's something that we and >>>> especially Lasse himself know already from how it works in Tundra >>>> (and in WebTundras).
Definitely to be discussed and planned now too >>>> of course. >>>> So please if you agree that this is a good process do raise hands >>>> and let's start working on it! We can discuss this in the weekly too >>>> if needed. >>>> Cheers, >>>> ~Toni >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>> -- >>> >>> --------------------------------------------------------------------------- >>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>> >>> Geschäftsführung: >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>> Dr. Walter Olthoff >>> Vorsitzender des Aufsichtsrats: >>> Prof. Dr. h.c. Hans A. Aukes >>> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>> --------------------------------------------------------------------------- >>> >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed...
URL: From toni at playsign.net Thu Oct 31 10:42:40 2013 From: toni at playsign.net (Toni Alatalo) Date: Thu, 31 Oct 2013 11:42:40 +0200 Subject: [Fiware-miwi] DOM as API vs as UI (Re: 3D UI Usage from other GEs / epics / apps) In-Reply-To: <5272218B.3040202@dfki.de> References: <52717BA2.807@dfki.de> <5272218B.3040202@dfki.de> Message-ID: <9B2CC7C7-0418-45D8-B298-97676672BA77@playsign.net> On 31 Oct 2013, at 11:23, Torsten Spieldenner wrote: > On top of the capabilities of the DOM API and the additional powers of sophisticated JavaScript libraries, XML3D introduces an API extension of its own to provide a convenient way to access the DOM elements as XML3D elements, for example retrieving a translation as XML3DVec3 or a rotation as XML3DRotation (for example, to retrieve the rotation part of an XML3D transformation, you can use jQuery to query the transformation node from the DOM and access the rotation there: var r = $("#my_transformation").rotation). What confuses me here is: earlier it was concluded that "the DOM is the UI", which I understood as meaning how it works for people to a) author apps - e.g. declare that the oulu3d scene and reX avatar & chat apps are used in my html, along with this nice christmas themed thing I just created (like txml is used in reX now) b) see and manipulate the state in the browser view-source & developer / debugger DOM views (like the Scene Structure editor in Tundra) c) (something else that escaped me now) Anyhow, the point being that intensive manipulations such as creating and manipulating tens of thousands of entities are not done via it. This was the response to our initial "massive dom manipulation" perf test. Manipulating transformations is a typical example where that happens - I know that declarative ways can often be a good way to deal with e.g. moving objects, like the PhysicsMotor in Tundra and I think what Xflow (targets to) cover(s) too, but not always nor for everything, so I think the point is still valid.
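The split being asked about here - DOM as the authoring and debugging view, some flatter structure for heavy per-frame updates - could be sketched roughly like this. All names are hypothetical, not an actual XML3D or WebTundra API:

```javascript
// Hypothetical sketch: keep hot per-frame transform data in a flat typed
// array, and serialize it into a declarative DOM-style attribute string
// only on demand (e.g. for the DOM inspector or a scene editor).
const MAX_ENTITIES = 10000;
const positions = new Float32Array(3 * MAX_ENTITIES); // x, y, z per entity

// Hot path: plain array writes, no DOM involved.
function setPosition(id, x, y, z) {
  const i = 3 * id;
  positions[i] = x;
  positions[i + 1] = y;
  positions[i + 2] = z;
}

// Cold path: produce the attribute string a transform element would carry,
// only for the entity currently being inspected or synced.
function toTranslationAttribute(id) {
  const i = 3 * id;
  return `${positions[i]} ${positions[i + 1]} ${positions[i + 2]}`;
}

setPosition(42, 1, 2, 3);
```

Whether XML3D already batches such updates internally (e.g. via Xflow) is exactly the open question of this mail, so this split is only one possible answer.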
So do you use a different API for heavy tasks and the DOM for other things or how does it go? ~Toni >> If we think that XML3D (or the DOM and XML3D acts on those manipulations) >> is already this perfect API I'm not sure what we are even trying to >> accomplish here? If we are not building a nice-to-use 3D SDK what's the >> target here? > I totally agree that we still need to build this easily programmable 3D SDK. But XML3D makes it very simple to maintain the 3D scene in the DOM according to the scene state of the application. > You may want to have a look at our example web client for our FiVES server (https://github.com/rryk/FiVES). Although I admit that the code needs some refactoring, the example of how entities are created shows this nicely: As soon as you create a new Entity object, the DOM representation of its scenegraph and its transformations are created automatically and maintained as a View of the entity model. As a developer, you only need to operate on the client application's API. > This could be an example of how an SDK could operate on the XML3D representation of the scene. > > > ~ Torsten > >> On Wed, Oct 30, 2013 at 11:35 PM, Philipp Slusallek < >> Philipp.Slusallek at dfki.de> wrote: >> >>> Hi Jonne, all, >>> >>> I am not sure that applying the Tundra API in the Web context is really the >>> right approach. One of the key differences is that we already have a >>> central "scene" data structure and it already handles rendering and input >>> (DOM events), and other aspects. Also an API oriented approach may not be >>> the best option in this declarative context either (even though I >>> understand that it feels more natural when coming from C++, I had the same >>> issues). >>> >>> So let me be a bit more specific: >>> >>> -- Network: So, yes we need a network module. It's not something that >>> "lives" in the DOM but rather watches it and sends updates to the server to >>> achieve sync. >>> >>> -- Renderer: Why do we need an object here?
It's part of the DOM model. The >>> only aspect is that we may want to set renderer-specific parameters. We >>> currently do so through the DOM element, which seems like a good >>> approach. The issue to be discussed here is what would be the advantages >>> of a three.js based renderer, and implement it if really needed. >>> >>> -- Scene: This can be done in the DOM nicely, and with WebComponents it's >>> even more elegant. The scene objects are simply part of the same DOM but >>> only some of them get rendered. I am not even sure what we need here in >>> addition to the DOM and suitable mappings for the components. >>> >>> -- Asset: As you say, this is already built into the XML3D DOM. I see it a >>> bit like the network system in that it watches missing resources in the DOM >>> (plus attributes on priority and such?) and implements a sort of scheduler that >>> executes requests in some priority order. A version that only loads missing >>> resources is already available; one that goes even further and deletes >>> unneeded resources could probably be ported from your resource manager. >>> >>> -- UI: That is why we are building on top of HTML, which is a pretty good >>> UI layer in many respects. We have the 2D-UI GE to look into missing >>> functionality. >>> >>> -- Input: This is also already built in, as events traverse the >>> DOM. It is widely used in all web-based UIs and has proven quite useful >>> there. Here we can nicely combine it with the 3D scene model, where events >>> are not only delivered to the 3D graphics elements but can be handled by >>> the elements or components even before that. >>> >>> But maybe I am misunderstanding you here? >>> >>> >>> Best, >>> >>> Philipp >>> >>> >>> On 30.10.2013 14:31, Jonne Nauha wrote: >>> >>>> var client = >>>> { >>>> network : Object, // Network sync, connect, disconnect etc. >>>> functionality. >>>> // Implemented by scene sync GE (Ludocraft).
>>>> >>>> renderer : Object, // API for 3D rendering engine access, creating >>>> scene nodes, updating their transforms, raycasting etc. >>>> // Implemented by 3D UI (Playsign). >>>> >>>> scene : Object, // API for accessing the >>>> Entity-Component-Attribute model. >>>> // Implemented by ??? >>>> >>>> asset : Object, // Not strictly necessary for xml3d as it does >>>> asset requests for us, but for three.js this is pretty much needed. >>>> // Implemented by ??? >>>> >>>> ui : Object, // API to add/remove widgets correctly on top >>>> of the 3D rendering canvas element, window resize events etc. >>>> // Implemented by 2D/Input GE (Adminotech). >>>> >>>> input : Object // API to hook to input events occurring on top >>>> of the 3D scene. >>>> // Implemented by 2D/Input GE (Adminotech). >>>> }; >>>> >>>> >>>> Best regards, >>>> Jonne Nauha >>>> Meshmoon developer at Adminotech Ltd. >>>> www.meshmoon.com >>>> >>>> >>>> >>>> On Wed, Oct 30, 2013 at 9:51 AM, Toni Alatalo >>> > wrote: >>>> >>>> Hi again, >>>> new angle here: calling devs *outside* the 3D UI GE: POIs, >>>> real-virtual interaction, interface designer, virtual characters, 3d >>>> capture, synchronization etc. >>>> I think we need to proceed rapidly with integration now and propose >>>> that one next step towards that is to analyze the interfaces between >>>> 3D UI and other GEs. This is because it seems to be a central part >>>> with which many others interface: that is evident in the old >>>> 'arch.png' where we analyzed GE/Epic interdependencies; it is embedded >>>> in section 2 in the Winterthur arch discussion notes, which hopefully >>>> works for everyone to see, >>>> https://docs.google.com/document/d/1Sr4rg44yGxK8jj6yBsayCwfitZTq5Cdyyb_xC25vhhE/edit >>>> I propose a process where we go through the usage patterns case by >>>> case. For example so that me & Erno visit the other devs to discuss >>>> it.
I think a good goal for those sessions is to define and plan the >>>> implementation of first tests / minimal use cases where the other >>>> GEs are used together with 3D UI to show something. I'd like this >>>> first pass to happen quickly so that within 2 weeks from the >>>> planning the first case is implemented. So if we get to have the >>>> sessions within 2 weeks from now, in a month we'd have demos with >>>> all parts. >>>> Let's organize this so that those who think this applies to their >>>> work contact me with private email (to not spam the list), we meet >>>> and collect the notes to the wiki and inform this list about that. >>>> One question of particular interest to me here is: can the users of >>>> 3D UI do what they need well on the entity system level (for example >>>> just add and configure mesh components), or do they need deeper >>>> access to the 3d scene and rendering (spatial queries, somehow >>>> affect the rendering pipeline etc). With Tundra we have the >>>> Scene API and the (Ogre)World API(s) to support the latter, and also >>>> access to the renderer directly. OTOH the entity system level is >>>> renderer independent. >>>> Synchronization is a special case which requires good two-way >>>> integration with 3D UI. Luckily it's something that we and >>>> especially Lasse himself know already from how it works in Tundra >>>> (and in WebTundras). Definitely to be discussed and planned now too >>>> of course. >>>> So please if you agree that this is a good process do raise hands >>>> and let's start working on it! We can discuss this in the weekly too >>>> if needed.
>>>> Cheers, >>>> ~Toni >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> Fiware-miwi mailing list >>>> Fiware-miwi at lists.fi-ware.eu >>>> https://lists.fi-ware.eu/listinfo/fiware-miwi >>>> >>>> >>> -- >>> >>> --------------------------------------------------------------------------- >>> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH >>> Trippstadter Strasse 122, D-67663 Kaiserslautern >>> >>> Geschäftsführung: >>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) >>> Dr. Walter Olthoff >>> Vorsitzender des Aufsichtsrats: >>> Prof. Dr. h.c. Hans A. Aukes >>> >>> Sitz der Gesellschaft: Kaiserslautern (HRB 2313) >>> USt-Id.Nr.: DE 148646973, Steuernummer: 19/673/0060/3 >>> --------------------------------------------------------------------------- >>> >> >> >> _______________________________________________ >> Fiware-miwi mailing list >> Fiware-miwi at lists.fi-ware.eu >> https://lists.fi-ware.eu/listinfo/fiware-miwi > > _______________________________________________ > Fiware-miwi mailing list > Fiware-miwi at lists.fi-ware.eu > https://lists.fi-ware.eu/listinfo/fiware-miwi -------------- next part -------------- An HTML attachment was scrubbed... URL: