[Fiware-miwi] SceneAPI features: original intent of the EPIC vs. current plans?

Toni Alatalo toni at playsign.net
Thu Jan 2 15:08:09 CET 2014


On 02 Jan 2014, at 15:37, Philipp Slusallek <Philipp.Slusallek at dfki.de> wrote:

Hi again and thanks for the remarks, quick reply to one point again (before traveling with the kids in the evening, am away till Sunday then):

> There are clearly some similarities to the synchronization protocol. But it assumes that a server is in full control over the client (or at least parts of it) and knows what to do. With the SceneAPI it may also be a matter of finding out things about the target scene first before making changes. An AI service could for example find the location to go to before starting to move the character. This would not be possible with pure Synchronization.

Using the Sync GE to connect separate (e.g. simulation or AI) servers to the scene, that would work. With the current WebTundra you could do this in a non-graphical AI instance, for example one running as a node.js app:

1. use the library to connect to the scene server
2. use the WebTundra Scene API (in-memory JS) to examine the scene, for example to do pathfinding — the whole scene data is replicated to this AI client so the object positions etc. all are there automatically (just the ec-data though, not assets, so it’s not that heavy)
3. move a character, either by modifying the position directly in the client (again using the scene api there) or by sending commands to the server
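The three steps above can be sketched as a small node.js-style script. The class and method names here (SceneReplica, applyUpdate, entitiesWith) are illustrative placeholders, not the actual WebTundra API — the point is only the pattern: an AI client works against an in-memory replica of the scene's entity-component data, queries it locally, and modifies it (with the sync layer then replicating changes back to the server).

```javascript
// Hypothetical sketch of a headless AI client's view of a replicated scene.
// Names are illustrative, not the real WebTundra Scene API.

class Entity {
  constructor(id) {
    this.id = id;
    this.components = {}; // e.g. { Placeable: { position: { x, y, z } } }
  }
}

class SceneReplica {
  constructor() {
    this.entities = new Map();
  }
  // Called by the sync layer when the server sends entity/component updates.
  applyUpdate(id, componentName, data) {
    let ent = this.entities.get(id);
    if (!ent) {
      ent = new Entity(id);
      this.entities.set(id, ent);
    }
    ent.components[componentName] = data;
  }
  // Scene API used by AI code: query entities that have a given component.
  entitiesWith(componentName) {
    return [...this.entities.values()].filter(e => componentName in e.components);
  }
}

// Simulated server updates arriving over the connection (step 1):
const scene = new SceneReplica();
scene.applyUpdate(1, "Placeable", { position: { x: 0, y: 0, z: 0 } });
scene.applyUpdate(2, "Placeable", { position: { x: 10, y: 0, z: 5 } });

// Step 2: examine the scene in memory, e.g. find the entity nearest a point.
function nearest(scene, p) {
  let best = null, bestDist = Infinity;
  for (const e of scene.entitiesWith("Placeable")) {
    const q = e.components.Placeable.position;
    const d = Math.hypot(q.x - p.x, q.y - p.y, q.z - p.z);
    if (d < bestDist) { bestDist = d; best = e; }
  }
  return best;
}

// Step 3: modify the scene locally; the sync layer would replicate this back.
const target = nearest(scene, { x: 9, y: 0, z: 4 });
target.components.Placeable.position.x += 1; // move it one unit along x
```

Nothing here needs graphics or assets — only the ec-data is replicated, which is exactly why a node.js AI instance stays lightweight.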

I think what I’m saying in your terms is that the scene model and the client code in general in WebTundra is not pure synchronization (which is just the network messaging part) but, well, a client with scene replication.

As mentioned before, with the Second Life protocol used with Opensimulator there’s LibOMV (Open Metaverse) which gives the same for e.g. AI bots — a headless client which connects to a server, gets the scene state automatically and provides it as an easy-to-use in-memory API that e.g. AI code can use to query and modify the scene. AFAIK people have been happy and productive with that.

> One obvious issue that Toni already talked about is the direction. Contacting a browser instance is not possible without a server as we know from Server-Based Rendering. But then we could design an interface that allows for querying and changing a scene.

Again the Sync GE does provide that, both in form of a WebSocket protocol and a JS client lib made on top of that — you can query the scene and modify it.
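As a rough illustration of what such a WebSocket query/modify interface can carry on the wire, here is a toy JSON message scheme. The message shapes and field names below are my own invention for the sketch, not the actual Sync GE protocol or its JS client lib.

```javascript
// Hypothetical sketch of query/modify scene traffic over a WebSocket-style
// channel. Message format is illustrative, not the real Sync GE wire format.

// Client side: frame scene operations as JSON messages.
function queryMessage(entityId, component) {
  return JSON.stringify({ op: "query", entityId, component });
}
function modifyMessage(entityId, component, attrs) {
  return JSON.stringify({ op: "modify", entityId, component, attrs });
}

// Server side: a toy handler applying these messages to a scene object
// (a plain map of entityId -> components here, for brevity).
function handleMessage(scene, raw) {
  const msg = JSON.parse(raw);
  const comps = scene[msg.entityId] || (scene[msg.entityId] = {});
  if (msg.op === "modify") {
    // Merge the incoming attributes into the component's current state.
    comps[msg.component] = { ...(comps[msg.component] || {}), ...msg.attrs };
    return JSON.stringify({ ok: true });
  }
  if (msg.op === "query") {
    return JSON.stringify({ ok: true, attrs: comps[msg.component] || null });
  }
  return JSON.stringify({ ok: false, error: "unknown op" });
}

// A client modifies an entity, then queries it back:
const scene = {};
handleMessage(scene, modifyMessage(42, "Placeable", { x: 1, y: 2, z: 3 }));
const reply = JSON.parse(handleMessage(scene, queryMessage(42, "Placeable")));
// reply.attrs is { x: 1, y: 2, z: 3 }
```

In a real deployment the `handleMessage` side sits behind the Sync GE's WebSocket endpoint and the client helpers are wrapped by its JS client lib; the sketch only shows the request/response shape of query-then-modify traffic.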

But I think it is a good idea for us (Lasse perhaps, though I’m probably too curious to skip it myself :) to check that article to understand more of what you are after.

Cheers,
~Toni

> 
> There already is such a suggestion by Behr et al. (http://dl.acm.org/citation.cfm?id=1836057&CFID=394541370&CFTOKEN=82263824) that may be a good place to start. It is based on HTTP/REST, though, which would not work for a connection towards the browser, so we should use WebRTC instead.
> 
> 
> Best,
> 
> 	Philipp
> 
> Am 21.11.2013 22:24, schrieb Toni Alatalo:
>> On 21 Nov 2013, at 10:54, Lasse Öörni <lasse.oorni at ludocraft.com> wrote:
>>> If there is a good concrete plan for how it should be done instead it's not
>>> at all too late to change (as the implementation has not begun), and if
>>> it's administratively OK; for example our architecture pictures now
>>> include the REST scene API described.
>> 
>> I talked today with a guy who is working on the virtual world / visualization front in the university project with the traffic sensors in the city.
>> 
>> We agreed preliminarily that we could use their data and system as a use case for this SceneAPI biz on the fi-ware side — if you and others here find it’s a good idea.
>> 
>> Their data currently updates only once per hour, though, so HTTP would work :)
>> 
>> But he had already proposed as a next step a visualisation where traffic is simulated / visualized as actual individual cars. We could have that simulation service use the scene api / sync biz to control the cars, so we’d get a streaming nature for the data and much harder requirements for the usage (in the spirit of the EPIC).
>> 
>> I still have to confirm with the prof who's leading that project that this all would be ok. We could do it so that the actual implementation of the visualization and even the integration comes from the uni, and fi-ware (Ludocraft) only provides the API. I can use some of my university time for this, as the integration of the city model and the traffic data is good to get there.
>> 
>> This is not a must and I don’t mean to overcomplicate things, but I just figured that a real use case would help to make the *concrete plan* that you called for above.
>> 
>> The experience with the quick and simple POI & 3DUI integration (completed yesterday) was great; I’ll post about it tomorrow hopefully (the POI guys checked the demo today and ok’d it as this first minimal step). So I hope more integrations and usage of the GEs take us well forward.
>> 
>>> Lasse Öörni
>> 
>> Cheers,
>> ~Toni
>> 
>> _______________________________________________
>> Fiware-miwi mailing list
>> Fiware-miwi at lists.fi-ware.eu
>> https://lists.fi-ware.eu/listinfo/fiware-miwi
>> 
> 
> 
> -- 
> 
> -------------------------------------------------------------------------
> Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH
> Trippstadter Strasse 122, D-67663 Kaiserslautern
> 
> Geschäftsführung:
>  Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>  Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats:
>  Prof. Dr. h.c. Hans A. Aukes
> 
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> USt-Id.Nr.: DE 148646973, Steuernummer:  19/673/0060/3
> ---------------------------------------------------------------------------

