[Backlogmanager] [FIWARE-JIRA] (HELP-19691) [fiware-stackoverflow] FIWARE ORION MONGODB DOCKER

Jason Fox (JIRA) jira-help-desk at jira.fiware.org
Wed Jun 29 14:26:00 CEST 2022


     [ https://jira.fiware.org/browse/HELP-19691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Fox updated HELP-19691:
-----------------------------
    Status: In Progress  (was: Open)

> [fiware-stackoverflow] FIWARE ORION MONGODB DOCKER
> --------------------------------------------------
>
>                 Key: HELP-19691
>                 URL: https://jira.fiware.org/browse/HELP-19691
>             Project: Help-Desk
>          Issue Type: Monitor
>          Components: FIWARE-TECH-HELP
>            Reporter: Backlog Manager
>            Assignee: Jason Fox
>              Labels: fiware, mqtt, python
>
> Created question in FIWARE Q/A platform on 10-06-2022 at 14:06
> {color: red}Please, ANSWER this question AT{color} https://stackoverflow.com/questions/72575722/fiware-orion-mongodb-docker
> +Question:+
> FIWARE ORION MONGODB DOCKER
> +Description:+
> I want to set up my Raspberry Pi 3 as an MQTT publisher (JSON data); here is the code:
> import json
> import random
> import time
> import paho.mqtt.client as mqtt
> THE_BROKER = ""
> THE_TOPIC = ""
> CLIENT_ID = ""
> # The callback for when the client receives a CONNACK response from the server.
> def on_connect(client, userdata, flags, rc):
>     print("Connected to ", client._host, "port: ", client._port)
>     print("Flags: ", flags, "returned code: ", rc)
> # The callback for when a message is published.
> def on_publish(client, userdata, mid):
>     print("sipub: msg published (mid={})".format(mid))
> client = mqtt.Client(client_id=CLIENT_ID, 
>                      clean_session=True, 
>                      userdata=None, 
>                      protocol=mqtt.MQTTv311, 
>                      transport="tcp")
> client.on_connect = on_connect
> client.on_publish = on_publish
> client.username_pw_set("", password=None)
> client.connect(THE_BROKER, port=1883, keepalive=60)
> client.loop_start()
> while True:
>   s = {}
>   s['the_variable'] = random.randint(0,100)
>   j = json.dumps(s)
>   
>   msg_to_be_sent = j
>   print(j)
>   client.publish(THE_TOPIC, 
>                    payload=msg_to_be_sent, 
>                    qos=0, 
>                    retain=False)
>   time.sleep(15)
> client.loop_stop()
> And I want to use Orion and MongoDB, which are running in Docker, as the subscriber; here is the code:
> import json
> import time
> import sys
> import struct
> import paho.mqtt.client as mqtt
> THE_BROKER = ""
> THE_TOPIC = ""
> CLIENT_ID = ""
> # The callback for when the client receives a CONNACK response from the server.
> def on_connect(client, userdata, flags, rc):
>     print("Connected to ", client._host, "port: ", client._port)
>     print("Flags: ", flags, "returned code: ", rc)
>     client.subscribe(THE_TOPIC, qos=0)
> # The callback for when a message is received from the server.
> def on_message(client, userdata, msg):
>     print("sisub: msg received with topic: {} and payload: {}".format(msg.topic, str(msg.payload)))
> client = mqtt.Client(client_id=CLIENT_ID, 
>                      clean_session=True, 
>                      userdata=None, 
>                      protocol=mqtt.MQTTv311, 
>                      transport="tcp")
> client.on_connect = on_connect
> client.on_message = on_message
> client.username_pw_set(None, password=None)
> client.connect(THE_BROKER, port=1883, keepalive=60)
> # Blocking call that processes network traffic, dispatches callbacks and
> # handles reconnecting.
> client.loop_forever()
> First question: how can I add an MQTT subscriber to Orion?
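> (On the first question: as far as I understand the FIWARE docs, Orion does not subscribe to MQTT directly; the usual pattern is to run an IoT Agent — e.g. the IoT Agent for JSON with MQTT transport — which subscribes to the broker and pushes the measurements into Orion/MongoDB as NGSI entities. Below is a rough, untested sketch of the provisioning calls; the hostnames, ports, API key and device id are placeholders/assumptions, not values from this setup.)
>
> import requests
>
> IOTA_URL = "http://iot-agent:4041"        # assumed IoT Agent north port
> HEADERS = {"fiware-service": "openiot",   # example tenant
>            "fiware-servicepath": "/"}
>
> # Provision a service group: measurements arriving with this API key
> # are forwarded to Orion (backed by MongoDB) as NGSI entities.
> requests.post(IOTA_URL + "/iot/services", headers=HEADERS, json={
>     "services": [{
>         "apikey": "4jggokgpepnvsb2uv4s40d59ov",   # example API key
>         "cbroker": "http://orion:1026",           # assumed Orion endpoint
>         "entity_type": "Device",
>         "resource": "/iot/json"
>     }]
> })
>
> # Provision the Raspberry Pi as an MQTT device with one attribute.
> requests.post(IOTA_URL + "/iot/devices", headers=HEADERS, json={
>     "devices": [{
>         "device_id": "pi001",                       # placeholder device id
>         "entity_name": "urn:ngsi-ld:Device:pi001",
>         "entity_type": "Device",
>         "transport": "MQTT",
>         "attributes": [{"object_id": "the_variable",
>                         "name": "theVariable",
>                         "type": "Number"}]
>     }]
> })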
> Second question: yes, I need to set up a broker (Mosquitto will be used); how can I generate JSON data as a publisher that sends data in JSON format every 30 seconds? I need your help!
> Thank you very much for your help!!!
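> (On the second question: the publisher above already sends JSON; changing time.sleep(15) to time.sleep(30) gives the 30-second interval. If the data should reach Orion via an IoT Agent as sketched above, the device would typically publish to a topic following the agent's convention, roughly "/<apikey>/<device_id>/attrs", though the exact topic depends on the IoT Agent version and configuration. A minimal, untested sketch with placeholder broker and topic:)
>
> import json
> import random
> import time
>
> import paho.mqtt.client as mqtt
>
> # Assumed broker hostname and IoT-Agent-style topic; both are placeholders.
> THE_BROKER = "mosquitto"
> THE_TOPIC = "/4jggokgpepnvsb2uv4s40d59ov/pi001/attrs"
>
> client = mqtt.Client(client_id="pi-publisher")
> client.connect(THE_BROKER, port=1883, keepalive=60)
> client.loop_start()
>
> while True:
>     payload = json.dumps({"the_variable": random.randint(0, 100)})
>     client.publish(THE_TOPIC, payload=payload, qos=0, retain=False)
>     time.sleep(30)   # publish every 30 seconds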



--
This message was sent by Atlassian JIRA
(v6.4.1#64016)

