[Backlogmanager] [FIWARE-JIRA] (HELP-9256) [fiware-stackoverflow] Storing SQL in Cosmos

Fernando Lopez (JIRA) jira-help-desk at jira.fiware.org
Sat May 27 11:44:00 CEST 2017


     [ https://jira.fiware.org/browse/HELP-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fernando Lopez resolved HELP-9256.
----------------------------------
    Resolution: Done

> [fiware-stackoverflow] Storing SQL in Cosmos
> --------------------------------------------
>
>                 Key: HELP-9256
>                 URL: https://jira.fiware.org/browse/HELP-9256
>             Project: Help-Desk
>          Issue Type: Monitor
>          Components: FIWARE-TECH-HELP
>            Reporter: Backlog Manager
>            Assignee: Francisco Romero
>              Labels: fiware, fiware-cygnus
>
> Created question in FIWARE Q/A platform on 22-10-2014 at 19:10
> {color: red}Please, ANSWER this question AT{color} https://stackoverflow.com/questions/26513757/storing-sql-in-cosmos
> +Question:+
> Storing SQL in Cosmos
> +Description:+
> I need to persist data in Cosmos in SQL tables instead of HDFS files.
> I've deployed a VM in the Cloud section of FI-Lab, where I've installed Orion 0.14.0 and Cygnus 0.3. I've configured Cygnus to store data in both HDFS and SQL. The problem is that persistence to HDFS files works fine, but persistence to SQL tables does not, even though it worked for me in the past. That's why I'm confused.
> Since HDFS persistence works, I guess the problem is in the cygnus.conf file, so I show it below:
> # APACHE_FLUME_HOME/conf/cygnus.conf
> # The next three fields set the sources, sinks and channels used by Cygnus. You could use different names than the
> # ones suggested below, but in that case make sure you keep coherence in properties names along the configuration file.
> # Regarding sinks, you can use multiple ones at the same time; the only requirement is to provide a channel for each
> # one of them (this configuration uses 2 sinks at the same time).
> cygnusagent.sources = http-source
> cygnusagent.sinks = hdfs-sink mysql-sink
> cygnusagent.channels = hdfs-channel mysql-channel
> #=============================================
> # source configuration
> # channel name where to write the notification events
> cygnusagent.sources.http-source.channels = hdfs-channel mysql-channel
> # source class, must not be changed
> cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
> # listening port the Flume source will use for receiving incoming notifications
> cygnusagent.sources.http-source.port = 5050
> # Flume handler that will parse the notifications, must not be changed
> cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
> # URL target
> cygnusagent.sources.http-source.handler.notification_target = /notify
> # Default organization (organization semantics depend on the persistence sink)
> cygnusagent.sources.http-source.handler.default_organization = org42
> # ============================================
> # OrionHDFSSink configuration
> # channel name from where to read notification events
> cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
> # sink class, must not be changed
> cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
> # The FQDN/IP address of the Cosmos deployment where the notification events will be persisted
> cygnusagent.sinks.hdfs-sink.cosmos_host = 130.206.80.46
> # port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for infinity
> cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
> # default username allowed to write in HDFS
> cygnusagent.sinks.hdfs-sink.cosmos_default_username = quiquehz
> # default password for the default username
> cygnusagent.sinks.hdfs-sink.cosmos_default_password = 'password'
> # HDFS backend type (webhdfs, httpfs or infinity)
> cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
> # how the attributes are stored, either per row or per column (row, column)
> cygnusagent.sinks.hdfs-sink.attr_persistence = column
> # prefix for the database and table names, empty if no prefix is desired
> cygnusagent.sinks.hdfs-sink.naming_prefix =
> # Hive port for Hive external table provisioning
> cygnusagent.sinks.hdfs-sink.hive_port = 10000
> # ============================================
> # OrionMySQLSink configuration
> # channel name from where to read notification events
> cygnusagent.sinks.mysql-sink.channel = mysql-channel
> # sink class, must not be changed
> cygnusagent.sinks.mysql-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionMySQLSink
> # the FQDN/IP address where the MySQL server runs
> cygnusagent.sinks.mysql-sink.mysql_host = 130.206.80.46
> # the port where the MySQL server listens for incoming connections
> cygnusagent.sinks.mysql-sink.mysql_port = 3306
> # a valid user in the MySQL server
> cygnusagent.sinks.mysql-sink.mysql_username = quiquehz
> # password for the user above
> cygnusagent.sinks.mysql-sink.mysql_password = 'password'
> # how the attributes are stored, either per row or per column (row, column)
> cygnusagent.sinks.mysql-sink.attr_persistence = column
> # prefix for the database and table names, empty if no prefix is desired
> cygnusagent.sinks.mysql-sink.naming_prefix =
> #=============================================
> # hdfs-channel configuration
> # channel type (must not be changed)
> cygnusagent.channels.hdfs-channel.type = memory
> # capacity of the channel
> cygnusagent.channels.hdfs-channel.capacity = 1000
> # maximum number of events the channel handles per transaction
> cygnusagent.channels.hdfs-channel.transactionCapacity = 100
> #=============================================
> # mysql-channel configuration
> # channel type (must not be changed)
> cygnusagent.channels.mysql-channel.type = memory
> # capacity of the channel
> cygnusagent.channels.mysql-channel.capacity = 1000
> # maximum number of events the channel handles per transaction
> cygnusagent.channels.mysql-channel.transactionCapacity = 100
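Editor's note: not part of the original question, but a minimal sketch of how the http-source end of the configuration above could be exercised in isolation. It builds an NGSI v1 notifyContextRequest body, which is the kind of payload the OrionRestHandler listens for on port 5050 at /notify. The entity, type, and attribute values are hypothetical, and the exact headers the Cygnus 0.3 handler requires may differ from what is shown in the comment.

```python
import json

def build_notification(entity_id, entity_type, attrs):
    """Build a minimal NGSI v1 notifyContextRequest body.

    attrs is a list of (name, type, value) tuples. All identifiers
    here are hypothetical examples, not values from the question.
    """
    return {
        "subscriptionId": "51c0ac9ed714fb3b37d7d5a8",  # placeholder id
        "originator": "localhost",
        "contextResponses": [{
            "contextElement": {
                "type": entity_type,
                "isPattern": "false",
                "id": entity_id,
                "attributes": [
                    {"name": n, "type": t, "value": v} for (n, t, v) in attrs
                ],
            },
            "statusCode": {"code": "200", "reasonPhrase": "OK"},
        }],
    }

payload = json.dumps(
    build_notification("Room1", "Room", [("temperature", "float", "26.5")])
)

# To actually hit the source, POST the payload to a running Cygnus agent,
# e.g. (header requirements may vary by Cygnus version):
#   curl -X POST http://localhost:5050/notify \
#        -H 'Content-Type: application/json' -d "$PAYLOAD"
print(payload[:60])
```

If the notification is accepted but nothing lands in MySQL, the Cygnus log for the OrionMySQLSink is the next place to look, since the same events are evidently reaching the hdfs-sink.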



--
This message was sent by Atlassian JIRA
(v6.4.1#64016)

