[ https://jira.fiware.org/browse/HELP-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Fernando Lopez deleted HELP-9512:
---------------------------------
> [fiware-stackoverflow] Cygnus not starting as a service
> -------------------------------------------------------
>
> Key: HELP-9512
> URL: https://jira.fiware.org/browse/HELP-9512
> Project: Help-Desk
> Issue Type: Monitor
> Reporter: Backlog Manager
> Labels: fiware, fiware-cygnus
>
> Created question in FIWARE Q/A platform on 18-08-2015 at 16:08
> {color: red}Please, ANSWER this question AT{color} https://stackoverflow.com/questions/32075670/cygnus-not-starting-as-a-service
> +Question:+
> Cygnus not starting as a service
> +Description:+
> I've been checking other people's questions about Cygnus config files, but I still couldn't make mine work.
> Starting Cygnus with "service cygnus start" fails.
> When I try to start the service, the log at /var/log/cygnus/cygnus.log says:
> Warning: JAVA_HOME is not set!
> + exec /usr/bin/java -Xmx20m -Dflume.log.file=cygnus.log -cp '/usr/cygnus/conf:/usr/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/libext/*' -Djava.library.path= com.telefonica.iot.cygnus.nodes.CygnusApplication -p 8081 -f /usr/cygnus/conf/agent_1.conf -n cygnusagent
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/usr/cygnus/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/usr/cygnus/plugins.d/cygnus/lib/cygnus-0.8.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: ./logs/cygnus.log (No such file or directory)
> at java.io.FileOutputStream.openAppend(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:210)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
> at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
> at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
> at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
> at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
> at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
> at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
> at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:809)
> at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)
> at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615)
> at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502)
> at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:547)
> at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483)
> at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
> at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
> at org.apache.flume.node.Application.<clinit>(Application.java:58)
> Starting an ordered shutdown of Cygnus
> Stopping sources
> All the channels are empty
> Stopping channels
> Stopping hdfs-channel (lyfecycle state=START)
> Stopping sinks
> Stopping hdfs-sink (lyfecycle state=START)
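Side note for anyone debugging the same startup: the "Warning: JAVA_HOME is not set!" line comes from the service wrapper script, and exporting the variable before it runs silences it. A minimal, hypothetical sketch of deriving JAVA_HOME from wherever the java binary lives (the bin/java-two-levels-below-the-JDK-root layout is an assumption that holds for typical OpenJDK/Oracle installs):

```shell
#!/bin/sh
# Sketch: derive a JAVA_HOME value from the location of the java binary.
# Assumption: the JDK keeps its java executable at <JAVA_HOME>/bin/java,
# which is true for typical OpenJDK/Oracle layouts.

derive_java_home() {
    # $1 is the resolved path to the java executable,
    # e.g. /usr/lib/jvm/java-1.7.0/bin/java
    dirname "$(dirname "$1")"
}

# In a real environment you would resolve symlinks first, e.g.:
#   export JAVA_HOME=$(derive_java_home "$(readlink -f "$(which java)")")
derive_java_home /usr/lib/jvm/java-1.7.0/bin/java
```

The second error in the log (FileNotFoundException for ./logs/cygnus.log) is a separate problem: log4j is resolving a relative path against the working directory, so that fix lives in the logging configuration rather than in the environment.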
> JAVA_HOME is set and I think the issue is with the config files:
> agent_1.conf:
> cygnusagent.sources = http-source
> cygnusagent.sinks = hdfs-sink
> cygnusagent.channels = hdfs-channel
> #=============================================
> # source configuration
> # channel name where to write the notification events
> cygnusagent.sources.http-source.channels = hdfs-channel
> # source class, must not be changed
> cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
> # listening port the Flume source will use for receiving incoming notifications
> cygnusagent.sources.http-source.port = 5050
> # Flume handler that will parse the notifications, must not be changed
> cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
> # URL target
> cygnusagent.sources.http-source.handler.notification_target = /notify
> # Default service (service semantic depends on the persistence sink)
> cygnusagent.sources.http-source.handler.default_service = def_serv
> # Default service path (service path semantic depends on the persistence sink)
> cygnusagent.sources.http-source.handler.default_service_path = def_servpath
> # Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
> cygnusagent.sources.http-source.handler.events_ttl = 10
> # Source interceptors, do not change
> cygnusagent.sources.http-source.interceptors = ts gi
> # TimestampInterceptor, do not change
> cygnusagent.sources.http-source.interceptors.ts.type = timestamp
> # GroupingInterceptor, do not change
> cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
> # Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
> # See the doc/design/interceptors document for more details
> cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
> # ============================================
> # OrionHDFSSink configuration
> # channel name from where to read notification events
> cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
> # sink class, must not be changed
> cygnusagent.sinks.hdfs-sink.type = com.telefonica.iot.cygnus.sinks.OrionHDFSSink
> # Comma-separated list of FQDNs/IP addresses for the HDFS Namenode endpoints
> # If you are using Kerberos authentication, then using FQDNs instead of IP addresses is mandatory
> cygnusagent.sinks.hdfs-sink.hdfs_host = cosmos.lab.fiware.org
> # port of the HDFS service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs
> cygnusagent.sinks.hdfs-sink.hdfs_port = 14000
> # username allowed to write in HDFS
> cygnusagent.sinks.hdfs-sink.hdfs_username = MYUSERNAME
> # OAuth2 token
> cygnusagent.sinks.hdfs-sink.oauth2_token = MYTOKEN
> # how the attributes are stored, either per row or per column (row, column)
> cygnusagent.sinks.hdfs-sink.attr_persistence = column
> # Hive FQDN/IP address of the Hive server
> cygnusagent.sinks.hdfs-sink.hive_host = cosmos.lab.fiware.org
> # Hive port for Hive external table provisioning
> cygnusagent.sinks.hdfs-sink.hive_port = 10000
> # Kerberos-based authentication enabling
> cygnusagent.sinks.hdfs-sink.krb5_auth = false
> # Kerberos username
> cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
> # Kerberos password
> cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
> # Kerberos login file
> cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
> # Kerberos configuration file
> cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf
> #=============================================
> # hdfs-channel configuration
> # channel type (must not be changed)
> cygnusagent.channels.hdfs-channel.type = memory
> # capacity of the channel
> cygnusagent.channels.hdfs-channel.capacity = 1000
> # number of events that can be handled per transaction
> cygnusagent.channels.hdfs-channel.transactionCapacity = 100
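Once the agent does start, the source configured above (http-source on port 5050, notification_target = /notify) can be exercised without Orion by hand-posting an NGSI v1 notification. A sketch; the subscription id, entity, and attribute values are invented for illustration:

```shell
#!/bin/sh
# Hand-built NGSI v1 notification of the shape OrionRestHandler parses.
# All ids and values below are made up for illustration.
PAYLOAD='{
  "subscriptionId": "51c0ac9ed714fb3b37d7d5a8",
  "originator": "localhost",
  "contextResponses": [{
    "contextElement": {
      "type": "Room", "isPattern": "false", "id": "Room1",
      "attributes": [{"name": "temperature", "type": "float", "value": "26.5"}]
    },
    "statusCode": {"code": "200", "reasonPhrase": "OK"}
  }]
}'

# Uncomment to send it to a running agent; the Fiware-Service and
# Fiware-ServicePath headers select the HDFS destination (otherwise the
# default_service / default_service_path values from agent_1.conf apply).
# curl -s -X POST http://localhost:5050/notify \
#      -H 'Content-Type: application/json' \
#      -H 'Fiware-Service: def_serv' \
#      -H 'Fiware-ServicePath: def_servpath' \
#      -d "$PAYLOAD"
echo "$PAYLOAD"
```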
> And cygnus_instance_1.conf:
> CYGNUS_USER=cygnus
> CONFIG_FOLDER=/usr/cygnus/conf
> CONFIG_FILE=/usr/cygnus/conf/agent_1.conf
> # Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters
> # naming conventions, e.g. it appears in .sources.http-source.channels=...
> AGENT_NAME=cygnusagent
> # Name of the logfile located at /var/log/cygnus.
> LOGFILE_NAME=cygnus.log
> # Administration port. Must be unique per instance
> ADMIN_PORT=8081
> # Polling interval (seconds) for the configuration reloading
> POLLING_INTERVAL=30
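One more observation on the pasted log: LOGFILE_NAME only names the file, yet log4j tried to open ./logs/cygnus.log, i.e. a path relative to the working directory. In stock Flume the appender path is assembled from the flume.log.dir and flume.log.file properties in conf/log4j.properties; a hypothetical fragment pinning the directory to an absolute path (property names assume Flume's stock log4j.properties, which Cygnus inherits):

```properties
# /usr/cygnus/conf/log4j.properties (sketch, not the shipped file)
flume.log.dir=/var/log/cygnus
flume.log.file=cygnus.log
log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
```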
> I hope it's a simple issue. If more info is needed, please let me know.
> By the way, I got my token following the instructions on this link.
> Isn't there supposed to be a password field for accessing the COSMOS global instance? Or is the token enough?
> Thank you
--
This message was sent by Atlassian JIRA
(v6.4.1#64016)