[Backlogmanager] [FIWARE-JIRA] (HELP-9560) [fiware-stackoverflow] ERROR 503: Service not available at persist HDFS

Fernando Lopez (JIRA) jira-help-desk at jira.fiware.org
Fri Jun 9 10:27:00 CEST 2017


     [ https://jira.fiware.org/browse/HELP-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fernando Lopez deleted HELP-9560:
---------------------------------


> [fiware-stackoverflow] ERROR 503: Service not available at persist HDFS
> -----------------------------------------------------------------------
>
>                 Key: HELP-9560
>                 URL: https://jira.fiware.org/browse/HELP-9560
>             Project: Help-Desk
>          Issue Type: Monitor
>            Reporter: Backlog Manager
>              Labels: fiware, fiware-cygnus, hdfs
>
> Created question in FIWARE Q/A platform on 04-05-2015 at 09:05
> Please answer this question at https://stackoverflow.com/questions/30024796/error-503-service-not-available-at-persist-hdfs
> +Question:+
> ERROR 503: Service not available at persist HDFS
> +Description:+
> I have an Orion instance with Cygnus on FIWARE Lab; the subscription and notifications work fine, but I cannot persist data to cosmos.lab.fi-ware.org.
> Cygnus returns this error:
> [ERROR - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink.process(OrionSink.java:139)] Persistence error (The talky/talkykar/room6_room directory could not be created in HDFS. HttpFS response: 503 Service unavailable)
> This is my agent_a.conf file:
> cygnusagent.sources = http-source
> cygnusagent.sinks = hdfs-sink
> cygnusagent.channels = hdfs-channel
> #=============================================
> # source configuration
> # channel name where to write the notification events
> cygnusagent.sources.http-source.channels = hdfs-channel
> # source class, must not be changed
> cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
> # listening port the Flume source will use for receiving incoming notifications
> cygnusagent.sources.http-source.port = 5050
> # Flume handler that will parse the notifications, must not be changed
> cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
> # URL target
> cygnusagent.sources.http-source.handler.notification_target = /notify
> # Default service (service semantic depends on the persistence sink)
> cygnusagent.sources.http-source.handler.default_service = talky
> # Default service path (service path semantic depends on the persistence sink)
> cygnusagent.sources.http-source.handler.default_service_path = talkykar
> # Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
> cygnusagent.sources.http-source.handler.events_ttl = 10
> # Source interceptors, do not change
> cygnusagent.sources.http-source.interceptors = ts de
> # Timestamp interceptor, do not change
> cygnusagent.sources.http-source.interceptors.ts.type = timestamp
> # Destination extractor interceptor, do not change
> cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwareconnectors.cygnus.interceptors.DestinationExtractor$Builder
> # Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary
> # See the doc/design/interceptors document for more details
> cygnusagent.sources.http-source.interceptors.de.matching_table = /usr/cygnus/conf/matching_table.conf
> # ============================================
> # OrionHDFSSink configuration
> # channel name from where to read notification events
> cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
> # sink class, must not be changed
> cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
> # Comma-separated list of FQDN/IP addresses of the Cosmos Namenode endpoints
> # If you are using Kerberos authentication, then the usage of FQDNs instead of IP addresses is mandatory
> cygnusagent.sinks.hdfs-sink.cosmos_host = http://cosmos.lab.fi-ware.org
> # port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs, and free choice for infinity
> cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
> # default username allowed to write in HDFS
> cygnusagent.sinks.hdfs-sink.cosmos_default_username = myuser
> # default password for the default username
> cygnusagent.sinks.hdfs-sink.cosmos_default_password = mypass
> # HDFS backend type (webhdfs, httpfs or infinity)
> cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
> # how the attributes are stored, either per row or per column (row, column)
> cygnusagent.sinks.hdfs-sink.attr_persistence = row
> # FQDN/IP address of the Hive server
> cygnusagent.sinks.hdfs-sink.hive_host = http://cosmos.lab.fi-ware.org
> # Hive port for Hive external table provisioning
> cygnusagent.sinks.hdfs-sink.hive_port = 10000
> # Kerberos-based authentication enabling
> cygnusagent.sinks.hdfs-sink.krb5_auth = false
> # Kerberos username
> cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
> # Kerberos password
> cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
> # Kerberos login file
> cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
> # Kerberos configuration file
> cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf
> #=============================================
> And this is the Cygnus log:
> 2015-05-04 09:05:10,434 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink.persist(OrionHDFSSink.java:315)] [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (talky/talkykar/room6_room/room6_room.txt), Data ({"recvTimeTs":"1430723069","recvTime":"2015-05-04T09:04:29.819","entityId":"Room6","entityType":"Room","attrName":"temperature","attrType":"float","attrValue":"26.5","attrMd":[]})
> 2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:255)] HDFS request: PUT http://http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/mped.mlg/talky/talkykar/room6_room?op=mkdirs&user.name=mped.mlg HTTP/1.1
> 2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:186)] Connection request: [route: {}->http://http][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 500]
> 2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:220)] Connection leased: [id: 21][route: {}->http://http][total kept alive: 0; route allocated: 1 of 100; total allocated: 1 of 500]
> 2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.close(DefaultClientConnection.java:169)] Connection org.apache.http.impl.conn.DefaultClientConnection at 5700187d closed
> 2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.shutdown(DefaultClientConnection.java:154)] Connection org.apache.http.impl.conn.DefaultClientConnection at 5700187d shut down
> 2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.releaseConnection(PoolingClientConnectionManager.java:272)] Connection [id: 21][route: {}->http://http] can be kept alive for 9223372036854775807 MILLISECONDS
> 2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.close(DefaultClientConnection.java:169)] Connection org.apache.http.impl.conn.DefaultClientConnection at 5700187d closed
> 2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.releaseConnection(PoolingClientConnectionManager.java:278)] Connection released: [id: 21][route: {}->http://http][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 500]
> 2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:191)] The used HDFS endpoint is not active, trying another one (host=http://cosmos.lab.fi-ware.org)
> 2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink.process(OrionSink.java:139)] Persistence error (The talky/talkykar/room6_room directory could not be created in HDFS. HttpFS response: 503 Service unavailable)
> Thanks.
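
A possible reading of the log above (an assumption, not a confirmed diagnosis): the DEBUG line shows the HttpFS request being built as PUT http://http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/..., i.e. the http:// scheme supplied in cosmos_host is prepended a second time when Cygnus composes the URL, after which the endpoint is reported as "not active" and the sink fails with 503 Service unavailable. A minimal sketch of the relevant agent_a.conf lines, assuming cosmos_host and hive_host are expected to be bare FQDNs with no scheme:

# sketch only, not the original poster's configuration
# Cosmos/HttpFS endpoint given as hostname only (no http:// prefix)
cygnusagent.sinks.hdfs-sink.cosmos_host = cosmos.lab.fi-ware.org
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
# Hive endpoint, same hostname-only convention assumed
cygnusagent.sinks.hdfs-sink.hive_host = cosmos.lab.fi-ware.org

If the error persists with the bare hostname, the HttpFS service can also be checked independently of Cygnus (for example with a plain WebHDFS GETFILESTATUS request against port 14000); a 503 there as well would point to the cosmos.lab.fi-ware.org side rather than to the agent configuration.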



--
This message was sent by Atlassian JIRA
(v6.4.1#64016)


