[Backlogmanager] [FIWARE-JIRA] (HELP-9198) [fiware-stackoverflow] Error at defining a specific field in a Hive Query

Fernando Lopez (JIRA) jira-help-desk at jira.fiware.org
Fri May 26 10:21:01 CEST 2017


     [ https://jira.fiware.org/browse/HELP-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fernando Lopez resolved HELP-9198.
----------------------------------
    Resolution: Done

> [fiware-stackoverflow] Error at defining a specific field in a Hive Query
> -------------------------------------------------------------------------
>
>                 Key: HELP-9198
>                 URL: https://jira.fiware.org/browse/HELP-9198
>             Project: Help-Desk
>          Issue Type: Monitor
>          Components: FIWARE-TECH-HELP
>            Reporter: Backlog Manager
>            Assignee: Francisco Romero
>              Labels: fiware, fiware-cosmos
>
> Created question in FIWARE Q/A platform on 08-07-2015 at 14:07
> {color: red}Please, ANSWER this question AT{color} https://stackoverflow.com/questions/31293270/error-at-defining-a-specific-field-in-a-hive-query
> +Question:+
> Error at defining a specific field in a Hive Query
> +Description:+
> I have an Orion Context Broker connected to Cosmos via Cygnus.
> It works OK: I send new elements to the Context Broker, and Cygnus sends them to Cosmos, which saves them in files.
> The problem appears when I try to run some searches.
> I start Hive and see that some tables have been created for the files Cosmos produced, so I launch some queries.
> The simple one works fine (Hive doesn't launch any MapReduce jobs):
> select * from Table_name;
> But when I want to filter, join, count, or select only some fields, this is what happens:
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = JOB_NAME, Tracking URL = JOB_DETAILS_URL
> Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job  -kill JOB_NAME
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
> 2015-07-08 14:35:12,723 Stage-1 map = 0%,  reduce = 0%
> 2015-07-08 14:35:38,943 Stage-1 map = 100%,  reduce = 100%
> Ended Job = JOB_NAME with errors
> Error during job, obtaining debugging information...
> Examining task ID: TASK_NAME (and more) from job JOB_NAME
> Task with the most failures(4): 
> -----
> Task ID:
>   task_201409031055_6337_m_000000
> URL: TASK_DETAIL_URL
> -----
> FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
> MapReduce Jobs Launched: 
> Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> I have found that the files created by Cygnus differ from the other files: in the Cygnus case, they have to be deserialized with a JAR.
> So I am unsure whether in those cases I have to write a custom MapReduce job, or whether there is already a general method to do this.
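The MapRedTask failure described above is consistent with Hive being unable to deserialize the Cygnus-written rows once a query actually requires a MapReduce stage (a plain `SELECT *` just streams the raw files, so it never hits the problem). A minimal sketch of the usual remedy, assuming the Cygnus files are JSON lines and a JSON SerDe JAR is available on the node — the JAR path, table name, column names, and HDFS location below are illustrative assumptions, not taken from the issue:

```sql
-- Register the SerDe JAR in the Hive session so map tasks can load it
-- (path is an assumption; use the JAR actually shipped with your Cosmos node).
ADD JAR /path/to/json-serde.jar;

-- Declare an external table over the Cygnus output directory so Hive knows
-- how to deserialize each row (columns and location are illustrative).
CREATE EXTERNAL TABLE IF NOT EXISTS cygnus_data (
  recvTime  STRING,
  entityId  STRING,
  attrName  STRING,
  attrValue STRING
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION '/user/myuser/mydata/';

-- Filtering queries now launch MapReduce jobs that can read the rows.
SELECT attrValue FROM cygnus_data WHERE attrName = 'temperature';
```

If the JAR is not registered in the session (or not present on the task nodes), the map tasks fail exactly as in the log above, with `return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask` and no more detail on the client side; the task-attempt logs at the `TASK_DETAIL_URL` usually show the underlying `ClassNotFoundException`.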



--
This message was sent by Atlassian JIRA
(v6.4.1#64016)

