[Fiware-lab-help] [FINESCE-WP4] COSMOS : Error after upgrading to HiveServer2

Pasquale Andriani pasquale.andriani at eng.it
Mon Aug 31 12:08:36 CEST 2015


Dear Francisco,
I'm quite disappointed with the current status.

I understand the need to upgrade your infrastructure, but you should bear
in mind that FINESCE is one of your users. Right now, after the change to
HiveServer2 and without any in-memory computation, we are seeing only
performance issues and no other advantage, and the very high query
response times make our application unusable.

We have to show a working demo (it worked until just before the change)
during the FINESCE final event on September 14th. How can you help us
solve the performance issue in a reasonable time?

For example, would it be possible to upgrade to the latest version of
Spark, with its Thrift JDBC/ODBC server on Spark SQL, instead of testing
the old Shark/Spark version with Hive 0.13?

Kind regards,
P.

Pasquale Andriani
Direzione Ricerca e Innovazione - Research & Development Lab
pasquale.andriani at eng.it

Engineering Ingegneria Informatica spa
Via Riccardo Morandi, 32 - 00148 Roma
Tel. +39-06.87594138
Mob. +39 3924698746
Fax. +39-06.83074408
www.eng.it

On Fri, Aug 28, 2015 at 3:56 PM, FRANCISCO ROMERO BUENO <
francisco.romerobueno at telefonica.com> wrote:

> Hi Dario,
>
> The performance of the server is what it is; it depends on the available
> infrastructure (not as "big" as we would like) and the number of users
> doing analysis at the same time (not only Hive queries but also MapReduce
> jobs and many other kinds of scripts running concurrently).
>
> We decided to stop using Shark because it has been discontinued by the
> Apache community, and because the version of Hive compiled for Shark was
> pretty old (Hive 0.9.0 for Spark/Shark 0.8), so a lot of interesting
> features in terms of functionality, security, etc. were missing. In fact,
> our aim was to upgrade to a more recent version of Hive, but we found Hive
> 0.13.0 was the latest one working with our also pretty old Hadoop.
>
> We are losing the in-memory computations, OK, but HiveServer2 is supposed
> to be faster than the old Hive server because it accepts concurrent
> queries; is there any way to take advantage of this feature by modifying
> your code?
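[Editor's note: a minimal sketch of what taking advantage of concurrent queries could look like on the client side. This is an assumption, not code from the thread: each worker thread would need its own JDBC connection (Hive JDBC connections are not thread-safe), and `runQuery()` below is a hypothetical placeholder standing in for the real `DriverManager.getConnection(...)` plus `Statement.executeQuery(...)` round trip shown in the attached HiveBasicClient.]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class ConcurrentHiveSketch {

    // Hypothetical placeholder: in a real client this would open a
    // per-thread Hive JDBC connection and execute the query against
    // HiveServer2, which accepts concurrent sessions.
    static String runQuery(String hiveql) {
        return "result-of(" + hiveql + ")";
    }

    // Submit each HiveQL sentence to its own worker thread instead of
    // executing them serially, and collect results in submission order.
    public static List<String> runConcurrently(List<String> queries, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (final String q : queries) {
                futures.add(pool.submit(new Callable<String>() {
                    public String call() {
                        return runQuery(q);
                    }
                }));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // blocks until that query finishes
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```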
>
> That being said, I could start the old Shark server just for you,
> listening on a different port than TCP/10000 and only accepting queries
> from your machine. However, the already deployed version of Spark/Shark,
> as said, is 0.8, and it was compiled to work with Hive 0.9.0. Now Hive is
> 0.13.0 and I think the Hive metastore is no longer compatible with
> Spark/Shark. I would have to check this and run some tests; a parallel
> metastore could even be created just for you. But this requires time.
>
> The problem is I am supposed to be on holidays until September the 14th :)
> This week I was attending email and supporting FIWARE users because I had
> a broadband connection available; but this will change from tomorrow
> until the 7th of September, because I'll be out of Spain. From the 7th to
> the 14th I'll still be on holidays... but I'll be at home. Well, my wife
> will kill me, but I think I can do those tests then.
>
> Best regards,
> Francisco
> ________________________________________
> From: Pellegrino Dario <dario.pellegrino at eng.it>
> Sent: Thursday, 27 August 2015 16:43:33
> To: FRANCISCO ROMERO BUENO
> Cc: Leandro Lombardo; Massimiliano Nigrelli; Luigi Briguglio; Pasquale
> Andriani; fiware-lab-help at lists.fi-ware.org
> Subject: R: [Fiware-lab-help] [FINESCE-WP4] COSMOS : Error after upgrading
> to HiveServer2
>
> Hi Francisco,
> at the moment the beeline connection works properly, but we are still
> having the performance issues (as I wrote yesterday in my email below),
> which do not allow regular usage of our application.
> In mid-September we will prepare the final event session and the final
> review of our research project, and we will present a live demo to the
> review panel.
> As you can surely understand, it is very important for us that our
> application works properly, as it did before the Hive server update.
> Are you working to solve this issue?
> Best regards,
> Dario
>
>
> Dario Pellegrino
> Direzione Ricerca e Innovazione - R&D Lab
> dario.pellegrino at eng.it
>
> Engineering Ingegneria Informatica spa
> Viale Regione Siciliana, 7275 - 90146 Palermo
> Tel. +39-091.7511847
> Mob. +39-346.5325257
> www.eng.it
>
> -----Original message-----
> From: Pellegrino Dario
> Sent: Thursday, 27 August 2015 11:08
> To: 'FRANCISCO ROMERO BUENO'
> Cc: 'Leandro Lombardo'; 'Massimiliano Nigrelli'; 'Luigi Briguglio';
> 'Pasquale Andriani'; 'fiware-lab-help at lists.fi-ware.org'
> Subject: R: [Fiware-lab-help] [FINESCE-WP4] COSMOS : Error after upgrading
> to HiveServer2
>
> Hi Francisco,
> right now I am facing a connection issue with HiveServer2.
> Is Hive Server down?
> Regards,
> Dario
>
> Dario Pellegrino
> Direzione Ricerca e Innovazione - R&D Lab dario.pellegrino at eng.it
>
> Engineering Ingegneria Informatica spa
> Viale Regione Siciliana, 7275 - 90146 Palermo Tel. +39-091.7511847 Mob.
> +39-346.5325257 www.eng.it
>
> -----Original message-----
> From: Pellegrino Dario
> Sent: Wednesday, 26 August 2015 17:57
> To: 'FRANCISCO ROMERO BUENO'
> Cc: Leandro Lombardo; Massimiliano Nigrelli; Luigi Briguglio; Pasquale
> Andriani; fiware-lab-help at lists.fi-ware.org
> Subject: R: [Fiware-lab-help] [FINESCE-WP4] COSMOS : Error after upgrading
> to HiveServer2
>
> Hi Francisco,
> first of all thanks for your reply.
> The problem was not in our connection string; rather, with HiveServer2
> the query response time has increased significantly.
> I solved the fatal error in my application, but the performance is not
> yet acceptable. For example, the response time for a simple query is now
> 40 seconds, while before the upgrade to HiveServer2 it was only 5 seconds.
> Could you check why the performance has decreased?
> Best regards,
> Dario
>
> Dario Pellegrino
> Direzione Ricerca e Innovazione - R&D Lab dario.pellegrino at eng.it
>
> Engineering Ingegneria Informatica spa
> Viale Regione Siciliana, 7275 - 90146 Palermo Tel. +39-091.7511847 Mob.
> +39-346.5325257 www.eng.it
>
>
> -----Original message-----
> From: FRANCISCO ROMERO BUENO [mailto:francisco.romerobueno at telefonica.com]
> Sent: Wednesday, 26 August 2015 13:02
> To: Pellegrino Dario; agalani at unipi.gr; fiware-lab-help at lists.fi-ware.org
> Cc: Leandro Lombardo; Massimiliano Nigrelli; Luigi Briguglio; Pasquale
> Andriani
> Subject: Re: [Fiware-lab-help] [FINESCE-WP4] COSMOS : Error after
> upgrading to HiveServer2
>
> Dear Dario,
>
> I will need to know the code around this trace:
> at
> eu.finesce.emarketplace.client.HiveClient.getHiveConnection(HiveClient.java:102)
>
> It should be something similar to, or inspired by, the attached code,
> which is working for me:
>
> Connecting to jdbc:hive2://
> 130.206.80.46:10000/frb?user=frb&password=XXXXXXXXXX
> remotehive> select * from frb_one;
> 1431949600,2015-05-18T11:46:40.171Z,Room1,Room,temperature,centigrade,26.5
> 1431949749,2015-05-18T11:49:09.506Z,Room1,Room,temperature,centigrade,26.5
> 1432014378,2015-05-19T05:46:18.361Z,Room1,Room,temperature,centigrade,26.5
> 1432221197,2015-05-21T15:13:17.979Z,Room1,Room,temperature,centigrade,26.5
> remotehive>
>
> Regarding beeline, the connection must be done without the comma character:
>
> $ beeline
> Beeline version 0.13.0 by Apache Hive
> beeline> !connect jdbc:hive2://localhost:10000 frb XXXXXXXX
> beeline> org.apache.hive.jdbc.HiveDriver
> Connecting to jdbc:hive2://localhost:10000 Connected to: Apache Hive
> (version 0.13.0)
> Driver: Hive JDBC (version 0.13.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://localhost:10000> select * from frb.frb_one;
>
> +---------------------+---------------------------+-------------------+---------------------+-------------------+-------------------+--------------------+
> | frb_one.recvtimets  | frb_one.recvtime          | frb_one.entityid  | frb_one.entitytype  | frb_one.attrname  | frb_one.attrtype  | frb_one.attrvalue  |
> +---------------------+---------------------------+-------------------+---------------------+-------------------+-------------------+--------------------+
> | 1431949600          | 2015-05-18T11:46:40.171Z  | Room1             | Room                | temperature       | centigrade        | 26.5               |
> | 1431949749          | 2015-05-18T11:49:09.506Z  | Room1             | Room                | temperature       | centigrade        | 26.5               |
> | 1432014378          | 2015-05-19T05:46:18.361Z  | Room1             | Room                | temperature       | centigrade        | 26.5               |
> | 1432221197          | 2015-05-21T15:13:17.979Z  | Room1             | Room                | temperature       | centigrade        | 26.5               |
> +---------------------+---------------------------+-------------------+---------------------+-------------------+-------------------+--------------------+
> 4 rows selected (0.368 seconds)
> 0: jdbc:hive2://localhost:10000>
>
> Attached code:
>
> package com.telefonica.iot.hivebasicclient;
>
> import java.io.BufferedReader;
> import java.io.IOException;
> import java.io.InputStreamReader;
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
>
> /**
>  *
>  * @author Francisco Romero Bueno frb at tid.es
>  *
>  * Basic remote client for Hive mimicking the native Hive CLI behaviour.
>  *
>  * Can be used as the base for more complex clients, interactive or
>  * non-interactive.
>  */
> public final class HiveBasicClient {
>     // JDBC driver required for Hive connections
>     private static final String DRIVERNAME =
> "org.apache.hive.jdbc.HiveDriver";
>     private static Connection con;
>
>     /**
>      * Constructor.
>      */
>     private HiveBasicClient() {
>     } // HiveBasicClient
>
>     /**
>      *
>      * @param hiveServer
>      * @param hivePort
>      * @param dbName
>      * @param hadoopUser
>      * @param hadoopPassword
>      * @return
>      */
>     private static Connection getConnection(String hiveServer, String
> hivePort, String dbName,
>             String hadoopUser, String hadoopPassword) {
>         try {
>             // dynamically load the Hive JDBC driver
>             Class.forName(DRIVERNAME);
>         } catch (ClassNotFoundException e) {
>             System.out.println(e.getMessage());
>             return null;
>         } // try catch
>
>         try {
>             System.out.println("Connecting to jdbc:hive2://" + hiveServer
> + ":" + hivePort
>                     + "/" + dbName + "?user=" + hadoopUser +
> "&password=XXXXXXXXXX");
>             // return a connection based on the Hive JDBC driver
>             return DriverManager.getConnection("jdbc:hive2://" +
> hiveServer + ":" + hivePort + "/"
>                     + dbName, hadoopUser, hadoopPassword);
>         } catch (SQLException e) {
>             System.out.println(e.getMessage());
>             return null;
>         } // try catch
>     } // getConnection
>
>     /**
>      *
>      * @param query
>      */
>     private static void doExecute(String query) {
>         try {
>             // from here on, everything is SQL!
>             Statement stmt = con.createStatement();
>             ResultSet res = stmt.executeQuery(query);
>
>             // iterate on the result
>             while (res.next()) {
>                 String s = "";
>
>                 for (int i = 1; i < res.getMetaData().getColumnCount();
> i++) {
>                     s += res.getString(i) + ",";
>                 } // for
>
>                 s += res.getString(res.getMetaData().getColumnCount());
>                 System.out.println(s);
>             } // while
>
>             // close everything
>             res.close();
>             stmt.close();
>         } catch (SQLException e) {
>             System.out.println(e.getMessage());
>         } // try catch
>     } // doExecute
>
>     /**
>      *
>      * @param query
>      */
>     private static void doUpdate(String query) {
>         try {
>             // from here on, everything is SQL!
>             Statement stmt = con.createStatement();
>             stmt.executeUpdate(query);
>
>             // close everything
>             stmt.close();
>         } catch (SQLException e) {
>             System.out.println(e.getMessage());
>         } // try catch
>     } // doUpdate
>
>     /**
>      *
>      * @param args
>      */
>     public static void main(String[] args) {
>         // get the arguments
>         String hiveServer = args[0];
>         String hivePort = args[1];
>         String dbName = args[2];
>         String cosmosUser = args[3];
>         String cosmosPassword = args[4];
>
>         // get a connection to the Hive server running on the specified IP
> address, listening on 10000/TCP port
>         // authenticate using my credentials
>         con = getConnection(hiveServer, hivePort, dbName, cosmosUser,
> cosmosPassword);
>
>         if (con == null) {
>             System.out.println("Could not connect to the Hive server!");
>             System.exit(-1);
>         } // if
>
>         // add JSON serde
>         doUpdate("add JAR
> /usr/local/apache-hive-0.13.0-bin/lib/json-serde-1.3.1-SNAPSHOT-jar-with-dependencies.jar");
>
>         // use the database
>         doUpdate("use " + dbName);
>
>         while (true) {
>             // prompt the user for a set of HiveQL sentences (';' separated)
>             System.out.print("remotehive> ");
>
>             // open the standard input
>             BufferedReader br = new BufferedReader(new
> InputStreamReader(System.in));
>
>             // read the HiveQL sentences from the standard input
>             String hiveqlSentences = null;
>
>             try {
>                 hiveqlSentences = br.readLine();
>             } catch (IOException e) {
>                 System.out.println("IO error trying to read a HiveQL
> query: " + e.getMessage());
>                 System.exit(1);
>             } // try catch
>
>             if (hiveqlSentences != null) {
>                 // get all the queries within the input HiveQL sentences
>                 String[] queries = hiveqlSentences.split(";");
>
>                 // execute each query
>                 for (String query : queries) {
>                     doExecute(query);
>                 } // for
>             } // if
>         } // while
>     } // main
>
> } //HiveClientTest
>
> Best regards,
> Francisco
>
> ________________________________________
> From: Pellegrino Dario <dario.pellegrino at eng.it>
> Sent: Wednesday, 26 August 2015 12:09
> To: agalani at unipi.gr; fiware-lab-help at lists.fi-ware.org
> Cc: Leandro Lombardo; FRANCISCO ROMERO BUENO; Massimiliano Nigrelli; Luigi
> Briguglio; Pasquale Andriani
> Subject: R: [Fiware-lab-help] [FINESCE-WP4] COSMOS : Error after upgrading
> to HiveServer2
>
> Hi Aristi,
> I have done some other tests. Could you please forward the information
> below to second-level support?
>
> 1) TOMCAT Log
> Aug 26, 2015 11:29:43 AM eu.finesce.emarketplace.client.HiveClient
> getHiveConnection
> SEVERE: HIVE Connection Error
> java.sql.SQLException: Could not open connection to jdbc:hive2://
> 130.206.80.46:10000: java.net.SocketException: Connection reset
>         at
> org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:206)
>         at
> org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:178)
>         at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
>         at java.sql.DriverManager.getConnection(DriverManager.java:571)
>         at java.sql.DriverManager.getConnection(DriverManager.java:215)
>         at
> eu.finesce.emarketplace.client.HiveClient.getHiveConnection(HiveClient.java:102)
>         at
> eu.finesce.emarketplace.client.HiveClient.getloadpredictionBySector(HiveClient.java:1017)
>         at
> eu.finesce.emarketplace.RestHive2Cosmos.getLoadPredictionbySector(RestHive2Cosmos.java:251)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
>         at
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:151)
>         at
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:171)
>         at
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:195)
>         at
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:104)
>         at
> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:402)
>         at
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:349)
>         at
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:106)
>         at
> org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:259)
>         at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
>         at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
>         at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>         at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>         at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
>         at
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:318)
>         at
> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:236)
>         at
> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1010)
>         at
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:373)
>         at
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:382)
>         at
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:345)
>         at
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:220)
>         at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
>         at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
>         at
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
>         at
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
>         at
> org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
>         at
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
>         at
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
>         at
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
>         at
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
>         at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
>         at
> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:987)
>         at
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:579)
>         at
> org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException:
> java.net.SocketException: Connection reset
>         at
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>         at
> org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>         at
> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178)
>         at
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:288)
>         at
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>         at
> org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:203)
>         ... 48 more
> Caused by: java.net.SocketException: Connection reset
>         at java.net.SocketInputStream.read(SocketInputStream.java:196)
>         at java.net.SocketInputStream.read(SocketInputStream.java:122)
>         at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
>         at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>         at
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>         ... 53 more
>
> 2) BEELINE TEST
> I tried to use the JDBC hive2 connection in the "Beeline Hive Client" on COSMOS:
>
>  -bash-4.1$ beeline
> Beeline version 0.13.0 by Apache Hive
> beeline> !connect jdbc:hive2://130.206.80.46:10000", "FINESCE-WP4",
> "password"
> scan complete in 24ms
> Connecting to jdbc:hive2://130.206.80.46:10000",
> Error: Invalid URL: jdbc:hive2://130.206.80.46:10000",
> (state=08S01,code=0)
>
>
> Best regards,
> Dario
>
>
>
> Dario Pellegrino
> Direzione Ricerca e Innovazione - R&D Lab dario.pellegrino at eng.it
>
> Engineering Ingegneria Informatica spa
> Viale Regione Siciliana, 7275 - 90146 Palermo Tel. +39-091.7511847 Mob.
> +39-346.5325257 www.eng.it
>
> -----Original message-----
> From: Aristi Galani [mailto:agalani at unipi.gr]
> Sent: Tuesday, 25 August 2015 18:29
> To: Pellegrino Dario
> Cc: fiware-lab-help at lists.fi-ware.org; Leandro Lombardo; FRANCISCO ROMERO
> BUENO (francisco.romerobueno at telefonica.com); Massimiliano Nigrelli;
> Luigi Briguglio; SERGIO GARCIA GOMEZ (sergio.garciagomez at telefonica.com);
> Pasquale Andriani
> Subject: Re: [Fiware-lab-help] [FINESCE-WP4] COSMOS : Error after
> upgrading to HiveServer2
>
> Dear Dario,
>
> We forwarded your request to second level support.
>
> Kind regards
> IWAVE team, on behalf of helpdesk team
>
>
> > Dear all,
> > a few days ago we received your mail about the Hive server upgrade
> > (HiveServer2 instead of Shark) and we modified our Java code as you
> > recommended.
> > In particular, we loaded the new driver
> > "org.apache.hive.jdbc.HiveDriver", we modified the JDBC connection
> > "return DriverManager.getConnection("jdbc:hive2://" + hiveServer +
> > ":" + hivePort + "/default", hdfsUser, hdfsPwd);" and we changed the
> > POM.xml file (Hive 0.13.0 dependencies).
> > Unfortunately, after these changes our application doesn't work.
> > You can find the error message below:
> >
> > 25-ago-2015 15.34.01 org.apache.catalina.core.StandardWrapperValve
> > invoke
> > GRAVE: Servlet.service() for servlet
> > [eu.finesce.emarketplace.RestHiveInputApplication] in context with
> > path [/rest2cosmos] threw exception
> > [java.lang.IllegalMonitorStateException]
> > with root cause
> > java.lang.IllegalMonitorStateException
> >        at
> > java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(Unknown
> > Source)
> >        at
> > java.util.concurrent.locks.AbstractQueuedSynchronizer.release(Unknown
> > Source)
> >        at java.util.concurrent.locks.ReentrantLock.unlock(Unknown Source)
> >        at
> >
> org.apache.hive.jdbc.HiveStatement.closeClientOperation(HiveStatement.java:175)
> >        at
> >
> org.apache.hive.jdbc.HiveQueryResultSet.close(HiveQueryResultSet.java:293)
> >        at
> >
> eu.finesce.emarketplace.client.HiveClient.getmeterDetails(HiveClient.java:1386)
> >        at
> >
> eu.finesce.emarketplace.RestHive2Cosmos.getMeterDetails(RestHive2Cosmos.java:299)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> >        at java.lang.reflect.Method.invoke(Unknown Source)
> >        at
> >
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
> >        at
> >
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:151)
> >        at
> >
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:171)
> >        at
> >
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:195)
> >        at
> >
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:104)
> >        at
> >
> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:402)
> >        at
> >
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:349)
> >        at
> >
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:106)
> >        at
> > org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:259)
> >        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
> >        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
> >        at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
> >        at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
> >        at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
> >        at
> >
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:318)
> >        at
> > org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:236)
> >        at
> >
> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1010)
> >        at
> > org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:373)
> >        at
> >
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:382)
> >        at
> >
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:345)
> >        at
> >
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:220)
> >        at
> >
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> >        at
> >
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> >        at
> >
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
> >        at
> >
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
> >        at
> >
> org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
> >        at
> >
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
> >        at
> >
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
> >        at
> > org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
> >        at
> >
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
> >        at
> >
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
> >        at
> >
> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
> >        at
> >
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
> >        at
> >
> org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:313)
> >        at
> > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown
> > Source)
> >        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
> > Source)
> >        at java.lang.Thread.run(Unknown Source)
> >
> > Waiting for your feedback, thank you in advance.
> > Best regards,
> > Dario Pellegrino
> >
> > Dario Pellegrino
> > Direzione Ricerca e Innovazione - R&D Lab
> > dario.pellegrino at eng.it<mailto:dario.pellegrino at eng.it>
> >
> > Engineering Ingegneria Informatica spa Viale Regione Siciliana, 7275 -
> > 90146 Palermo Tel. +39-091.7511847 Mob. +39-346.5325257
> > www.eng.it<http://www.eng.it/>
> >
> > _______________________________________________
> > Fiware-lab-help mailing list
> > Fiware-lab-help at lists.fi-ware.org
> > https://lists.fi-ware.org/listinfo/fiware-lab-help
> >
>
>
>
> ________________________________
>
> Este mensaje y sus adjuntos se dirigen exclusivamente a su destinatario,
> puede contener información privilegiada o confidencial y es para uso
> exclusivo de la persona o entidad de destino. Si no es usted. el
> destinatario indicado, queda notificado de que la lectura, utilización,
> divulgación y/o copia sin autorización puede estar prohibida en virtud de
> la legislación vigente. Si ha recibido este mensaje por error, le rogamos
> que nos lo comunique inmediatamente por esta misma vía y proceda a su
> destrucción.
>
> The information contained in this transmission is privileged and
> confidential information intended only for the use of the individual or
> entity named above. If the reader of this message is not the intended
> recipient, you are hereby notified that any dissemination, distribution or
> copying of this communication is strictly prohibited. If you have received
> this transmission in error, do not read it. Please immediately reply to the
> sender that you have received this communication in error and then delete
> it.
>
> Esta mensagem e seus anexos se dirigem exclusivamente ao seu destinatário,
> pode conter informação privilegiada ou confidencial e é para uso exclusivo
> da pessoa ou entidade de destino. Se não é vossa senhoria o destinatário
> indicado, fica notificado de que a leitura, utilização, divulgação e/ou
> cópia sem autorização pode estar proibida em virtude da legislação vigente.
> Se recebeu esta mensagem por erro, rogamos-lhe que nos o comunique
> imediatamente por esta mesma via e proceda a sua destruição
>
>

