[ https://jira.fiware.org/browse/HELP-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Fernando Lopez updated HELP-9087:
---------------------------------
Description:
Created question in FIWARE Q/A platform on 28-11-2015 at 19:11
{color: red}Please ANSWER this question AT{color} https://stackoverflow.com/questions/33974754/how-can-i-read-and-transfer-chunks-of-file-with-hadoop-webhdfs
+Question:+
How can I read and transfer chunks of a file with Hadoop WebHDFS?
+Description:+
I need to transfer big files (at least 14 MB) from the Cosmos instance of the FIWARE Lab to my backend.
I used the Spring RestTemplate as a client interface for the Hadoop WebHDFS REST API described here, but I ran into an I/O exception:
{noformat}
Exception in thread "main" org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/<user.name>/<path>?op=open&user.name=<user.name>":Truncated chunk ( expected size: 14744230; actual size: 11285103); nested exception is org.apache.http.TruncatedChunkException: Truncated chunk ( expected size: 14744230; actual size: 11285103)
	at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:580)
	at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:545)
	at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:466)
{noformat}
This is the actual code that generates the exception (the whole response body is buffered in memory as a byte[] before anything is written to disk):
{code:java}
import java.io.File;
import java.io.FileOutputStream;
import org.apache.commons.io.IOUtils;
import org.springframework.http.*;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.http.converter.ByteArrayHttpMessageConverter;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.util.UriComponentsBuilder;

RestTemplate restTemplate = new RestTemplate();
restTemplate.setRequestFactory(new HttpComponentsClientHttpRequestFactory());
restTemplate.getMessageConverters().add(new ByteArrayHttpMessageConverter());
HttpHeaders headers = new HttpHeaders();   // 'headers' was undefined in the original snippet
HttpEntity<?> entity = new HttpEntity<>(headers);
UriComponentsBuilder builder = UriComponentsBuilder.fromHttpUrl(hdfs_path)
        .queryParam("op", "OPEN")
        .queryParam("user.name", user_name);
// The entire file is read into a byte[] before being written to disk
ResponseEntity<byte[]> response = restTemplate.exchange(
        builder.build().encode().toUri(), HttpMethod.GET, entity, byte[].class);
FileOutputStream output = new FileOutputStream(new File(local_path));
IOUtils.write(response.getBody(), output);
output.close();
{code}
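A minimal streaming alternative, sketched under the assumption that Apache Commons IO is on the classpath and reusing the restTemplate, hdfs_path, user_name and local_path placeholders from above: RestTemplate's execute(...) hands the raw response stream to a ResponseExtractor, so the body is copied to disk as it arrives instead of being buffered in a byte[].
{code:java}
// Sketch (not the original code): stream the WebHDFS response straight to disk.
import java.io.FileOutputStream;
import org.apache.commons.io.IOUtils;
import org.springframework.http.HttpMethod;
import org.springframework.web.util.UriComponentsBuilder;

restTemplate.execute(
        UriComponentsBuilder.fromHttpUrl(hdfs_path)
                .queryParam("op", "OPEN")
                .queryParam("user.name", user_name)
                .build().encode().toUri(),
        HttpMethod.GET,
        null,                                  // no request customization needed
        clientResponse -> {
            try (FileOutputStream output = new FileOutputStream(local_path)) {
                // copyLarge streams the body in small buffers and returns the byte count
                return IOUtils.copyLarge(clientResponse.getBody(), output);
            }
        });
{code}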
I think this is due to a transfer timeout on the Cosmos instance, so I tried sending a curl request to the same path with the offset, buffer and length parameters specified, but they seem to be ignored: I got back the whole file.
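For reference, a minimal sketch of the intended chunked read, assuming the endpoint honored the offset and length parameters documented for the WebHDFS OPEN operation; it reuses restTemplate, entity, hdfs_path, user_name and local_path from the snippet above.
{code:java}
// Hypothetical chunked download via the documented WebHDFS offset/length
// parameters. The curl test above suggests the Cosmos gateway ignores them,
// so this shows the intended behavior and is not verified against Cosmos.
long offset = 0;
final long CHUNK = 4L * 1024 * 1024;                   // 4 MB per request (arbitrary)
try (FileOutputStream output = new FileOutputStream(local_path)) {
    while (true) {
        ResponseEntity<byte[]> chunk = restTemplate.exchange(
                UriComponentsBuilder.fromHttpUrl(hdfs_path)
                        .queryParam("op", "OPEN")
                        .queryParam("user.name", user_name)
                        .queryParam("offset", offset)  // where this chunk starts
                        .queryParam("length", CHUNK)   // how many bytes to return
                        .build().encode().toUri(),
                HttpMethod.GET, entity, byte[].class);
        byte[] body = chunk.getBody();
        if (body == null || body.length == 0) break;   // nothing left to read
        output.write(body);
        offset += body.length;
        if (body.length < CHUNK) break;                // short read: last chunk
    }
}
{code}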
Thanks in advance.
> [fiware-stackoverflow] How can I read and transfer chunks of a file with Hadoop WebHDFS?
> ------------------------------------------------------------------------------------------
>
> Key: HELP-9087
> URL: https://jira.fiware.org/browse/HELP-9087
> Project: Help-Desk
> Issue Type: Monitor
> Components: FIWARE-TECH-HELP
> Reporter: Backlog Manager
> Labels: fiware, fiware-cosmos, hadoop, httpclient, webhdfs
>
--
This message was sent by Atlassian JIRA
(v6.4.1#64016)