Discussion:
Error putting files in the HDFS
Basu,Indrashish
2013-10-08 17:42:45 UTC
Hello,

My name is Indrashish Basu and I am a Master's student in the Department
of Electrical and Computer Engineering.

Currently I am doing my research project on a Hadoop implementation on an
ARM processor, and I am facing an issue while trying to run a sample Hadoop
program on it. Every time I try to put files into HDFS, I get the error
below.


13/10/07 11:31:29 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/bin/cpu-kmeans2D could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:739)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)

13/10/07 11:31:29 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/10/07 11:31:29 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/root/bin/cpu-kmeans2D" - Aborting...
put: java.io.IOException: File /user/root/bin/cpu-kmeans2D could only be replicated to 0 nodes, instead of 1


I tried reinitializing the namenode and datanode by deleting all the old
logs on the master and slave nodes, as well as the folders under
/app/hadoop/, after which I formatted the namenode and started the
processes again (bin/start-all.sh), but still no luck.

I generated the admin report (pasted below) after the restart, and it
seems the datanode is not getting started.

***@tegra-ubuntu:~/hadoop-gpu-master/hadoop-gpu-0.20.1# bin/hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: �%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)


I have tried the following methods to debug the process:

1) I logged in to the Hadoop home directory and removed all the old
logs (rm -rf logs/*)

2) Next I deleted the contents of the /app/hadoop directory on all my
master and slave nodes (rm -rf /app/hadoop/*)

3) I formatted the namenode (bin/hadoop namenode -format)

4) I started all the processes - first the namenode and datanode, and then
MapReduce. I ran jps in the terminal to ensure that all the processes
(NameNode, DataNode, JobTracker, TaskTracker) were up and running.

5) After doing this, I recreated the directories in DFS.

However, still no luck with the process. (The full command sequence is sketched below for reference.)
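
For reference, here is the same reset procedure written out as one command
sequence (a minimal sketch; it assumes the Hadoop home is
~/hadoop-gpu-master/hadoop-gpu-0.20.1 and that hadoop.tmp.dir points at
/app/hadoop/tmp, as the prompts and log paths in this thread suggest):

cd ~/hadoop-gpu-master/hadoop-gpu-0.20.1
bin/stop-all.sh                  # stop any daemons that are still running
rm -rf logs/*                    # step 1: clear old logs
rm -rf /app/hadoop/*             # step 2: wipe the HDFS name/data directories on master and slaves (destroys all HDFS data)
bin/hadoop namenode -format      # step 3: reformat the namenode (master only)
bin/start-all.sh                 # step 4: start the HDFS and MapReduce daemons
jps                              # should list NameNode, DataNode, JobTracker, TaskTracker
bin/hadoop dfsadmin -report      # the datanode count should now be non-zero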


Can you kindly assist with this? I am new to Hadoop and have no idea how
to proceed.




Regards,
--
Indrashish Basu
Graduate Student
Department of Electrical and Computer Engineering
University of Florida
Jitendra Yadav
2013-10-08 17:55:25 UTC
As per your dfs report, the available DataNode count is ZERO in your cluster.

Please check your data node logs.
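
For example, something like this (a sketch; the log file name is an
assumption based on the root prompt and the tegra-ubuntu hostname that
appear elsewhere in this thread, so adjust it to whatever is actually in
your logs/ directory):

cd ~/hadoop-gpu-master/hadoop-gpu-0.20.1
ls logs/                                                  # the datanode log is hadoop-<user>-datanode-<hostname>.log
tail -n 100 logs/hadoop-root-datanode-tegra-ubuntu.log    # look for ERROR/FATAL lines around startup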

Regards
Jitendra
Basu,Indrashish
2013-10-08 18:01:19 UTC
Hi Jitendra,

This is what I am getting in the datanode logs:

2013-10-07 11:27:41,960 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/data is not formatted.
2013-10-07 11:27:41,961 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2013-10-07 11:27:42,094 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2013-10-07 11:27:42,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2013-10-07 11:27:42,107 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2013-10-07 11:27:42,369 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-10-07 11:27:42,632 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2013-10-07 11:27:42,633 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2013-10-07 11:27:42,634 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2013-10-07 11:27:42,634 INFO org.mortbay.log: jetty-6.1.14
2013-10-07 11:31:29,821 INFO org.mortbay.log: Started ***@0.0.0.0:50075
2013-10-07 11:31:29,843 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2013-10-07 11:31:29,912 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2013-10-07 11:31:29,922 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-10-07 11:31:29,922 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2013-10-07 11:31:29,934 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(tegra-ubuntu:50010, storageID=, infoPort=50075, ipcPort=50020)
2013-10-07 11:31:29,971 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id DS-1027334635-127.0.1.1-50010-1381170689938 is assigned to data-node 10.227.56.195:50010
2013-10-07 11:31:29,973 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.227.56.195:50010, storageID=DS-1027334635-127.0.1.1-50010-1381170689938, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/app/hadoop/tmp/dfs/data/current'}
2013-10-07 11:31:29,974 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2013-10-07 11:31:30,032 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 19 msecs
2013-10-07 11:31:30,035 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
2013-10-07 11:41:42,222 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 20 msecs
2013-10-07 12:41:43,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 22 msecs
2013-10-07 13:41:44,755 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 13 msecs


I restarted the datanode and ran jps to make sure that it is up and
running.
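
For reference, roughly what that check looks like (a sketch; the exact set
of daemons depends on the setup, and jps only shows that the processes are
alive, not that the datanode has actually registered with the namenode):

jps
# expected output on a single-node setup, roughly (PIDs will differ):
#   NameNode
#   SecondaryNameNode
#   DataNode
#   JobTracker
#   TaskTracker
#   Jps
bin/hadoop dfsadmin -report    # confirms whether the namenode actually sees the datanode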

Regards,
Indrashish
Basu,Indrashish
2013-10-08 18:17:39 UTC
Hi,

Just to update on this: I have deleted all the old logs and files from
the /tmp and /app/hadoop directories and restarted all the nodes. I now
have 1 datanode available, as per the information below:

Configured Capacity: 3665985536 (3.41 GB)
Present Capacity: 24576 (24 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 24576 (24 KB)
DFS Used%: 100%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 10.227.56.195:50010
Decommission Status : Normal
Configured Capacity: 3665985536 (3.41 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 3665960960 (3.41 GB)
DFS Remaining: 0(0 KB)
DFS Used%: 0%
DFS Remaining%: 0%
Last contact: Tue Oct 08 11:12:19 PDT 2013


However, when I tried putting the files into HDFS again, I got the same
error as stated earlier. Do I need to clear some space for HDFS?
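
For what it's worth, the report above shows "Non DFS Used: 3665960960
(3.41 GB)" against a configured capacity of 3.41 GB, i.e. the partition
backing the datanode directory is already full of files that are not HDFS
blocks. A quick way to check that locally (a sketch; /app/hadoop/tmp is
the data directory shown in the datanode log, adjust if yours differs):

df -h /app/hadoop/tmp               # how full is the partition the datanode writes to
du -sh /* 2>/dev/null | sort -h     # which top-level directories are actually using the space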

Regards,
Indrashish
Mohammad Tariq
2013-10-08 18:21:30 UTC
You don't have any more space left in your HDFS. Delete some old data or
add additional storage.
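
In the 0.20-era shell syntax used elsewhere in this thread, that would
look roughly like this (a sketch; the path in the delete example is
hypothetical):

bin/hadoop fs -dus /                    # total amount of data HDFS is currently holding
bin/hadoop fs -rmr /user/root/old-data  # hypothetical example: delete HDFS data that is no longer needed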

Warm Regards,
Tariq
cloudfront.blogspot.com
Jitendra Yadav
2013-10-08 18:26:14 UTC
Yes

Thanks
Jitendra
Basu,Indrashish
2013-10-08 18:29:57 UTC
Hi Tariq,

Thanks a lot for your help.

Can you please let me know the path where I can check the old files in
HDFS and remove them accordingly? I am sorry to bother you with these
questions; I am absolutely new to Hadoop.
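
For reference, a sketch of the usual commands for browsing and cleaning up
HDFS in this Hadoop version (the path in the delete example is
hypothetical):

bin/hadoop fs -lsr /                       # recursively list everything stored in HDFS
bin/hadoop fs -du /user/root               # per-directory usage under /user/root
bin/hadoop fs -rmr /user/root/old-output   # hypothetical example: remove data you no longer need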

Thanks again for your time and patience.


Regards,

Indrashish

processes (Namenode, Datanode, JobTracker, Task Tracker) are up and
running.
5) Now doing this, I recreated the
directories in the dfs.
Post by Mohammad Tariq
Post by Basu,Indrashish
Post by Basu,Indrashish
Post by Jitendra Yadav
Post by Basu,Indrashish
However still no luck with the
process.
Post by Mohammad Tariq
Post by Basu,Indrashish
Post by Basu,Indrashish
Post by Jitendra Yadav
Post by Basu,Indrashish
Can you kindly assist regarding this ? I am new to
Hadoop and I am
Post by Mohammad Tariq
Post by Basu,Indrashish
Post by Basu,Indrashish
Post by Jitendra Yadav
Post by Basu,Indrashish
having no idea as how I can proceed with
this.
Post by Mohammad Tariq
Post by Basu,Indrashish
Post by Basu,Indrashish
Post by Jitendra Yadav
Post by Basu,Indrashish
Regards,
--
Indrashish Basu
Graduate Student
Post by Mohammad Tariq
Post by Basu,Indrashish
Post by Basu,Indrashish
Post by Jitendra Yadav
Post by Basu,Indrashish
Department of Electrical and Computer
Engineering
Post by Mohammad Tariq
Post by Basu,Indrashish
Post by Basu,Indrashish
Post by Jitendra Yadav
Post by Basu,Indrashish
University of Florida
--
Indrashish Basu
Graduate Student
Post by Mohammad Tariq
Post by Basu,Indrashish
Department of Electrical and Computer Engineering
University of Florida
--
Indrashish Basu
Graduate Student

Department of Electrical and Computer Engineering
University of
Florida



Links:
------
[1] mailto:indrashish-***@public.gmane.org
[2]
http://***@0.0.0.0:50075
[3]
http://10.227.56.195:50010
[4] http://10.227.56.195:50010
[5]
http://10.227.56.195:50010
[6] http://cloudfront.blogspot.com
[7]
mailto:indrashish-***@public.gmane.org
Mohammad Tariq
2013-10-08 20:08:52 UTC
Permalink
You are welcome Basu.

Not a problem. You can use *bin/hadoop fs -lsr /* to list down all the HDFS
files and directories. See which files are no longer required and delete
them using *bin/hadoop fs -rm /path/to/the/file*

Warm Regards,
Tariq
cloudfront.blogspot.com
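Spelled out, the commands Tariq mentions look roughly like this on a 0.20-era install (the paths passed to the remove commands are only placeholder examples, not files from this cluster):

bin/hadoop fs -lsr /                        # list everything stored in HDFS, recursively
bin/hadoop fs -du /user/root                # see how much HDFS space a directory is using
bin/hadoop fs -rm /user/root/some-old-file  # remove a single file that is no longer needed
bin/hadoop fs -rmr /user/root/old-output    # remove an entire directory tree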
Post by Basu,Indrashish
Hi Tariq,
Thanks a lot for your help.
Can you please let me know the path where I can check the old files in the HDFS and remove them accordingly. I am sorry to bother you with these questions, I am absolutely new to Hadoop.
Thanks again for your time and patience.
Regards,
Indrashish
You don't have any more space left in your HDFS. Delete some old data or
add additional storage.
Warm Regards,
Tariq
cloudfront.blogspot.com
Post by Basu,Indrashish
Hi,
Just to update on this, I have deleted all the old logs and files from the /tmp and /app/hadoop directory, and restarted all the nodes. I have now:
Configured Capacity: 3665985536 (3.41 GB)
Present Capacity: 24576 (24 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 24576 (24 KB)
DFS Used%: 100%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Name: 10.227.56.195:50010
Decommission Status : Normal
Configured Capacity: 3665985536 (3.41 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 3665960960 (3.41 GB)
DFS Remaining: 0(0 KB)
DFS Used%: 0%
DFS Remaining%: 0%
Last contact: Tue Oct 08 11:12:19 PDT 2013
However when I tried putting the files back in HDFS, I am getting the
same error as stated earlier. Do I need to clear some space for the HDFS ?
Regards,
Indrashish
Post by Basu,Indrashish
Hi Jitendra,
2013-10-07 11:27:41,960 INFO
org.apache.hadoop.hdfs.server.common.Storage: Storage directory
/app/hadoop/tmp/dfs/data is not formatted.
2013-10-07 11:27:41,961 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2013-10-07 11:27:42,094 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
FSDatasetStatusMBean
2013-10-07 11:27:42,099 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at
50010
2013-10-07 11:27:42,107 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
1048576 bytes/s
2013-10-07 11:27:42,369 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2013-10-07 11:27:42,632 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open()
is -1. Opening the listener on 50075
listener.getLocalPort() returned 50075
webServer.getConnectors()[0].getLocalPort() returned 50075
2013-10-07 11:27:42,634 INFO org.apache.hadoop.http.HttpServer: Jetty
bound to port 50075
2013-10-07 11:27:42,634 INFO org.mortbay.log: jetty-6.1.14
2013-10-07 11:31:29,821 INFO org.mortbay.log: Started
2013-10-07 11:31:29,843 INFO
org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
with processName=DataNode, sessionId=null
2013-10-07 11:31:29,912 INFO
org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics
with hostName=DataNode, port=50020
2013-10-07 11:31:29,922 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2013-10-07 11:31:29,922 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 50020: starting
2013-10-07 11:31:29,934 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
DatanodeRegistration(tegra-ubuntu:50010, storageID=, infoPort=50075,
ipcPort=50020)
2013-10-07 11:31:29,971 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id
DS-1027334635-127.0.1.1-50010-1381170689938 is assigned to data-node
10.227.56.195:50010
2013-10-07 11:31:29,973 INFO
DatanodeRegistration(10.227.56.195:50010,
storageID=DS-1027334635-127.0.1.1-50010-1381170689938, infoPort=50075,
ipcPort=50020)In DataNode.run, data = FSDataset
{dirpath='/app/hadoop/tmp/dfs/data/current'}
2013-10-07 11:31:29,974 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: using
BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2013-10-07 11:31:30,032 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0
blocks got processed in 19 msecs
2013-10-07 11:31:30,035 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic
block scanner.
2013-10-07 11:41:42,222 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0
blocks got processed in 20 msecs
2013-10-07 12:41:43,482 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0
blocks got processed in 22 msecs
2013-10-07 13:41:44,755 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0
blocks got processed in 13 msecs
I restarted the datanode and made sure that it is up and running
(typed jps command).
Regards,
Indrashish
Post by Jitendra Yadav
As per your dfs report, available DataNodes count is ZERO in your cluster.
Please check your data node logs.
Regards
Jitendra
Basu,Indrashish
2013-10-08 20:46:04 UTC
Permalink
Hi Tariq,

Thanks for your help again.

I tried deleting the old HDFS files and directories as you suggested, and then did the reformatting and started all the nodes. However, after running the dfsadmin report I am again seeing that the datanode is not showing up.


***@tegra-ubuntu:~/hadoop-gpu-master/hadoop-gpu-0.20.1# bin/hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: �%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)

However, when I typed jps, it is showing that the datanode is up and running. Below are the datanode logs generated for the given time stamp. Can you kindly assist regarding this?


2013-10-08 13:35:55,680 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/data is not formatted.
2013-10-08 13:35:55,680 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2013-10-08 13:35:55,814 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2013-10-08 13:35:55,820 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2013-10-08 13:35:55,828 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2013-10-08 13:35:56,153 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-10-08 13:35:56,497 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2013-10-08 13:35:56,498 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2013-10-08 13:35:56,513 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2013-10-08 13:35:56,514 INFO org.mortbay.log: jetty-6.1.14
2013-10-08 13:40:45,127 INFO org.mortbay.log: Started ***@0.0.0.0:50075
2013-10-08 13:40:45,139 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2013-10-08 13:40:45,189 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2013-10-08 13:40:45,198 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-10-08 13:40:45,201 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2013-10-08 13:40:45,201 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2013-10-08 13:40:45,202 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(tegra-ubuntu:50010, storageID=, infoPort=50075, ipcPort=50020)
2013-10-08 13:40:45,206 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2013-10-08 13:40:45,207 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2013-10-08 13:40:45,234 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id DS-863644283-127.0.1.1-50010-1381264845208 is assigned to data-node 10.227.56.195:50010
2013-10-08 13:40:45,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.227.56.195:50010, storageID=DS-863644283-127.0.1.1-50010-1381264845208, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/app/hadoop/tmp/dfs/data/current'}
2013-10-08 13:40:45,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2013-10-08 13:40:45,275 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 14 msecs
2013-10-08 13:40:45,277 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.

Regards,

Indrashish
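For anyone reading along, a rough way to cross-check the situation described above (the DataNode shows up in jps but never in the dfsadmin report) is to look for registration traffic on both sides. The log file names below assume the default 0.20-style layout (logs/ under the Hadoop home directory) and are a sketch rather than output from this cluster:

jps                                # is the DataNode JVM actually running?
bin/hadoop dfsadmin -report        # does the NameNode list any live DataNodes?
grep -iE "exception|retrying connect" logs/hadoop-*-datanode-*.log | tail -20
grep -i "registerDatanode" logs/hadoop-*-namenode-*.log | tail -20

If the DataNode log is full of "Retrying connect to server" messages, the DataNode cannot reach the NameNode's RPC address at all; if the NameNode log never shows a registerDatanode entry, the DataNode never registered.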

hadoop hive
2013-10-14 08:24:02 UTC
Permalink
Hi Indrashish,

Can you please check whether your DN is accessible by the NN, and also whether the NN IP is given in the DN's hdfs-site.xml? Because if the DN is up and running, the issue is that the DN is not able to attach to the NN to get registered.

You can add the DN to the include file as well.

thanks
Vikas Srivastava
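A rough way to check what Vikas is describing, starting from the DataNode machine; the host name, port and file paths below are placeholders rather than values taken from this thread:

grep -A 1 "fs.default.name" conf/core-site.xml   # which NameNode address is this DataNode configured to use?
ping -c 3 namenode-host                          # is the NameNode host reachable at all?
telnet namenode-host 54310                       # can the DataNode reach the NameNode's RPC port?
grep -A 1 "dfs.hosts" conf/hdfs-site.xml         # if an include file is configured on the NameNode,
                                                 # this DataNode's hostname must be listed in it

If fs.default.name on the DataNode points at localhost or at the wrong interface, the DataNode will come up locally but never register with the real NameNode.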