Hadoop Cluster Configuration

First get the single-node setup working; you can follow this tutorial:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
Most of this article is based on that Hadoop cluster configuration tutorial. Many thanks!

Master: 10.182.165.114 node1 (NameNode and JobTracker)
Slave: 10.182.165.156 node2

1. Download Hadoop and create the user

/usr/sbin/adduser hadoop
cd /home/hadoop
# switch to the hadoop user
su hadoop
wget http://apache.mirrors.tds.net//hadoop/common/hadoop-0.20.203.0/hadoop-0.20.203.0rc1.tar.gz
tar -xzvf hadoop-0.20.203.0rc1.tar.gz
mv hadoop-0.20.203.0rc1 hadoop_home
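
As a quick sanity check (my addition, not part of the original steps), you can ask the unpacked distribution for its version:

# should print Hadoop 0.20.203.0
hadoop_home/bin/hadoop version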

2. Set up passwordless, automatic SSH login

ssh-keygen -t rsa -P ""
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
cat "StrictHostKeyChecking no" >> ~/.ssh/config
#测试一下是否无需密码自动登录
ssh localhost
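
If ssh localhost still prompts for a password, the usual culprit is overly permissive permissions on ~/.ssh; a minimal fix, assuming that is the cause:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys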

3. Install the JDK and set environment variables

# install the JDK
sudo apt-get install sun-java6-jdk

# set HADOOP_HOME and JAVA_HOME (no sudo needed to edit your own .bashrc)
vim ~/.bashrc
# add the following
#ENV for Hadoop
export HADOOP_HOME=/home/hadoop/hadoop_home
export JAVA_HOME=/usr/lib/jvm/java-6-sun/

# hadoop-env.sh under hadoop_home/conf must also be configured
vim conf/hadoop-env.sh
# add
export JAVA_HOME=/usr/lib/jvm/java-6-sun/
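
To confirm the variables took effect, reload the shell configuration and check them (a quick verification I would add here):

source ~/.bashrc
echo $HADOOP_HOME $JAVA_HOME
$JAVA_HOME/bin/java -version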

4. Configure conf (on both Master and Slave)

Configuration file: conf/core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://node1:54310</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/hadoop_home/var</value>
</property>
</configuration>
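
Hadoop will normally create hadoop.tmp.dir itself, but creating it up front as the hadoop user avoids permission surprises (my addition):

mkdir -p /home/hadoop/hadoop_home/var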

Configuration file: conf/mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>node1:54311</value>
</property>
<property>
  <name>mapred.local.dir</name>
  <value>/home/hadoop/hadoop_home/var</value>
</property>
</configuration>

Configuration file: conf/hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
  <name>dfs.replication</name>
  <!-- our cluster has two nodes, so keep two replicas -->
  <value>2</value>
</property>
</configuration>

5. Format the NameNode and start everything

bin/hadoop namenode -format
./bin/start-all.sh
# jps should show the following 5 processes
$ jps
18940 JobTracker
18775 DataNode
18687 NameNode
19027 TaskTracker
18871 SecondaryNameNode
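
Besides jps, the web UIs are a convenient check; in 0.20 the NameNode and JobTracker interfaces default to ports 50070 and 50030 (probing them with curl is my addition):

curl -s http://localhost:50070/ > /dev/null && echo "NameNode UI up"
curl -s http://localhost:50030/ > /dev/null && echo "JobTracker UI up"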

6. Copy data

# copy
./bin/hadoop dfs -copyFromLocal ./book_data/ /user/hadoop/book_data
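
It's worth verifying the upload before running anything (not in the original steps):

bin/hadoop dfs -ls /user/hadoop/book_data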

# run (timed with the shell's time builtin, which produces the output below)
time bin/hadoop jar hadoop-examples-0.20.203.0.jar wordcount /user/hadoop/book_data /user/hadoop/book_data_out

# elapsed time
real	0m39.694s
user	0m2.330s
sys	0m0.260s
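
To inspect the result, list the output directory and cat a part file; the exact file name is an assumption (part-r-00000 is what the new-API wordcount in 0.20.203 usually produces):

bin/hadoop dfs -ls /user/hadoop/book_data_out
bin/hadoop dfs -cat /user/hadoop/book_data_out/part-r-00000 | head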

That completes the single-node setup; now for the cluster configuration.

0. Set up host mappings

Don't try to be clever like I did and assume bare IP addresses will work: Hadoop does reverse DNS lookups, so just set up the host entries properly...

# on both machines (editing /etc/hosts requires root)
sudo vim /etc/hosts
10.182.165.114 node1
10.182.165.156 node2
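
The master also needs passwordless SSH to node2, since start-dfs.sh logs into every slave to launch its daemons; copy the key over (assuming the hadoop user already exists on node2):

ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@node2
# verify
ssh node2 hostname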

1. Configure the node lists on the master

vim conf/masters
node1

vim conf/slaves
node1
node2
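
The conf directory must match on both machines; rather than editing everything twice, you can push the master's copy to the slave (a convenience step, my addition):

scp conf/*.xml conf/masters conf/slaves conf/hadoop-env.sh hadoop@node2:/home/hadoop/hadoop_home/conf/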

2. On both machines, set the map/reduce task slots

mapred.map.tasks: typically 10 * number of slave nodes, so 20 here
mapred.reduce.tasks: typically 2 * CPU cores per slave, so 8 here

配置文件:conf/mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>node1:54311</value>
</property>
<property>
  <name>mapred.local.dir</name>
  <value>/home/hadoop/hadoop_home/var</value>
</property>
<property>
  <name>mapred.map.tasks</name>
  <value>20</value>
</property>
<property>
  <name>mapred.reduce.tasks</name>
  <value>8</value>
</property>
</configuration>

3. Reformat the NameNode (needed every time the cluster is rebuilt?)

bin/hadoop namenode -format
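
If the nodes were formatted before, clear the old data directories on both of them first; otherwise DataNodes may refuse to start with a namespaceID mismatch (a well-known 0.20 pitfall):

# run on node1 and node2 before reformatting
rm -rf /home/hadoop/hadoop_home/var/*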

4. Start DFS, run on the master

bin/start-dfs.sh

# if startup succeeded:

# On the Master:
20160 DataNode
20070 NameNode
20278 SecondaryNameNode

# On the Slave:
1009 DataNode
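
If the DataNode is missing on the slave, its log usually says why; log files follow the hadoop-<user>-<daemon>-<hostname>.log naming scheme (the exact path below is an assumption):

# on node2
tail -n 50 /home/hadoop/hadoop_home/logs/hadoop-hadoop-datanode-node2.log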

5. Copy the files again

./bin/hadoop dfs -copyFromLocal ./book_data/ /user/hadoop/book_data

6. Start MapReduce

bin/start-mapred.sh

# Now on the Master:
20160 DataNode
20568 TaskTracker
20070 NameNode
20478 JobTracker
20278 SecondaryNameNode

# Now on the Slave:
1168 TaskTracker
1009 DataNode

7. Run the job

bin/hadoop jar hadoop-examples-0.20.203.0.jar wordcount /user/hadoop/book_data /user/hadoop/book_data_out

I added a few more data sets and the job still took about 37 seconds; the data set is still too small...
