Hadoop installation breaks down into the following steps:
1. Preparation: the Hadoop package (rpm or tar.gz) and a JDK package (or the system JDK)
2. Extract the archive: tar zxvf hadoop-2.7.1.tar.gz
3. Edit the configuration files under /home/zq/soft/hadoop-2.7.1/etc/hadoop
Change the hostname: sudo gedit /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=zhangqiang
NTPSERVERARGS=iburst
Edit the hosts file: sudo gedit /etc/hosts
Add: 192.168.44.135 zhangqiang
Disable the firewall (as root): service iptables stop
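The host-setup edits above can be rehearsed as a script. A minimal sketch: it writes to a scratch directory (/tmp/etc-demo, an assumption for dry-running) instead of the real /etc files, and uses the hostname/IP pair from this guide:

```shell
# Scratch copies of the two files edited above; point ETC at the real
# /etc locations on an actual node (run as root there).
ETC=${ETC:-/tmp/etc-demo}
mkdir -p "$ETC"

# /etc/sysconfig/network (CentOS 6 style, as in this guide)
printf 'NETWORKING=yes\nHOSTNAME=zhangqiang\nNTPSERVERARGS=iburst\n' > "$ETC/network"

# /etc/hosts entry mapping the static IP to the hostname
echo '192.168.44.135 zhangqiang' >> "$ETC/hosts"

cat "$ETC/network" "$ETC/hosts"
```

On a real node the hostname change takes effect after re-login or reboot; `hostname` should then print zhangqiang.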
1)core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://zhangqiang:9000</value>
</property>
</configuration>
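Note: fs.default.name still works in Hadoop 2.7.1 but is deprecated; the current property name is fs.defaultFS. An equivalent core-site.xml:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://zhangqiang:9000</value>
  </property>
</configuration>
```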
2)hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/zq/soft/hadoop-2.7.1/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/zq/soft/hadoop-2.7.1/dfs/data</value>
</property>
</configuration>
** On a server the dfs.namenode.name.dir and dfs.datanode.data.dir properties above (the part highlighted in the original notes) can be left unconfigured; personal setups may set them.
3)hadoop-env.sh
export JAVA_HOME=/home/zq/soft/jdk1.8.0_91
4) mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
5)yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
6) slaves: delete localhost and add the hostname zhangqiang
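Step 6) can be scripted as below. CONF_DIR defaults to a scratch directory (an assumption, for safe rehearsal); on a real node point it at /home/zq/soft/hadoop-2.7.1/etc/hadoop:

```shell
# Replace the default "localhost" entry in the slaves file with this guide's hostname.
CONF_DIR=${CONF_DIR:-/tmp/hadoop-conf-demo}
mkdir -p "$CONF_DIR"
echo 'zhangqiang' > "$CONF_DIR/slaves"   # ">" overwrites, so localhost is gone
cat "$CONF_DIR/slaves"
```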
4. Set up passwordless login (SSH setup):
1) Open a terminal and run cd .ssh --> the .ssh directory may not exist yet;
2) Install the SSH server: # yum install openssh-server
3) Enter .ssh: # cd .ssh [if the directory is not found, cd ~/.ssh]; ls to inspect it [ssh -v shows the SSH version]
4) Generate a key pair: ssh-keygen -t rsa
5) ls shows the generated id_rsa and id_rsa.pub
6) Add id_rsa.pub to the authorized-keys file: cp id_rsa.pub authorized_keys
7) ssh localhost (the first connection asks you to confirm the host key)
8) ssh localhost again to verify that no password is required
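Steps 4)-6) above can be sketched non-interactively. This writes the keys into a scratch directory (/tmp/ssh-demo, an assumption) rather than ~/.ssh, so it is safe to rehearse; -N '' sets an empty passphrase:

```shell
# Generate an RSA key pair and authorize it, as in steps 4)-6) above.
DIR=${DIR:-/tmp/ssh-demo}
mkdir -p "$DIR"
rm -f "$DIR/id_rsa" "$DIR/id_rsa.pub"          # ssh-keygen refuses to overwrite
ssh-keygen -t rsa -N '' -f "$DIR/id_rsa" -q    # -q: no prompts or banner
cat "$DIR/id_rsa.pub" >> "$DIR/authorized_keys"
chmod 600 "$DIR/authorized_keys"               # sshd ignores loosely-permissioned key files
ls "$DIR"
```

With the same files under ~/.ssh, ssh localhost should log in without a password.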
5. Start the services:
1) Format HDFS: bin/hdfs namenode -format
2) Start the NameNode: sbin/hadoop-daemon.sh start namenode
3) Start the DataNode: sbin/hadoop-daemon.sh start datanode
** sbin/start-dfs.sh starts both of these and the SecondaryNameNode in one command.
Create a directory:
bin/hadoop fs -mkdir <path> or
bin/hdfs dfs -mkdir <path>
Upload a file:
bin/hadoop fs -put <file> <target path> or
bin/hdfs dfs -put <file> <target directory>
4) Start the YARN services: sbin/start-yarn.sh
6. Test MapReduce:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 2 1000
7. Verify with jps: jps
Besides jps itself, the 5 daemons started above should appear: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
Stop the services: sbin/stop-yarn.sh (and sbin/stop-dfs.sh to stop HDFS)