Installing HBase on CentOS 6.5
Environment: CentOS 6.5, Hadoop 2.7.2, HBase 1.2.1
1. Install the Hadoop cluster and start it
[grid@hadoop4 ~]$ sh hadoop-2.7.2/sbin/start-dfs.sh
[grid@hadoop4 ~]$ sh hadoop-2.7.2/sbin/start-yarn.sh
Check the Hadoop version:
[grid@hadoop4 ~]$ hadoop-2.7.2/bin/hadoop version
Hadoop 2.7.2
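Optionally, confirm that HDFS actually came up and that the DataNodes have registered before going further; hdfs dfsadmin -report lists every live DataNode:
[grid@hadoop4 ~]$ hadoop-2.7.2/bin/hdfs dfsadmin -report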
2. Check the HBase reference guide (http://hbase.apache.org/book.html#basic.prerequisites), find an HBase release that matches the Hadoop version, and download it
[grid@hadoop4 ~]$ wget http://mirrors.cnnic.cn/apache/hbase/hbase-1.2.1/hbase-1.2.1-bin.tar.gz
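Optionally, verify that the tarball downloaded intact before unpacking it; if the archive is truncated, the command below fails instead of printing "archive OK":
[grid@hadoop4 ~]$ tar -tzf hbase-1.2.1-bin.tar.gz > /dev/null && echo "archive OK"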
3. Unpack it
[grid@hadoop4 ~]$ tar -zxf hbase-1.2.1-bin.tar.gz
4. Go into HBase's lib directory and check the version of the bundled Hadoop jars
[grid@hadoop4 ~]$ cd hbase-1.2.1/lib/
[grid@hadoop4 lib]$ find -name 'hadoop*jar'
./hadoop-common-2.5.1.jar
./hadoop-mapreduce-client-common-2.5.1.jar
./hadoop-annotations-2.5.1.jar
./hadoop-yarn-server-common-2.5.1.jar
./hadoop-hdfs-2.5.1.jar
./hadoop-client-2.5.1.jar
./hadoop-mapreduce-client-shuffle-2.5.1.jar
./hadoop-yarn-common-2.5.1.jar
./hadoop-yarn-server-nodemanager-2.5.1.jar
./hadoop-yarn-client-2.5.1.jar
./hadoop-mapreduce-client-core-2.5.1.jar
./hadoop-auth-2.5.1.jar
./hadoop-mapreduce-client-app-2.5.1.jar
./hadoop-yarn-api-2.5.1.jar
./hadoop-mapreduce-client-jobclient-2.5.1.jar
They do not match the version used by the Hadoop cluster, so the jars under hbase/lib need to be replaced with the jars from the Hadoop installation. The replacement can be done with a small script, as shown below:
[grid@hadoop4 lib]$ pwd
/home/grid/hbase-1.2.1/lib
[grid@hadoop4 lib]$ vim f.sh
find -name "hadoop*jar" | sed 's/2.5.1/2.7.2/g' | sed 's/\.\///g' > f.log
rm ./hadoop*jar
cat ./f.log | while read Line
do
find /home/grid/hadoop-2.7.2 -name "$Line" | xargs -i cp {} ./
done
rm ./f.log
[grid@hadoop4 lib]$ chmod u+x f.sh
[grid@hadoop4 lib]$ ./f.sh
[grid@hadoop4 lib]$ find -name 'hadoop*jar'
./hadoop-yarn-api-2.7.2.jar
./hadoop-mapreduce-client-app-2.7.2.jar
./hadoop-common-2.7.2.jar
./hadoop-mapreduce-client-jobclient-2.7.2.jar
./hadoop-mapreduce-client-core-2.7.2.jar
./hadoop-yarn-server-nodemanager-2.7.2.jar
./hadoop-hdfs-2.7.2.jar
./hadoop-yarn-common-2.7.2.jar
./hadoop-mapreduce-client-shuffle-2.7.2.jar
./hadoop-auth-2.7.2.jar
./hadoop-mapreduce-client-common-2.7.2.jar
./hadoop-yarn-client-2.7.2.jar
./hadoop-annotations-2.7.2.jar
./hadoop-yarn-server-common-2.7.2.jar
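As an extra sanity check, you can confirm that no 2.5.1 jars were left behind; the following should print nothing:
[grid@hadoop4 lib]$ find -name 'hadoop*2.5.1*jar'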
OK, the jars have been replaced successfully. The hbase/lib directory also contains a slf4j-log4j12-XXX.jar; on a machine that already has Hadoop installed, the classpath will also contain Hadoop's copy of this jar, which causes a conflict, so delete HBase's copy:
[grid@hadoop4 lib]$ rm `find -name 'slf4j-log4j12-*jar'`
5. Edit the configuration files
5.1. conf/hbase-env.sh
[grid@hadoop4 hbase-1.2.1]$ vi conf/hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_72
export HBASE_CLASSPATH=/home/grid/hadoop-2.7.2/etc/hadoop
export HBASE_MANAGES_ZK=true
The first setting points to the JDK installation; the second points to Hadoop's configuration directory; the third tells HBase to use its bundled ZooKeeper.
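As a quick check that JAVA_HOME points at a working JDK (using the path configured above):
[grid@hadoop4 hbase-1.2.1]$ /usr/java/jdk1.7.0_72/bin/java -version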
5.2. conf/hbase-site.xml
[grid@hadoop4 hbase-1.2.1]$ vim conf/hbase-site.xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/grid/hbase-1.2.1/tmp</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoop4,hadoop5,hadoop6</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/grid/hbase-1.2.1/zookeeper</value>
</property>
The hbase.rootdir value above must be the HDFS address (fs.defaultFS) from core-site.xml under hadoop-2.7.2/etc/hadoop, with /hbase appended.
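You can look that value up directly; whatever <value> the command below prints, with /hbase appended, is what hbase.rootdir should be set to (older configurations may use fs.default.name instead of fs.defaultFS):
[grid@hadoop4 ~]$ grep -A 1 'fs.defaultFS' hadoop-2.7.2/etc/hadoop/core-site.xml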
Create the directories:
[grid@hadoop4 hbase-1.2.1]$ mkdir tmp
[grid@hadoop4 hbase-1.2.1]$ mkdir zookeeper
5.3. conf/regionservers
[grid@hadoop4 hbase-1.2.1]$ vim conf/regionservers
hadoop4
hadoop5
hadoop6
6. Set environment variables
[grid@hadoop4 ~]$ vi .bash_profile
export HBASE_HOME=/home/grid/hbase-1.2.1
export PATH=$PATH:$HBASE_HOME/bin
[grid@hadoop4 ~]$ source .bash_profile
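To confirm the new PATH took effect, the first command below should print /home/grid/hbase-1.2.1/bin/hbase and the second should report HBase 1.2.1:
[grid@hadoop4 ~]$ which hbase
[grid@hadoop4 ~]$ hbase version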
7. Distribute HBase to the other machines and set the environment variables on them as well
[grid@hadoop4 ~]$ scp -r hbase-1.2.1 grid@hadoop5:~
[grid@hadoop4 ~]$ scp -r hbase-1.2.1 grid@hadoop6:~
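The environment variables from step 6 also need to be added on hadoop5 and hadoop6; one way to do that from hadoop4 is sketched below (it simply appends the same two export lines to each remote .bash_profile over ssh):
[grid@hadoop4 ~]$ for host in hadoop5 hadoop6; do
>   ssh grid@$host 'echo "export HBASE_HOME=/home/grid/hbase-1.2.1" >> ~/.bash_profile'
>   ssh grid@$host 'echo "export PATH=\$PATH:\$HBASE_HOME/bin" >> ~/.bash_profile'
> done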
8. Start HBase (Hadoop must already be running before HBase is started)
[grid@hadoop4 ~]$ sh start-hbase.sh
[grid@hadoop4 ~]$ jps
2388 ResourceManager
3692 Jps
2055 NameNode
3375 HQuorumPeer
2210 SecondaryNameNode
3431 HMaster
[grid@hadoop5 ~]$ jps
2795 Jps
2580 HQuorumPeer
2656 HRegionServer
2100 NodeManager
1983 DataNode
[grid@hadoop6 ~]$ jps
2566 HQuorumPeer
1984 DataNode
2101 NodeManager
2803 Jps
2639 HRegionServer
$ stop-hbase.sh    // stop HBase
If something goes wrong while working with HBase, check the logs subdirectory under the HBase installation directory to find the cause.
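For example, the master log can be tailed like this (the exact file name contains the user and host names, so yours may differ):
[grid@hadoop4 ~]$ tail -n 100 $HBASE_HOME/logs/hbase-grid-master-hadoop4.log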
In this setup, jps showed that the HRegionServer service had not started on the master machine. The log showed that HRegionServer failed to start because port 16020 was already in use, and the process holding port 16020 turned out to be the HMaster. After consulting the official documentation, the issue was worked around with:
[grid@hadoop4 ~]$ sh local-regionservers.sh start 2
(Screenshot from the official documentation omitted.)
9. HBase shell
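A quick way to exercise the new cluster is to open the HBase shell and create a small table; the table name test and column family cf below are just examples:
[grid@hadoop4 ~]$ hbase shell
hbase(main):001:0> create 'test', 'cf'                    # table 'test' with one column family 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'   # write one cell
hbase(main):003:0> scan 'test'                            # read the table back
hbase(main):004:0> get 'test', 'row1'                     # read a single row
hbase(main):005:0> disable 'test'
hbase(main):006:0> drop 'test'                            # clean up
hbase(main):007:0> exit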
10. Web UI
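In HBase 1.x the HMaster serves its web UI on port 16010 by default (region servers use 16030), so on this cluster it should be reachable at http://hadoop4:16010/. A quick headless check that should print 200 if the UI is up:
[grid@hadoop4 ~]$ curl -s -o /dev/null -w '%{http_code}\n' http://hadoop4:16010/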