
Installing HBase on CentOS 6.5

Environment: CentOS 6.5, Hadoop 2.7.2, HBase 1.2.1

1. Install the Hadoop cluster and start it

[grid@hadoop4 ~]$ sh hadoop-2.7.2/sbin/start-dfs.sh
[grid@hadoop4 ~]$ sh hadoop-2.7.2/sbin/start-yarn.sh

Check the Hadoop version:

[grid@hadoop4 ~]$ hadoop-2.7.2/bin/hadoop version
Hadoop 2.7.2
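Optionally (this check is not in the original article), you can confirm that the HDFS and YARN daemons actually came up before continuing; the paths below follow the layout used above:

[grid@hadoop4 ~]$ jps                                      # should list NameNode, ResourceManager, etc.
[grid@hadoop4 ~]$ hadoop-2.7.2/bin/hdfs dfsadmin -report   # summary of live DataNodes and HDFS capacity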



2. Check the official HBase documentation (http://hbase.apache.org/book.html#basic.prerequisites) to find an HBase release compatible with your Hadoop version, and download it
[grid@hadoop4 ~]$ wget http://mirrors.cnnic.cn/apache/hbase/hbase-1.2.1/hbase-1.2.1-bin.tar.gz

3. Extract the archive

[grid@hadoop4 ~]$ tar -zxf hbase-1.2.1-bin.tar.gz


4. Go into HBase's lib directory and check the version of the bundled Hadoop jars
[grid@hadoop4 ~]$ cd hbase-1.2.1/lib/
[grid@hadoop4 lib]$ find -name 'hadoop*jar'
./hadoop-common-2.5.1.jar
./hadoop-mapreduce-client-common-2.5.1.jar
./hadoop-annotations-2.5.1.jar
./hadoop-yarn-server-common-2.5.1.jar
./hadoop-hdfs-2.5.1.jar
./hadoop-client-2.5.1.jar
./hadoop-mapreduce-client-shuffle-2.5.1.jar
./hadoop-yarn-common-2.5.1.jar
./hadoop-yarn-server-nodemanager-2.5.1.jar
./hadoop-yarn-client-2.5.1.jar
./hadoop-mapreduce-client-core-2.5.1.jar
./hadoop-auth-2.5.1.jar
./hadoop-mapreduce-client-app-2.5.1.jar
./hadoop-yarn-api-2.5.1.jar
./hadoop-mapreduce-client-jobclient-2.5.1.jar


These versions do not match the Hadoop cluster's version, so the jars under hbase/lib need to be replaced with the jars from the Hadoop installation.

Write a script to perform the replacement, as shown below:

[grid@hadoop4 lib]$ pwd
/home/grid/hbase-1.2.1/lib
[grid@hadoop4 lib]$ vim f.sh
find -name "hadoop*jar" | sed 's/2.5.1/2.7.2/g' | sed 's/\.\///g' > f.log
rm ./hadoop*jar
cat ./f.log | while read Line
do
    find /home/grid/hadoop-2.7.2 -name "$Line" | xargs -i cp {} ./
done
rm ./f.log
[grid@hadoop4 lib]$ chmod u+x f.sh
[grid@hadoop4 lib]$ ./f.sh
[grid@hadoop4 lib]$ find -name 'hadoop*jar'
./hadoop-yarn-api-2.7.2.jar
./hadoop-mapreduce-client-app-2.7.2.jar
./hadoop-common-2.7.2.jar
./hadoop-mapreduce-client-jobclient-2.7.2.jar
./hadoop-mapreduce-client-core-2.7.2.jar
./hadoop-yarn-server-nodemanager-2.7.2.jar
./hadoop-hdfs-2.7.2.jar
./hadoop-yarn-common-2.7.2.jar
./hadoop-mapreduce-client-shuffle-2.7.2.jar
./hadoop-auth-2.7.2.jar
./hadoop-mapreduce-client-common-2.7.2.jar
./hadoop-yarn-client-2.7.2.jar
./hadoop-annotations-2.7.2.jar
./hadoop-yarn-server-common-2.7.2.jar

OK, the jars have been replaced. hbase/lib also contains a slf4j-log4j12-XXX.jar; on a machine where Hadoop is installed, Hadoop's copy of this jar is already on the classpath and the two will conflict, so simply delete it:

[grid@hadoop4 lib]$ rm `find -name 'slf4j-log4j12-*jar'`

5. Modify the configuration files

5.1. hbase-env.sh
[grid@hadoop4 hbase-1.2.1]$ vi conf/hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_72
export HBASE_CLASSPATH=/home/grid/hadoop-2.7.2/etc/hadoop
export HBASE_MANAGES_ZK=true

The first variable specifies the JDK path; the second points to Hadoop's configuration directory; the third tells HBase to use its own bundled ZooKeeper.

5.2. hbase-site.xml

[grid@hadoop4 hbase-1.2.1]$ vim conf/hbase-site.xml

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/grid/hbase-1.2.1/tmp</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoop4,hadoop5,hadoop6</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/grid/hbase-1.2.1/zookeeper</value>
</property>

The value of hbase.rootdir above is the HDFS address (fs.defaultFS) configured in core-site.xml under Hadoop's etc/hadoop directory, with /hbase appended.
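If you are not sure of the HDFS address, it can be read back from core-site.xml on the Hadoop side. A quick check, assuming the paths used in this article and that the property is named fs.defaultFS (older setups may use fs.default.name), which for this article's setup should print something like:

[grid@hadoop4 ~]$ grep -A 1 'fs.defaultFS' hadoop-2.7.2/etc/hadoop/core-site.xml
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>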

Create the directories:

[grid@hadoop4 hbase-1.2.1]$ mkdir tmp
[grid@hadoop4 hbase-1.2.1]$ mkdir zookeeper


5.3. regionservers
[grid@hadoop4 hbase-1.2.1]$ vim conf/regionservers
hadoop4
hadoop5
hadoop6



6. Set environment variables
[grid@hadoop4 ~]$ vi .bash_profile
export HBASE_HOME=/home/grid/hbase-1.2.1
export PATH=$PATH:$HBASE_HOME/bin
[grid@hadoop4 ~]$ source .bash_profile
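A simple way to confirm the new PATH takes effect (not shown in the original article) is to ask HBase for its version from any directory:

[grid@hadoop4 ~]$ hbase version     # should print the HBase 1.2.1 version banner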



7. Distribute HBase to the other machines and set the environment variables on them as well
[grid@hadoop4 ~]$ scp -r hbase-1.2.1 grid@hadoop5:~
[grid@hadoop4 ~]$ scp -r hbase-1.2.1 grid@hadoop6:~
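The scp commands only copy the files; the environment variables from step 6 still have to be added on hadoop5 and hadoop6. A minimal sketch, assuming password-less ssh between the nodes (which a Hadoop cluster normally requires anyway):

# run from hadoop4: append the same HBASE_HOME/PATH settings to ~/.bash_profile on each node
for host in hadoop5 hadoop6; do
  ssh grid@"$host" 'echo "export HBASE_HOME=/home/grid/hbase-1.2.1" >> ~/.bash_profile'
  ssh grid@"$host" 'echo "export PATH=\$PATH:\$HBASE_HOME/bin" >> ~/.bash_profile'
done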

8. Start HBase (Hadoop must be running before HBase is started)
[grid@hadoop4 ~]$ sh start-hbase.sh
[grid@hadoop4 ~]$ jps
2388 ResourceManager
3692 Jps
2055 NameNode
3375 HQuorumPeer
2210 SecondaryNameNode
3431 HMaster
[grid@hadoop5 ~]$ jps
2795 Jps
2580 HQuorumPeer
2656 HRegionServer
2100 NodeManager
1983 DataNode
[grid@hadoop6 ~]$ jps
2566 HQuorumPeer
1984 DataNode
2101 NodeManager
2803 Jps
2639 HRegionServer


[grid@hadoop4 ~]$ sh stop-hbase.sh     # stop HBase
If an error occurs while working with HBase, look for the cause in the logs subdirectory under the HBase installation directory.
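For example, to look at the master log on hadoop4 (log file names follow the pattern hbase-<user>-<role>-<hostname>.log, so the exact file name depends on your user and host):

[grid@hadoop4 ~]$ ls hbase-1.2.1/logs/
[grid@hadoop4 ~]$ tail -n 100 hbase-1.2.1/logs/hbase-grid-master-hadoop4.log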

Running jps showed that the HRegionServer service on the master machine had not started. The log showed that HRegionServer failed to start because port 16020 was already in use, and it turned out the port was occupied by the HMaster process. After checking the official documentation, this was resolved by running:
[grid@hadoop4 ~]$ sh local-regionservers.sh start 2
(Screenshot from the official documentation not reproduced here.)


9. HBase shell
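The original article gives no shell commands here. A minimal smoke test of the installation might look like the following, where the table name 'test' and column family 'cf' are arbitrary examples:

[grid@hadoop4 ~]$ hbase shell
hbase(main):001:0> status                                 # cluster summary: servers, regions, average load
hbase(main):002:0> create 'test', 'cf'                    # create table 'test' with one column family 'cf'
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'   # write one cell
hbase(main):004:0> scan 'test'                            # should show row1 with cf:a=value1
hbase(main):005:0> disable 'test'                         # tables must be disabled before dropping
hbase(main):006:0> drop 'test'
hbase(main):007:0> exit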


10. Web management UI
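The original article does not give the address. In HBase 1.x the master's web UI listens on port 16010 by default (hbase.master.info.port) and each region server's UI on 16030, so with the hosts used above it should be reachable at something like:

http://hadoop4:16010/     # HMaster UI
http://hadoop5:16030/     # HRegionServer UI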

