
Compiling the 64-bit C++ Library for HDFS from Hadoop 2.4.0

The source code of the C++ library lives at:

Hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs

Here is a makefile that compiles these source files directly and archives the result as libhdfs.a. The makefile content is:

CC            = gcc
DEFINES      = -DG_ARCH_X86_64


CFLAGS        += -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT $(DEFINES)
CXXFLAGS      += -pipe -O3 -D_REENTRANT $(DEFINES) -rdynamic

AR            = ar cqs
LFLAGS        = -rdynamic

OBJECTS      = exception.o expect.o hdfs.o jni_helper.o native_mini_dfs.o

TARGET        = libhdfs.a

#command, don't change
CHK_DIR_EXISTS= test -d
DEL_FILE      = rm -f


first: all
####### Implicit rules

.SUFFIXES: .o .c .cpp .cc .cxx .C .cu

.cpp.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.cc.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.cxx.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.C.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.c.o:
	$(CC) -c $(CFLAGS) $(INCPATH) -o "$@" "$<"
       
####### Build rules   
all: $(AR)

$(AR): $(TARGET)

$(TARGET):  $(OBJECTS)
	$(AR) $(TARGET) $(OBJECTS)

clean:
	-$(DEL_FILE) $(OBJECTS) $(TARGET)

After saving the file, just run make. The build output is as follows:

gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64  -o "exception.o" "exception.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64  -o "expect.o" "expect.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64  -o "hdfs.o" "hdfs.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64  -o "jni_helper.o" "jni_helper.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64  -o "native_mini_dfs.o" "native_mini_dfs.c"
ar cqs libhdfs.a exception.o expect.o hdfs.o jni_helper.o native_mini_dfs.o
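
As a quick sanity check on the result, ar t lists the members of the archive (this step is an addition, not part of the original write-up):

ar t libhdfs.a   # should list the five object files named in OBJECTS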

Next, check whether the library actually works. Go to the following directory:

hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test

It contains the test source files; compile all of the test programs in that folder. Here is another simple makefile for that:

LIBS = -L$(JAVA_HOME)/jre/lib/amd64/server/ -ljvm  -L../ -lhdfs
INCPATH = -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/linux -I. -I..
all:
	gcc -o hdfs_ops test_libhdfs_ops.c $(INCPATH) $(LIBS)
	gcc -o hdfs_read test_libhdfs_read.c $(INCPATH) $(LIBS)
	gcc -o hdfs_write test_libhdfs_write.c $(INCPATH) $(LIBS)
	gcc -o hdfs_zerocopy test_libhdfs_zerocopy.c $(INCPATH) $(LIBS)

Run make again; the build output is as follows:

gcc -o hdfs_ops test_libhdfs_ops.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm  -L../ -lhdfs
gcc -o hdfs_read test_libhdfs_read.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm  -L../ -lhdfs
gcc -o hdfs_write test_libhdfs_write.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm  -L../ -lhdfs
gcc -o hdfs_zerocopy test_libhdfs_zerocopy.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm  -L../ -lhdfs
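
Before running these binaries, the embedded JVM and the Hadoop classes must be reachable at run time. This step is not shown in the original log, so the sketch below is an assumption based on the paths in the compile commands above; adjust them to your installation:

export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH
# libhdfs starts the JVM through JNI, which may not expand wildcard (*)
# classpath entries, so you may need to list the Hadoop jars explicitly.
export CLASSPATH=$(hadoop classpath)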

Now generate a small file containing the ten numbers 1 through 10 and load it into HDFS:

seq 1 10 >  tmpfile
hadoop fs -mkdir /data
hadoop fs -put tmpfile /data
hadoop fs -cat /data/tmpfile
1
2
3
4
5
6
7
8
9
10

OK. Now run the generated hdfs_read program to exercise the 64-bit C++ interface to HDFS. Its arguments are the file path, the file size in bytes (21 here: nine two-byte lines plus the three-byte "10\n"), and the read buffer size:

./hdfs_read /data/tmpfile 21 32

The run output is as follows:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
1
2
3
4
5
6
7
8
9
10
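
The test program does little more than open the file through the libhdfs C API and print what it reads. As a rough, self-contained sketch of that flow (this is not the article's code; the "default"/0 connection arguments, the path, and the 4 KB buffer are assumptions), a minimal reader looks like this, compiled with the same INCPATH and LIBS as the test makefile above:

/* minimal_read.c: hedged sketch of reading /data/tmpfile through libhdfs. */
#include <stdio.h>
#include <fcntl.h>
#include "hdfs.h"

int main(void)
{
    /* "default"/0 picks up fs.defaultFS from the Hadoop configuration. */
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) {
        fprintf(stderr, "hdfsConnect failed\n");
        return 1;
    }

    hdfsFile file = hdfsOpenFile(fs, "/data/tmpfile", O_RDONLY, 0, 0, 0);
    if (!file) {
        fprintf(stderr, "hdfsOpenFile failed\n");
        hdfsDisconnect(fs);
        return 1;
    }

    /* Read the file in 4 KB chunks and echo it to stdout. */
    char buf[4096];
    tSize n;
    while ((n = hdfsRead(fs, file, buf, sizeof(buf))) > 0) {
        fwrite(buf, 1, (size_t)n, stdout);
    }

    hdfsCloseFile(fs, file);
    hdfsDisconnect(fs);
    return 0;
}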

