Hadoop 2.2 is the first stable release of the Hadoop 2 (YARN) line, and it also addresses the NameNode single point of failure.
Installing Maven

Hadoop 2.2.0 is built with Maven. To speed up dependency downloads (from within China), add the OSChina mirror to the <mirrors> section of Maven's settings file (typically ~/.m2/settings.xml):

<mirror>
  <id>nexus-osc</id>
  <mirrorOf>*</mirrorOf>
  <name>Nexus osc</name>
  <url>http://maven.oschina.net/content/groups/public/</url>
</mirror>
Downloading the Hadoop 2.2.0 source

After unpacking the source tarball, all build commands are run from the source root:

[andy@s41 hadoop-2.2.0-src]$
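A typical fetch-and-unpack sequence looks like the following (the archive.apache.org URL is an assumption; any Apache mirror carrying the 2.2.0 source release will do):

```shell
# Download and unpack the Hadoop 2.2.0 source release
wget http://archive.apache.org/dist/hadoop/common/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz
tar -xzf hadoop-2.2.0-src.tar.gz
cd hadoop-2.2.0-src
```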
Compiling
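The build itself is a single Maven invocation; the profile and flags below follow the recipe in the source tree's BUILDING.txt (-Pdist,native assembles the distribution with native libraries, -DskipTests skips the long test suite, and -Dtar produces a tarball):

```shell
# Build the binary distribution, including native libraries.
# Besides Maven and a JDK, this needs protobuf 2.5.0, cmake, and the
# zlib/openssl development headers installed on the build host.
mvn package -Pdist,native -DskipTests -Dtar
```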
The first attempt failed because the Protocol Buffers compiler, protoc, was not installed:

[WARNING] [protoc, --version] failed: java.io.IOException: Cannot run program "protoc": java.io.IOException: error=2, No such file or directory
Building and installing protobuf

Hadoop 2.2.0 requires Protocol Buffers 2.5.0. Even after compiling and installing it from source, protoc failed at first because the dynamic linker could not find the freshly installed shared library:

protoc: error while loading shared libraries: libprotobuf.so.8: cannot open shared object file: No such file or directory
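The usual source build, followed by an ldconfig run to refresh the linker cache, resolves this (the tarball name assumes protobuf 2.5.0 has already been downloaded):

```shell
# Build and install protobuf 2.5.0 from source (installs to /usr/local)
tar -xzf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure
make
sudo make install
# Refresh the dynamic linker cache so libprotobuf.so.8 can be found;
# alternatively, add /usr/local/lib to LD_LIBRARY_PATH
sudo ldconfig
```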
The library directory now contains the protobuf shared objects, including the previously missing libprotobuf.so.8:

libhiredis.a libltdl.a libltdl.so.3.1.0 libprotobuf-lite.la libprotobuf.so libprotoc.la liby.a
libhiredis.so libltdl.la libprotobuf.a libprotobuf-lite.so libprotobuf.so.8 libprotoc.so pkgconfig
libhiredis.so.0 libltdl.so libprotobuf.la libprotobuf-lite.so.8 libprotobuf.so.8.0.0 libprotoc.so.8
libhiredis.so.0.10 libltdl.so.3 libprotobuf-lite.a libprotobuf-lite.so.8.0.0 libprotoc.a libprotoc.so.8.0.0
protoc --version now reports the required release:

libprotoc 2.5.0
With protobuf in place, the build completed. The Maven reactor summary:

[INFO] Apache Hadoop Main ………………………….. SUCCESS [0.947s]
[INFO] Apache Hadoop Project POM ……………………. SUCCESS [0.294s]
[INFO] Apache Hadoop Annotations ……………………. SUCCESS [0.474s]
[INFO] Apache Hadoop Project Dist POM ……………….. SUCCESS [0.287s]
[INFO] Apache Hadoop Assemblies …………………….. SUCCESS [0.106s]
[INFO] Apache Hadoop Maven Plugins ………………….. SUCCESS [0.937s]
[INFO] Apache Hadoop Auth ………………………….. SUCCESS [0.248s]
[INFO] Apache Hadoop Auth Examples ………………….. SUCCESS [0.318s]
[INFO] Apache Hadoop Common ………………………… SUCCESS [17.582s]
[INFO] Apache Hadoop NFS …………………………… SUCCESS [1.364s]
[INFO] Apache Hadoop Common Project …………………. SUCCESS [0.016s]
[INFO] Apache Hadoop HDFS ………………………….. SUCCESS [39.854s]
[INFO] Apache Hadoop HttpFS ………………………… SUCCESS [1.544s]
[INFO] Apache Hadoop HDFS BookKeeper Journal …………. SUCCESS [1.494s]
[INFO] Apache Hadoop HDFS-NFS ………………………. SUCCESS [0.189s]
[INFO] Apache Hadoop HDFS Project …………………… SUCCESS [0.017s]
[INFO] hadoop-yarn ………………………………… SUCCESS [5.859s]
[INFO] hadoop-yarn-api …………………………….. SUCCESS [2.837s]
[INFO] hadoop-yarn-common ………………………….. SUCCESS [1.263s]
[INFO] hadoop-yarn-server ………………………….. SUCCESS [0.045s]
[INFO] hadoop-yarn-server-common ……………………. SUCCESS [0.458s]
[INFO] hadoop-yarn-server-nodemanager ……………….. SUCCESS [0.776s]
[INFO] hadoop-yarn-server-web-proxy …………………. SUCCESS [0.192s]
[INFO] hadoop-yarn-server-resourcemanager ……………. SUCCESS [0.952s]
[INFO] hadoop-yarn-server-tests …………………….. SUCCESS [0.150s]
[INFO] hadoop-yarn-client ………………………….. SUCCESS [0.239s]
[INFO] hadoop-yarn-applications …………………….. SUCCESS [0.032s]
[INFO] hadoop-yarn-applications-distributedshell ……… SUCCESS [0.155s]
[INFO] hadoop-mapreduce-client ……………………… SUCCESS [0.028s]
[INFO] hadoop-mapreduce-client-core …………………. SUCCESS [1.472s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher …. SUCCESS [0.124s]
[INFO] hadoop-yarn-site ……………………………. SUCCESS [0.047s]
[INFO] hadoop-yarn-project …………………………. SUCCESS [1.431s]
[INFO] hadoop-mapreduce-client-common ……………….. SUCCESS [1.460s]
[INFO] hadoop-mapreduce-client-shuffle ………………. SUCCESS [0.140s]
[INFO] hadoop-mapreduce-client-app ………………….. SUCCESS [0.718s]
[INFO] hadoop-mapreduce-client-hs …………………… SUCCESS [0.320s]
[INFO] hadoop-mapreduce-client-jobclient …………….. SUCCESS [1.065s]
[INFO] hadoop-mapreduce-client-hs-plugins ……………. SUCCESS [0.104s]
[INFO] Apache Hadoop MapReduce Examples ……………… SUCCESS [0.292s]
[INFO] hadoop-mapreduce ……………………………. SUCCESS [0.035s]
[INFO] Apache Hadoop MapReduce Streaming …………….. SUCCESS [0.243s]
[INFO] Apache Hadoop Distributed Copy ……………….. SUCCESS [31.506s]
[INFO] Apache Hadoop Archives ………………………. SUCCESS [0.138s]
[INFO] Apache Hadoop Rumen …………………………. SUCCESS [0.296s]
[INFO] Apache Hadoop Gridmix ……………………….. SUCCESS [0.330s]
[INFO] Apache Hadoop Data Join ……………………… SUCCESS [0.132s]
[INFO] Apache Hadoop Extras ………………………… SUCCESS [0.182s]
[INFO] Apache Hadoop Pipes …………………………. SUCCESS [0.011s]
[INFO] Apache Hadoop Tools Dist …………………….. SUCCESS [0.185s]
[INFO] Apache Hadoop Tools …………………………. SUCCESS [0.011s]
[INFO] Apache Hadoop Distribution …………………… SUCCESS [0.043s]
[INFO] Apache Hadoop Client ………………………… SUCCESS [0.106s]
[INFO] Apache Hadoop Mini-Cluster …………………… SUCCESS [0.054s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:00.410s
[INFO] Finished at: Thu Oct 17 15:26:18 CST 2013
[INFO] Final Memory: 95M/1548M
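With -Pdist and -Dtar, the assembled distribution lands under hadoop-dist/target. A quick sanity check (paths assume the default build layout) confirms the tarball exists and that the native libraries match the host architecture:

```shell
# The binary distribution produced by the build
ls -lh hadoop-dist/target/hadoop-2.2.0.tar.gz
# Verify the native library was compiled for this machine (e.g. 64-bit ELF)
file hadoop-dist/target/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0
```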