First, a quick plug for my own project:
The open-source Hive management tool phpHiveAdmin was updated to 0.05 beta2 today.
ChangeLog:
1. The SQL query page has been completely rewritten. Complex queries can now report map/reduce progress in real time in an asynchronous, non-blocking way; simple queries with a LIMIT clause still go through Thrift.
2. The way query results are fetched has changed. To keep large result sets from exhausting PHP's memory, non-LIMIT result sets are downloaded directly: only a 30-row preview is shown, and the full data can be fetched through a download link.
3. To download data, make sure the permissions on the phpHiveAdmin/tmp directory are correct.
Visit http://www.phphiveadmin.net for more details.
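For item 3 above, the permission fix might look like the following sketch (the install path and the mode are assumptions; use whatever lets your web-server user write there):

```shell
# Hypothetical install location -- adjust to wherever phpHiveAdmin is deployed.
PHA_HOME=./phpHiveAdmin_demo

# Create the tmp directory the download feature writes result files into,
# and make it writable by the web-server process (assumed group-writable here).
mkdir -p "$PHA_HOME/tmp"
chmod 775 "$PHA_HOME/tmp"

ls -ld "$PHA_HOME/tmp"
```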
#-----------------------------------------------------
You need to assign the NameNode a hostname; this is required, since a Hadoop cluster consults /etc/hosts first when resolving hosts. So let's give the NameNode a hostname by adding
192.168.1.2 namenode01
to /etc/hosts (IP address first, then the hostname).
Then open $HADOOP_HOME/conf/masters on the NameNode and write hadoopmaster-177.tj into it.
Open $HADOOP_HOME/conf/slaves and write your DataNode hostnames, one per line.
Save and exit. Naturally, the corresponding hostnames in hdfs-site.xml, mapred-site.xml, and core-site.xml must be updated to match as well.
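The steps above can be sketched as a shell session. It is written against a demo directory so it runs without root; on a real cluster you would edit /etc/hosts and $HADOOP_HOME/conf directly, and the hostnames are just the examples used in this post:

```shell
# Demo directory standing in for /etc and $HADOOP_HOME/conf.
CONF=./demo_conf
mkdir -p "$CONF"

# /etc/hosts entry for the NameNode: IP address first, then hostname.
echo "192.168.1.2 namenode01" >> "$CONF/hosts"

# masters holds the master hostname.
echo "hadoopmaster-177.tj" > "$CONF/masters"

# slaves lists every DataNode hostname, one per line.
printf "%s\n" datanode01 datanode02 > "$CONF/slaves"

cat "$CONF/slaves"
```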
Below is the DataNode configuration.
#--------------------------------------------------------------
core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoopmaster-177.tj:9000</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/opt/data/hadoop1/hdfs/namesecondary1,/opt/data/hadoop2/hdfs/namesecondary2</value>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <value>1800</value>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <value>33554432</value>
  </property>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
  </property>
  <property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>
</configuration>
hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/opt/data/hadoop/hdfs/name</value>
    <description></description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/opt/data/hadoop/hdfs/data,/opt/data/hadoop1/hdfs/data,/opt/data/hadoop2/hdfs/data,/opt/data/hadoop3/hdfs/data,/opt/data/hadoop4/hdfs/data</value>
    <description></description>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>hadoopmaster-177.tj:50070</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>hadoopslave-189.tj:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>1073741824</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
</configuration>
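The directories named in dfs.name.dir and dfs.data.dir must exist and be writable by the user running the Hadoop daemons before the DataNode will start. A sketch, using a demo prefix instead of /opt so it runs unprivileged (the hadoop user/group in the comment is an assumption):

```shell
# Demo prefix standing in for /opt/data; on a real node drop the prefix.
PREFIX=./demo_data

# One directory per dfs.data.dir entry from hdfs-site.xml above,
# typically one per physical disk.
for d in hadoop hadoop1 hadoop2 hadoop3 hadoop4; do
    mkdir -p "$PREFIX/$d/hdfs/data"
done

# dfs.name.dir only matters on the NameNode, but creating it is harmless.
mkdir -p "$PREFIX/hadoop/hdfs/name"

# On a real cluster, also: chown -R hadoop:hadoop /opt/data  (assumed user)
ls "$PREFIX"
```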
mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoopmaster-177.tj:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/opt/data/hadoop/mapred/mrlocal,/opt/data/hadoop1/mapred/mrlocal,/opt/data/hadoop2/mapred/mrlocal,/opt/data/hadoop3/mapred/mrlocal,/opt/data/hadoop4/mapred/mrlocal</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/opt/data/hadoop1/mapred/mrsystem</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>12</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx1224M</value>
  </property>
  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx2048M</value>
  </property>
  <property>
    <name>mapred.reduce.parallel.copies</name>
    <value>10</value>
  </property>
  <property>
    <name>io.sort.factor</name>
    <value>100</value>
  </property>
  <property>
    <name>mapred.job.reduce.input.buffer.percent</name>
    <value>0.3</value>
  </property>
  <property>
    <name>mapred.compress.map.output</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.map.output.compression.codec</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Djava.library.path=/opt/hadoopgpl/native/Linux-amd64-64</value>
  </property>
  <property>
    <name>io.sort.mb</name>
    <value>600</value>
  </property>
  <property>
    <name>fs.inmemory.size.mb</name>
    <value>500</value>
  </property>
</configuration>
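It is worth sanity-checking the worst-case task memory these slot settings allow on a single TaskTracker: 12 map slots at 1224 MB plus 4 reduce slots at 2048 MB must fit in the machine's RAM alongside the DataNode and TaskTracker JVMs themselves. A quick check:

```shell
# Slot counts and per-task heaps from mapred-site.xml above.
MAP_SLOTS=12;  MAP_HEAP_MB=1224
RED_SLOTS=4;   RED_HEAP_MB=2048

# Worst case: every slot occupied at its full -Xmx at the same time.
TOTAL_MB=$(( MAP_SLOTS * MAP_HEAP_MB + RED_SLOTS * RED_HEAP_MB ))
echo "worst-case task heap: ${TOTAL_MB} MB"   # 22880 MB, roughly 22.3 GB
```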
Next, each DataNode's /etc/hosts must list both its own and the NameNode's hostname and IP.
For example:
192.168.1.2 namenode01
192.168.1.10 datanode01
Every time you add a DataNode, you have to add its hostname and address to /etc/hosts on every server, which is one of Hadoop's less convenient aspects. And of course the masters and slaves files need the new entries too.
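Keeping /etc/hosts in sync by hand gets tedious as the cluster grows, so it helps to generate the file once from a node list and push it everywhere. A sketch (the node names are hypothetical, and the scp step in the comment assumes root ssh access):

```shell
# Hypothetical node list: "hostname ip" pairs, one per line.
NODES="namenode01 192.168.1.2
datanode01 192.168.1.10
datanode02 192.168.1.11"

# Emit /etc/hosts lines (IP address first, then hostname) into a demo file.
printf "%s\n" "$NODES" | while read -r name ip; do
    echo "$ip $name"
done > ./demo_hosts

cat ./demo_hosts
# To distribute the real file:
#   for h in namenode01 datanode01 datanode02; do
#       scp /etc/hosts "root@$h:/etc/hosts"
#   done
```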
That wraps up the DataNode configuration. Next time I'll cover backing up the NameNode, i.e. configuring the secondary (SecondaryNameNode): the NameNode can be hot-backed-up to guard against a single point of failure, which is also why my DataNode addresses start from 192.168.1.10.
If you have any questions about this series, or your setup fails, please ask in the easyhadoop QQ group created by Baofeng (暴风影音). The group number is 93086930.