[Solved] Hadoop start-all.sh startup problems
Problem 1: "Attempting to operate on hdfs namenode as root"
Notes before you start:
1. All four files (start-dfs.sh, stop-dfs.sh, start-yarn.sh, stop-yarn.sh) must be modified on both the master and the slaves.
2. If your Hadoop daemons are started by a user other than root, replace root with that user in the snippets below.
After formatting HDFS, starting dfs fails with the following errors:
[root@master sbin]# ./start-dfs.sh
Starting namenodes on [master]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [slave1]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
A Baidu search turned up a blog post with a FAQ for exactly this error, so I followed it and am recording the fix here.
Reference: https://blog.csdn.net/u013725455/article/details/70147331
In the /hadoop/sbin directory, add the following variables to the top of both start-dfs.sh and stop-dfs.sh:
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
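An alternative worth noting: in Hadoop 3.x the start/stop scripts source hadoop-env.sh, so these user variables can instead be defined once there rather than patched into each script. A sketch, assuming a default $HADOOP_HOME layout; adjust the user names to your setup:

```shell
# Append to $HADOOP_HOME/etc/hadoop/hadoop-env.sh (assumption: Hadoop 3.x,
# daemons run as root). One edit covers both the dfs and yarn scripts.
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```

This keeps the stock sbin scripts untouched, which survives upgrades better.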
Likewise, add the following to the top of start-yarn.sh and stop-yarn.sh:
#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
# Licensed to the Apache Software Foundation (ASF) under one or more
(i.e., the variables go right after the shebang, before the license header)
After the changes, rerun ./start-dfs.sh — success. (The warning in the output below appears because HADOOP_SECURE_DN_USER is a deprecated name for HDFS_DATANODE_SECURE_USER; it is harmless here.)
[root@master sbin]# ./start-dfs.sh
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Starting namenodes on [master]
Last login: Sun Jun 3 03:01:37 CST 2018 from slave1 on pts/2
master: Warning: Permanently added 'master,192.168.43.161' (ECDSA) to the list of known hosts.
Starting datanodes
Last login: Sun Jun 3 04:09:05 CST 2018 on pts/1
Starting secondary namenodes [slave1]
Last login: Sun Jun 3 04:09:08 CST 2018 on pts/1
from: https://blog.csdn.net/lglglgl/article/details/80553828
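A quick way to confirm the daemons actually came up is to compare jps output against the list you expect. A minimal sketch; the check_daemons helper and the daemon list are illustrative, not part of Hadoop:

```shell
# Hypothetical helper: report which expected daemons are missing from
# `jps` output (prints "all present" when none are missing).
expected="NameNode DataNode SecondaryNameNode"
check_daemons() {
  missing=""
  for d in $expected; do
    # -w: match the daemon name as a whole word in the jps listing
    printf '%s\n' "$1" | grep -qw "$d" || missing="$missing $d"
  done
  echo "${missing:-all present}"
}

# On a real node you would run: check_daemons "$(jps)"
# Simulated listing where only the NameNode came up:
sample="4241 NameNode
4690 Jps"
check_daemons "$sample"
```

If anything is missing, that is the symptom handled in Problem 2 below.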
Problem 2: after a seemingly normal start, jps shows no NameNode or DataNode
Solution:
1. Check whether the firewall is running; if so, stop it: service iptables stop
2. Check the port: telnet 192.168.86.10 9000 (192.168.86.10 is my static IP)
If the connection fails, the hosts file needs fixing.
3. Edit the hosts file: vi /etc/hosts
Change 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
to ::1 localhost localhost.localdomain localhost4 localhost4.localdomain4
(or simply delete the 127.0.0.1 line), and add below it:
192.168.86.10 mini1 localhost (192.168.86.10 is my static IP, mini1 is my hostname)
4. Restart networking: service network restart (or sudo /etc/init.d/networking restart)
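The hosts check in step 3 can be scripted. A sketch using the example IP 192.168.86.10 and hostname mini1 from above; it demonstrates against a temporary file rather than the real /etc/hosts:

```shell
# Sketch: verify that a hosts file maps the hostname to the static IP,
# not to a loopback entry. IP/HOST are the example values from the text.
IP="192.168.86.10"
HOST="mini1"

hosts_ok() {
  # Succeeds only if some line starts with the IP and lists the hostname.
  grep -qE "^${IP}[[:space:]].*${HOST}([[:space:]]|$)" "$1"
}

# Demo against a temporary file shaped like the fixed /etc/hosts:
f="$(mktemp)"
{
  echo "::1 localhost localhost.localdomain localhost4 localhost4.localdomain4"
  echo "$IP $HOST localhost"
} > "$f"
hosts_ok "$f" && echo "ok: $HOST -> $IP" || echo "problem: $HOST not mapped to $IP"
rm -f "$f"
```

To check the real file, call hosts_ok /etc/hosts with your own IP and hostname.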
from: https://blog.csdn.net/u010599953/article/details/75635058