Hadoop Reading Notes (2): HDFS Shell Operations
Hadoop Reading Notes (1), Introduction to Hadoop: http://blog.csdn.net/caicongyang/article/details/39898629
1. Shell Operations
1.1 The full list of HDFS shell commands can be printed by running hadoop fs with no arguments:
[root@hadoop ~]# hadoop fs
Usage: java FsShell
[-ls <path>]
[-lsr <path>]
[-du <path>]
[-dus <path>]
[-count[-q] <path>]
[-mv <src> <dst>]
[-cp <src> <dst>]
[-rm [-skipTrash] <path>]
[-rmr [-skipTrash] <path>]
[-expunge]
[-put <localsrc> ... <dst>]
[-copyFromLocal <localsrc> ... <dst>]
[-moveFromLocal <localsrc> ... <dst>]
[-get [-ignoreCrc] [-crc] <src> <localdst>]
[-getmerge <src> <localdst> [addnl]]
[-cat <src>]
[-text <src>]
[-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>]
[-moveToLocal [-crc] <src> <localdst>]
[-mkdir <path>]
[-setrep [-R] [-w] <rep> <path/file>]
[-touchz <path>]
[-test -[ezd] <path>]
[-stat [format] <path>]
[-tail [-f] <file>]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-chgrp [-R] GROUP PATH...]
[-help [cmd]]
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|jobtracker:port> specify a job tracker
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
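The generic options listed above must appear before the command-specific options. As a minimal sketch (assuming a pseudo-distributed cluster whose NameNode listens on hadoop:9000, as in the examples below), the default file system can be overridden on the command line in two equivalent ways:

```shell
# Override the target file system with the -fs generic option
# (assumes a NameNode reachable at hadoop:9000)
hadoop fs -fs hdfs://hadoop:9000 -ls /

# The same override expressed as a configuration property via -D
hadoop fs -D fs.default.name=hdfs://hadoop:9000 -ls /
```

Both forms temporarily take the place of the fs.default.name value in core-site.xml for that one invocation; neither changes the cluster configuration.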
1.2 Common Operations
Every HDFS operation starts with hadoop fs followed by the corresponding subcommand.
1.2.1 List all files under the HDFS root:
[root@hadoop ~]# hadoop fs -ls hdfs://hadoop:9000/
hdfs://hadoop:9000 is the default file system name configured in core-site.xml, so the command above can be shortened to:
[root@hadoop ~]# hadoop fs -ls /
1.2.2 Upload a file: copy the local Linux file /usr/local/hadoop-1.1.2.tar.gz into the HDFS directory /download:
[root@hadoop ~]# hadoop fs -put /usr/local/hadoop-1.1.2.tar.gz /download
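A slightly fuller upload sketch (paths reused from the example above; assumes the cluster from the earlier sections is running): create the target directory first, upload, then confirm. For local sources, -copyFromLocal behaves the same as -put.

```shell
# Create the target directory in HDFS before uploading
hadoop fs -mkdir /download

# Upload the tarball; -copyFromLocal is equivalent for local sources
hadoop fs -put /usr/local/hadoop-1.1.2.tar.gz /download
# hadoop fs -copyFromLocal /usr/local/hadoop-1.1.2.tar.gz /download

# Confirm the file landed in HDFS
hadoop fs -ls /download
```

Creating the directory first matters: if /download does not exist, -put of a single file creates a *file* named /download instead of placing the file inside a directory.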
1.2.3 View the uploaded file: recursively list everything under /download:
[root@hadoop ~]# hadoop fs -lsr /download
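The reverse direction works the same way. A brief sketch of downloading the file back (the local destination /tmp is illustrative; assumes the upload from the previous step succeeded):

```shell
# Download the file from HDFS back to the local file system
hadoop fs -get /download/hadoop-1.1.2.tar.gz /tmp/

# Or merge every file under an HDFS directory into a single local file
hadoop fs -getmerge /download /tmp/merged.out
```

-getmerge is handy for collecting the part-* output files of a MapReduce job into one local file.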
1.3 Getting help for an HDFS shell command
[root@hadoop ~]# hadoop fs -help chown
-chown [-R] [OWNER][:[GROUP]] PATH...
Changes owner and group of a file.
This is similar to shell's chown with a few exceptions.
-R modifies the files recursively. This is the only option
currently supported.
If only owner or group is specified then only owner or
group is modified.
The owner and group names may only cosists of digits, alphabet,
and any of '-_.@/' i.e. [-_.@/a-zA-Z0-9]. The names are case
sensitive.
WARNING: Avoid using '.' to separate user name and group though
Linux allows it. If user names have dots in them and you are
using local file system, you might see surprising results since
shell command 'chown' is used for local files.
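Putting the help text above into practice, a short permissions sketch (user/group hadoop:hadoop and the /download directory are reused from the earlier examples; note that in HDFS, chown requires superuser privileges):

```shell
# Recursively assign ownership of /download to user hadoop, group hadoop
hadoop fs -chown -R hadoop:hadoop /download

# Restrict permissions with an octal mode, as with the local chmod: rwxr-x---
hadoop fs -chmod -R 750 /download

# Verify owner, group, and mode with a recursive listing
hadoop fs -lsr /download
```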
Everyone is welcome to discuss and learn together.
If you find this useful, bookmark it!
Recording and sharing help us grow together! Feel free to browse my other posts at: http://blog.csdn.net/caicongyang