Hadoop Cluster Deployment (hands-on guide, worth bookmarking)


Prerequisites

  • Three virtual machines running CentOS 7, with hostnames bigdata01, bigdata02, and bigdata03; the root password on all three is root.
  • Make sure all three VMs have completed the prerequisites: JDK installed, passwordless SSH configured, firewall disabled, and hostname mapping set up.
    • JDK installation reference: https://blog.csdn.net/justlpf/article/details/80693508
    • Passwordless SSH setup: https://blog.csdn.net/justlpf/article/details/105405910
    • Configure the /etc/hosts file

Add the following entries to /etc/hosts on all three VMs (these are also the VMs' IP addresses):

192.168.31.115 bigdata01
192.168.31.131 bigdata02
192.168.31.133 bigdata03
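
To rule out typos before moving on, it's worth a quick check that every hostname resolves on each node; a minimal sketch:

# Run on each node: every hostname should resolve and answer
for h in bigdata01 bigdata02 bigdata03; do ping -c 1 "$h"; done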

VM settings

  1. Give bigdata01 at least 4 GB of memory
  2. Give bigdata02 and bigdata03 at least 2 GB of memory each

Role assignment:

  • bigdata01: NameNode, DataNode, ResourceManager, NodeManager, HistoryServer, WebProxyServer, QuorumPeerMain
  • bigdata02: DataNode, NodeManager, QuorumPeerMain
  • bigdata03: DataNode, NodeManager, QuorumPeerMain

Hadoop Cluster Deployment

Download the Hadoop tarball

Download the Hadoop tarball, extract it, and optionally create a symlink.

# 1. Download
# Download page: https://archive.apache.org/dist/hadoop/common/hadoop-3.4.0/
# Run on the bigdata01 node
$. cd /root
$. wget https://archive.apache.org/dist/hadoop/common/hadoop-3.4.0/hadoop-3.4.0.tar.gz

# 2. Extract and move into /usr/local
$. tar -xf hadoop-3.4.0.tar.gz
$. mv hadoop-3.4.0 /usr/local/

# 3. (Optional) Create a version-free symlink
# ln -s /usr/local/hadoop-3.4.0 /usr/local/hadoop
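
A quick sanity check that the tarball landed where the rest of this guide expects it:

$. ls /usr/local/hadoop-3.4.0/bin
# should list hadoop, hdfs, mapred, yarn, among others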

Edit the configuration file: hadoop-env.sh

The configuration files live in /usr/local/hadoop-3.4.0/etc/hadoop; edit hadoop-env.sh.

Variables set in this file only take effect while Hadoop is running;
to make them permanent for your login shell as well, add them to /etc/profile.

$. cd /usr/local/hadoop-3.4.0/etc/hadoop
$. cp hadoop-env.sh hadoop-env.sh-bak

$. vi hadoop-env.sh
# Append the following at the end of the file
export JAVA_HOME=/usr/java/jdk-11/
# Hadoop installation path
export HADOOP_HOME=/usr/local/hadoop-3.4.0

# Hadoop configuration directory
# YARN_CONF_DIR has been replaced by HADOOP_CONF_DIR.
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
# export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop

# Hadoop YARN log directory
# YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR
# export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
# Hadoop HDFS log directory
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs

# Users the Hadoop daemons start as
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export YARN_PROXYSERVER_USER=root

Edit the configuration file: core-site.xml

Empty the file and fill in the following:

$. cd /usr/local/hadoop-3.4.0/etc/hadoop
$. cp core-site.xml core-site.xml-bak
$. vi core-site.xml
# File contents:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bigdata01:8020</value>
    <description></description>
  </property>

  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description></description>
  </property>
</configuration>
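
Once Hadoop is on the PATH (configured in /etc/profile below), hdfs getconf can confirm the value actually in effect; a quick check against the config above:

$. hdfs getconf -confKey fs.defaultFS
hdfs://bigdata01:8020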

Edit the configuration file: hdfs-site.xml

$. cd /usr/local/hadoop-3.4.0/etc/hadoop
$. cp hdfs-site.xml hdfs-site.xml-bak
$. vi hdfs-site.xml
# File contents:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>700</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/nn</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
  </property>

  <property>
    <name>dfs.namenode.hosts</name>
    <value>bigdata01,bigdata02,bigdata03</value>
    <description>List of permitted DataNodes.</description>
  </property>

  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
    <description></description>
  </property>


  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
    <description></description>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/dn</value>
  </property>
</configuration>
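
Note that dfs.blocksize is given in bytes: 268435456 = 256 × 1024 × 1024, i.e. a 256 MB block size. Like fs.defaultFS above, it can be double-checked once Hadoop is on the PATH:

$. hdfs getconf -confKey dfs.blocksize
268435456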

Edit the configuration file: mapred-env.sh

$. cd /usr/local/hadoop-3.4.0/etc/hadoop
$. cp mapred-env.sh mapred-env.sh-bak
$. vi mapred-env.sh

# Append the following environment variables at the end of the file
export JAVA_HOME=/usr/java/jdk-11/
export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA

Edit the configuration file: mapred-site.xml

$. cd /usr/local/hadoop-3.4.0/etc/hadoop
$. cp mapred-site.xml mapred-site.xml-bak
$. vi mapred-site.xml

# Replace the contents with the following
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description></description>
  </property>

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>bigdata01:10020</value>
    <description></description>
  </property>

  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>bigdata01:19888</value>
    <description></description>
  </property>

  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/data/mr-history/tmp</value>
    <description></description>
  </property>

  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/data/mr-history/done</value>
    <description></description>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
  </property>

  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
  </property>

  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
  </property>
</configuration>

Edit the configuration file: yarn-env.sh

$. cd /usr/local/hadoop-3.4.0/etc/hadoop
$. cp yarn-env.sh yarn-env.sh-bak
$. vi yarn-env.sh

# Append the following environment variables at the end of the file
# WARNING: YARN_CONF_DIR has been replaced by HADOOP_CONF_DIR. Using value of YARN_CONF_DIR.
# WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.

export JAVA_HOME=/usr/java/jdk-11/
export HADOOP_HOME=/usr/local/hadoop-3.4.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs
# export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
# export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn

Edit the configuration file: yarn-site.xml

$. cd /usr/local/hadoop-3.4.0/etc/hadoop
$. cp yarn-site.xml yarn-site.xml-bak
$. vi yarn-site.xml

# Replace the contents with the following
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.log.server.url</name>
    <value>http://bigdata01:19888/jobhistory/logs</value>
    <description></description>
  </property>

  <property>
    <name>yarn.web-proxy.address</name>
    <value>bigdata01:8089</value>
    <description>proxy server hostname and port</description>
  </property>

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description>Configuration to enable or disable log aggregation</description>
  </property>

  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
    <description>HDFS directory to which application logs are aggregated.</description>
  </property>

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>bigdata01</value>
    <description></description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    <description></description>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/nm-local</value>
    <description>Comma-separated list of paths on the local filesystem where intermediate data is written.</description>
  </property>

  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/data/nm-log</value>
    <description>Comma-separated list of paths on the local filesystem where logs are written.</description>
  </property>

  <property>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>10800</value>
    <description>Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled.</description>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Shuffle service that needs to be set for Map Reduce applications.</description>
  </property>
</configuration>

Edit the workers file

$. cd /usr/local/hadoop-3.4.0/etc/hadoop
$. cp workers workers-bak
$. vi workers

# Replace the contents with the following
bigdata01
bigdata02
bigdata03
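
start-dfs.sh and start-yarn.sh will SSH into every host listed in workers, so each entry must be reachable without a password prompt; a quick check from bigdata01:

# Each line should print the remote hostname without asking for a password
for h in $(cat /usr/local/hadoop-3.4.0/etc/hadoop/workers); do
  ssh -o BatchMode=yes "$h" hostname
done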

Distribute Hadoop to the other nodes

# Run on the bigdata01 node
cd /usr/local/

scp -r hadoop-3.4.0 bigdata02:/usr/local
scp -r hadoop-3.4.0 bigdata03:/usr/local

Edit /etc/profile

Run on all nodes:

# 1. Edit /etc/profile and append:
$. vi /etc/profile
export HADOOP_HOME=/usr/local/hadoop-3.4.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# 2. Reload the environment variables
$. source /etc/profile
$. which hadoop
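
If the PATH took effect, hadoop can be run from any directory and should report the expected version:

$. hadoop version
Hadoop 3.4.0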

Create the required directories

    • Run on bigdata01:
mkdir -p /data/nn
mkdir -p /data/dn
mkdir -p /data/nm-log
mkdir -p /data/nm-local
    • Run on bigdata02:
mkdir -p /data/dn
mkdir -p /data/nm-log
mkdir -p /data/nm-local
    • Run on bigdata03:
mkdir -p /data/dn
mkdir -p /data/nm-log
mkdir -p /data/nm-local
    (Alternatively, create all of these from bigdata01 with the SSH loop shown below.)
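
Since passwordless SSH is already set up, a minimal sketch of that loop:

# Run on bigdata01
mkdir -p /data/nn /data/dn /data/nm-log /data/nm-local
for h in bigdata02 bigdata03; do
  ssh "$h" "mkdir -p /data/dn /data/nm-log /data/nm-local"
done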

Format the NameNode

Run on bigdata01:

hadoop namenode -format

The hadoop command comes from $HADOOP_HOME/bin.
Because PATH was configured above, it can be run from any directory.
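
A successful format writes the NameNode metadata under dfs.namenode.name.dir; a quick check:

$. ls /data/nn/current
# should contain an fsimage file, seen_txid, and VERSION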

Start the HDFS cluster

# Run on bigdata01
$. start-dfs.sh

# To stop:
$. stop-dfs.sh

# ------------------ Check status ---------------- #
# Option 1
# If HDFS started successfully, you should see the following processes:
#   - NameNode (on the master node)
#   - DataNode (on every node that stores data)
#   - SecondaryNameNode (in a non-HA setup; with HA enabled there is a standby NameNode instead)
$. jps
4408 NameNode
4589 DataNode

# Option 2
# Open the NameNode web UI, which usually listens on port 9870:
http://NameNode_Hostname:9870

# Option 3
# Check the HDFS log files (location depends on your configuration; with the
# settings above they are under $HADOOP_HOME/logs/hdfs) and look for startup
# confirmations or errors.

# Option 4
# Test with an HDFS client command
$. hdfs dfs -ls /

The start-dfs.sh command comes from $HADOOP_HOME/sbin.
Because PATH was configured above, it can be run from any directory.

Start the YARN cluster

# Run on bigdata01
start-yarn.sh

# To stop:
stop-yarn.sh

# ------------------ Check status ---------------- #
# Option 1: daemon status (Hadoop 3.x; see --daemon in the yarn usage output below)
$. yarn --daemon status resourcemanager
$. yarn --daemon status nodemanager

# Option 2
# The master node should show a ResourceManager process.
# Worker nodes should show a NodeManager process.
$. jps
4680 ResourceManager
5241 NodeManager

# Option 3
# Open the ResourceManager web UI in a browser, usually on port 8088:
http://ResourceManager_Hostname:8088

# Option 4
# Check the YARN log files (with the settings above, under the directory set by
# HADOOP_LOG_DIR) and look for startup confirmations or errors.
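
The ResourceManager also exposes a REST API, which makes for an easy scripted health check; a sketch using curl:

# Returns cluster info as JSON; "state":"STARTED" means the ResourceManager is up
$. curl -s http://bigdata01:8088/ws/v1/cluster/info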

Start the history server

$. mapred --daemon start historyserver

# To stop, replace start with stop
$. mapred --daemon stop historyserver
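
To confirm the history server is up without opening a browser, its REST endpoint can be queried (port 19888, as configured in mapred-site.xml):

$. curl -s http://bigdata01:19888/ws/v1/history/info
# returns JSON including the history server's start time and Hadoop version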

Start the web proxy server

$. yarn --daemon start proxyserver

# To stop, replace start with stop
$. yarn --daemon stop proxyserver

Verify the Hadoop cluster

Verify processes

Use jps on bigdata01, bigdata02, and bigdata03 to verify that all processes started successfully.

# bigdata01
$. jps
8401 NameNode
8513 DataNode
9201 WebAppProxyServer
9106 JobHistoryServer
8712 SecondaryNameNode

# bigdata02
$. jps
22768 DataNode

# bigdata03
$. jps
26675 DataNode

Verify HDFS

Open http://bigdata01:9870 in a browser. Then create a file test.txt with arbitrary content and run:

# the hdfs dfs command is equivalent to hadoop fs
$. hadoop fs -put test.txt /test.txt
# or
$. hdfs dfs -put test.txt /test.txt

$. hadoop fs -cat /test.txt
# or
$. hdfs dfs -cat /test.txt

hadoop fs command reference

# -------------------------- Other commands -------------------------- #
hadoop fs -ls /       # list directory contents
hadoop fs -ls -R /    # list directory contents recursively
hadoop fs -mkdir /user/tguigu    # create a directory on HDFS
hadoop fs -moveFromLocal test.txt /user/tguigu/data    # move (cut and paste) a local file to HDFS
hadoop fs -appendToFile test.txt /user/tguigu/data/test.txt    # append a local file to an existing HDFS file
hadoop fs -cat        # print a file's contents
hadoop fs -tail       # print the end of a file
hadoop fs -cp /user/tguigu/../x.txt /user/tguigu/test../    # copy from one HDFS path to another
hadoop fs -mv /user/tguigu/../x.txt /.../    # move a file within HDFS
hadoop fs -get /user/tguigu/../x.txt ./    # same as copyToLocal: download from HDFS to local
hadoop fs -getmerge /user/tguigu/test/* ./zaiyiqi.txt    # merge and download multiple files
hadoop fs -put        # same as copyFromLocal: upload a local file
hadoop fs -rm         # delete files or directories
hadoop fs -rmdir      # delete an empty directory
hadoop fs -df         # report free space in the filesystem
hadoop fs -du         # report file sizes
hadoop fs -setrep     # set the replication factor of HDFS files
    # e.g.: hadoop fs -setrep -R 3 /user/hadoop/data
    hadoop fs -setrep [-R] [-w] <numReplicas> <path>
        # -setrep: set the replication factor
        # -R: recursive; apply to all files under the given directory
        # -w: wait for replication to complete before returning
        # <numReplicas>: the desired number of replicas
        # <path>: the HDFS file or directory to change
# --------------------------  hadoop fs ---------------------------- # 
# [root@lpf-vm-115 hadoop-3.4.0]# hdfs dfs -h
# [root@lpf-vm-115 hadoop-3.4.0]# hadoop fs -h
-h: Unknown command
Usage: hadoop fs [generic options]
        [-appendToFile [-n] <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum [-v] <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-concat <target path> <src path> <src path> ...]
        [-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] [-q <thread pool queue size>] <localsrc> ... <dst>]
        [-copyToLocal [-f] [-p] [-crc] [-ignoreCrc] [-t <thread count>] [-q <thread pool queue size>] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] [-s] <path> ...]
        [-cp [-f] [-p | -p[topax]] [-d] [-t <thread count>] [-q <thread pool queue size>] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] [-v] [-x] <path> ...]
        [-expunge [-immediate] [-fs <path>]]
        [-find <path> ... <expression> ...]
        [-get [-f] [-p] [-crc] [-ignoreCrc] [-t <thread count>] [-q <thread pool queue size>] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
        [-head <file>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] [-d] [-t <thread count>] [-q <thread pool queue size>] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] [-s <sleep interval>] <file>]
        [-test -[defswrz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touch [-a] [-m] [-t TIMESTAMP (yyyyMMdd:HHmmss) ] [-c] <path> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Generic options supported are:
-conf <configuration file>        specify an application configuration file
-D <property=value>               define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>  specify a ResourceManager
-files <file1,...>                specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>               specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>          specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

Verify YARN

Open http://bigdata01:8088/cluster in a browser, then run:

# Create a file words.txt with the following contents
$. vi words.txt
example osc hadoop
osc hadoop hadoop
osc hadoop

# Upload the file to HDFS
$. hadoop fs -put words.txt /words.txt

# Run the following to verify that YARN works.
# If the job appears in the web UI and completes without errors, the cluster deployment succeeded!
hadoop jar \
/usr/local/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar \
wordcount \
-Dmapred.job.queue.name=root.root \
/words.txt \
/output

# Inspect the uploaded input
$. hadoop fs -cat /words.txt
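
With the three-line words.txt above, the counts are deterministic (example once, osc three times, hadoop four times), so the job output is easy to eyeball:

# View the job result (tab-separated word/count pairs)
$. hadoop fs -cat /output/part-r-00000
example 1
hadoop  4
osc     3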

yarn command reference

A reference for commonly used yarn commands:

# ------------------------ yarn reference commands ------------------------ #
# yarn examples

# Check YARN daemon status (Hadoop 2.x syntax; on 3.x prefer: yarn --daemon status <daemon>)
yarn resourcemanager status
yarn nodemanager status


# List status info for all applications running (or recently run) on the YARN cluster.
$. yarn application -list [-appStates <state1, state2, ...>] [-all]
   $. yarn application -list -all
# Show details of a specific application
$. yarn application -status <Application ID>

# View an application's logs:
yarn logs -applicationId <Application ID>

# Kill a specific application
yarn application -kill <Application ID>

$. yarn top
$. yarn node

# Show cluster node information:
$. yarn node -list    # shows each node's NodeManager information URL and nodeId
$. yarn node -list -all

$. yarn node -list -showDetails

# Show nodes in specific states:
$. yarn node -list -states RUNNING
# -status <NodeId>   Prints the status report of the node.
$. yarn node -status  bigdata02:30823

# yarn -h
[root@lpf-vm-115 bin]# yarn -h
WARNING: YARN_CONF_DIR has been replaced by HADOOP_CONF_DIR. Using value of YARN_CONF_DIR.
WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.
Usage: yarn [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    yarn [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:

--buildpaths                       attempt to add class files from build tree
--config dir                       Hadoop config directory
--daemon (start|status|stop)       operate on a daemon
--debug                            turn on shell script debug mode
--help                             usage information
--hostnames list[,of,host,names]   hosts to use in worker mode
--hosts filename                   list of hosts to use in worker mode
--loglevel level                   set the log4j level for this command
--workers                          turn on worker mode

  SUBCOMMAND is one of:


    Admin Commands:

daemonlog               get/set the log level for each daemon
node                    prints node report(s)
rmadmin                 admin tools
routeradmin             router admin tools
scmadmin                SharedCacheManager admin tools

    Client Commands:

app|application         prints application(s) report/kill application/manage long running application
applicationattempt      prints applicationattempt(s) report
classpath               prints the class path needed to get the hadoop jar and the required libraries
cluster                 prints cluster information
container               prints container(s) report
envvars                 display computed Hadoop environment variables
fs2cs                   converts Fair Scheduler configuration to Capacity Scheduler (EXPERIMENTAL)
jar <jar>               run a jar file
logs                    dump container logs
nodeattributes          node attributes cli client
queue                   prints queue information
schedulerconf           Updates scheduler configuration
timelinereader          run the timeline reader server
top                     view cluster information
version                 print the version

    Daemon Commands:

globalpolicygenerator   run the Global Policy Generator
nodemanager             run a nodemanager on each worker
proxyserver             run the web app proxy server
registrydns             run the registry DNS server
resourcemanager         run the ResourceManager
router                  run the Router daemon
sharedcachemanager      run the SharedCacheManager daemon
timelineserver          run the timeline server

SUBCOMMAND may print help when invoked w/o parameters or with -h.

Other web UI URLs

# YARN ResourceManager web UI (cluster overview)
http://bigdata01:8088/cluster

# NameNode web UI - DataNode information
http://bigdata01:9870/dfshealth.html#tab-datanode

# YARN aggregated log URL (yarn.log.server.url):
http://bigdata01:19888/jobhistory/logs

# Hadoop JobHistory server
http://bigdata01:19888/jobhistory

# Web proxy server (yarn.web-proxy.address)
http://bigdata01:8089

# NodeManager information
http://bigdata01:8042/node
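
All of these endpoints can be smoke-tested in one pass; a small sketch that prints the HTTP status code for each URL:

# 200 (or a redirect code) means the UI is serving
for url in \
  http://bigdata01:8088/cluster \
  http://bigdata01:9870/dfshealth.html \
  http://bigdata01:19888/jobhistory \
  http://bigdata01:8089 \
  http://bigdata01:8042/node; do
  echo "$url -> $(curl -s -o /dev/null -w '%{http_code}' "$url")"
done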
