My First CTF Write Up
SANS Hackfest 2020
I have always wanted to participate in a CTF. I knew it would be a great way to learn, but I was always intimidated: afraid I would spend days hacking away at something only to end the game dead last, with zero flags.
However, in 2020 I attended a virtual conference, SANS Hackfest, and heard about a few free CTFs that would be open during the conference. The speaker described the different events, calling BootUp a great start for beginners, so I gave it a shot. The following is the write-up of my first CTF.
* Spoiler Alert * It seems I did come in dead last, but I lost nothing. I gained an amazing experience, had fun, and learned quite a bit. So let’s dive into it.
BootUp 2020
This CTF was jeopardy style, with challenges separated into categories: binary exploitation, forensics, networking, and web. Each challenge was assigned points based on its difficulty.
Web
WE01 - URL Fuzzing: 100pt
Our first web challenge starts here.
The hint said to “think of places you could get a list of common directories to check for”. I tried appending /robots.txt to the end of the URL, but it seemed to take me back to the same destination web page. I knew robots.txt was not necessarily going to give me a list of common directories, but at first I thought it might list directories that were either allowed or disallowed to crawlers.
This challenge is called URL Fuzzing. URL fuzzing is making numerous requests to a server in an attempt to find hidden paths, files, or directories; the goal is to discover content that is not obvious to ordinary visitors of the site. The requests are built from a wordlist of commonly used directory and file names: the tool traverses the list, sends a request for each candidate, and logs the responses. Any path that returns a 200 is reported back to the user.
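The fuzzing loop described above can be sketched in a few lines of Python. The base URL, wordlist, and stub responses here are hypothetical; the `fetch` callable stands in for a real HTTP request so the core logic is easy to follow:

```python
from urllib.parse import urljoin

def fuzz_directories(base_url, wordlist, fetch):
    """Try each candidate path and report the ones that return HTTP 200.

    `fetch` is any callable mapping a URL to a status code, so the
    loop can be exercised without touching the network.
    """
    found = []
    for word in wordlist:
        url = urljoin(base_url, f"/{word}/")
        if fetch(url) == 200:
            found.append(url)
    return found

# Stub "server": only /sample/ exists (hypothetical, mirroring the challenge).
statuses = {"http://target.example/sample/": 200}
fetch = lambda url: statuses.get(url, 404)

print(fuzz_directories("http://target.example", ["admin", "backup", "sample"], fetch))
# → ['http://target.example/sample/']
```

In practice a dedicated tool (gobuster, ffuf, dirb, etc.) does exactly this loop, just faster and with much larger wordlists.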
This would give me an idea of which paths on the site we can access. The results of the scan showed a new route, /sample/, returning a 200 response. I navigated back to the web page and appended /sample/ to the URL. Lo and behold, we seemed to have found it.
Contents of flag.txt: bustING_direTORies_8918
WE03 - Web Crawling: 100pt
In this exercise, we start out with a new web page.
In this challenge, we are instructed to navigate to the URL and try to find the flag. I decided to take a similar approach to the one I took before and attempted to navigate to /robots.txt.
Figure 5 displays the rules from robots.txt: User-agent: * Disallow: /61829201829023.html. This tells search engine crawlers, regardless of user agent, to ignore the path /61829201829023.html. But since robots.txt itself is readable by anyone, we can see exactly the path we “should not navigate to”. So I typed in our URL followed by the “disallowed path”, and here we can see our flag!
WM03 - APIs: 250pt
This next challenge's hint tells us to get the flag from the API. So we check it out by following the challenge URL.
I decided to navigate to the URL while proxying through Burp Suite to examine the structure of our requests. I noticed that, upon requesting the web page from the image above, a POST request was made to an API route, /stag/wm03, asking for user information. I assumed this request determines the user data (User/Age) displayed on the initial web page:
The response that we see:
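To replay a request like this outside the browser, one option is to rebuild it from the details captured in Burp. The sketch below only constructs the request; the host and JSON payload are placeholders, not values from the challenge:

```python
import json
from urllib import request

def build_api_post(url, payload):
    """Build a JSON POST request like the one observed in Burp."""
    return request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical host and body -- in the real challenge these come from Burp.
req = build_api_post("http://target.example/stag/wm03", {"user": "guest"})
print(req.get_method(), req.full_url)  # → POST http://target.example/stag/wm03

# Actually sending it would be: request.urlopen(req).read()
```

Tampering with the replayed body and watching how the API's response changes is often exactly how these challenges give up their flag.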