<p><em>WhyNotSecurity, yet another InfoSec blog (<a href="https://whynotsecurity.com/feed.xml">feed</a>)</em></p>
<p><em>Credmaster2 (published 2023-01-22): <a href="https://whynotsecurity.com/blog/credmaster2">whynotsecurity.com/blog/credmaster2</a></em></p>
<p>CredMaster 2: Electric Boogaloo</p>
<p>Upgrades, Modules and Feature Additions</p>
<ul>
<li><a href="#tldr">TLDR</a></li>
<li><a href="#new-plugins">New Plugins</a>
<ul>
<li><a href="#gmail-user-enumeration">Gmail User Enum</a></li>
<li><a href="#office365-managed-tenant-user-enumeration">Office365 User Enum</a></li>
<li><a href="#owaews">OWA/EWS</a></li>
<li><a href="#adfs">ADFS</a></li>
<li><a href="#azure-seamless-sso">Azure Seamless SSO</a></li>
<li><a href="#azure-vault">Azure Vault</a></li>
<li><a href="#msgraph">MSGraph</a></li>
</ul>
</li>
<li><a href="#config-file-updates">Config File Updates</a></li>
<li><a href="#new-features">New Features</a>
<ul>
<li><a href="#weekday-warrior">Weekday Warrior</a></li>
<li><a href="#notification-system">Notification System</a></li>
<li><a href="#header-addition">Header Addition</a></li>
<li><a href="#fireprox-utility-functions">FireProx Utilities</a></li>
<li><a href="#other-stray-additions">Others</a></li>
</ul>
</li>
<li><a href="#credits">Credits</a></li>
</ul>
<p>Github: <a href="https://github.com/knavesec/CredMaster">github.com/knavesec/CredMaster</a></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster2/1.png" alt="screenshot1" /></p>
<h2 id="tldr">TLDR</h2>
<p>Roughly 2 years ago, I released CredMaster as an all-in-one password spraying suite.</p>
<p>If you’re familiar with CredMaster, feel free to skip down a paragraph. For those unfamiliar, CredMaster is a plugin-based tool for running anonymous password sprays: it launches sprays through AWS API Gateways to rotate the requesting IP address on each request, beating throttle detections by spoofing and changing identification markers. This was all based on the stellar research by <a href="https://twitter.com/ustayready">@ustayready’s</a> awesome <a href="https://github.com/ustayready/fireprox">Fireprox tool</a>. Feel free to read the original CredMaster blog post with all the juicy details <a href="https://whynotsecurity.com/blog/credmaster/">here</a>.</p>
<p>In the time since release, I’ve continued adding features and modules while fixing bugs. It felt like a great time to give an update on that progress. The new features are listed below:</p>
<ul>
<li><a href="#config-file-updates">Config File Updates</a></li>
<li><a href="#new-plugins">8 New Plugins</a></li>
<li><a href="#notification-system">Notification systems</a></li>
<li><a href="#weekday-warrior">Weekday Warrior Evasion</a></li>
<li><a href="#fireprox-utility-functions">FireProx Utilities</a></li>
<li>Color Output</li>
<li>Automatic logging of successful creds & valid users</li>
</ul>
<p><strong>Thank you so much to the members of the community who have contributed your time in helping this tool, either by your own research, direct pull request, bug fixes or issue reports. See the <a href="#Credits">Credits</a> section for the list of contributors. Special thanks to <a href="https://twitter.com/ZephrFish">Andy Gill</a> who helped re-write and spark anew many of these features</strong></p>
<h2 id="new-plugins">New Plugins</h2>
<p>A total of 8 new plugins have been added: 2 user enum, 6 spraying. Andy is working on an MFASweep module at the time of writing, to be pushed and merged soon.</p>
<h3 id="gmail-user-enumeration">Gmail User Enumeration</h3>
<p>User enumeration technique for Gmail and GSuite users, based on x0rz’s research found <a href="https://blog.0day.rocks/abusing-gmail-to-get-previously-unlisted-e-mail-addresses-41544b62b2">here</a></p>
<p>It simply takes an input list of users and returns valid/unknown for each; it <em>will not</em> make an authentication request.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster.py <config args> --plugin gmailenum -u users.txt
</code></pre></div></div>
<h3 id="office365-managed-tenant-user-enumeration">Office365 Managed Tenant User Enumeration</h3>
<p>User enumeration for Office365 Managed tenants, via the classic redirect in login.microsoftonline.com. Again, no authentication attempts are made against the account.</p>
<p>This has been tested with 15 threads and the entirety of <a href="https://github.com/insidetrust/statistically-likely-usernames">statistically-likely-username’s</a> <a href="https://github.com/insidetrust/statistically-likely-usernames/blob/master/jsmith.txt">jsmith.txt</a> userlist (~50k usernames) without throttling/limiting.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster <config args> --plugin o365enum -u users.txt
</code></pre></div></div>
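<p>The plugin itself works via the redirect behavior mentioned above; as a related illustration, Microsoft’s widely known <code class="language-plaintext highlighter-rouge">GetCredentialType</code> endpoint exposes the same Managed-tenant user enumeration. This is a sketch, not CredMaster’s plugin code, and the helper names are mine:</p>

```python
# Illustrative only: GetCredentialType returns an IfExistsResult field
# that reveals whether a username exists in a Managed tenant.
import json
import urllib.request

GCT_URL = "https://login.microsoftonline.com/common/GetCredentialType"

def interpret_if_exists(result):
    """Map IfExistsResult to a verdict: 0 = valid, 1 = invalid, else unknown."""
    return {0: "valid", 1: "invalid"}.get(result, "unknown")

def check_user(username):
    # One POST per user; no authentication attempt is made.
    req = urllib.request.Request(
        GCT_URL,
        data=json.dumps({"Username": username}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return interpret_if_exists(json.load(resp).get("IfExistsResult"))
```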
<h3 id="owaews">OWA/EWS</h3>
<p>These are the classic Outlook Web App (OWA) and Exchange Web Services (EWS) on-prem email solution password sprayers. On-prem password sprays really need no advanced throttle evasion, but it’s always great to have the option.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster.py <config args> --plugin {owa | ews} --url https://mail.domain.com
</code></pre></div></div>
<h3 id="adfs">ADFS</h3>
<p>This is a tool to spray on-prem AD/FS servers for domain-joined accounts. These are typically juicy since there are fewer throttle controls for password sprays. Contributed by <a href="https://twitter.com/frycos">frycos</a>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster.py <config args> --plugin adfs --url https://adfs.domain.com
</code></pre></div></div>
<h3 id="azure-seamless-sso">Azure Seamless SSO</h3>
<p>The AzureSSO module is for brute-forcing Azure AD instances using the “autologon.microsoftazuread-sso.com” URL method. At the time, this method left no evidence of a password spraying attack in Office365 logs. This module is also verbose enough to generally provide user enumeration against Managed Office365 tenants.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster.py <config args> --plugin azuresso --domain tenantdomain.com
</code></pre></div></div>
<h3 id="azure-vault">Azure Vault</h3>
<p>The Azure Vault is a similar module to the MSOL and AzureSSO modules, simply with a different endpoint targeted. This again makes for a more evasive spray since logs aren’t always consistent.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster.py <config args> --plugin azuresso --domain tenantdomain.com
</code></pre></div></div>
<h3 id="msgraph">MSGraph</h3>
<p>This, again, is yet another MS spraying tool. The target domain is the same as the MSOL tool, with a different resource targeted (<code class="language-plaintext highlighter-rouge">graph.microsoft.com</code> vs <code class="language-plaintext highlighter-rouge">graph.windows.net</code>). Simply provides a bit more variety to your desired type of spraying.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster.py <config args> --plugin msgraph
</code></pre></div></div>
<h2 id="config-file-updates">Config File Updates</h2>
<p>When I initially created this script, there weren’t that many options to choose from. Now there are many. The config file, originally meant for FireProx connection details, has now been modified to support all flags of CredMaster for easy re-use. This was mainly out of a desire to keep certain config options static across campaigns.</p>
<p>To launch a spray with a filled out config file, the CLI is just this easy:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster.py --config config.json
</code></pre></div></div>
<p>Any config options specified in this file can be overridden with CLI inputs. This is useful when a static “operator template” is preferred, but engagement-specific details still need tweaking. The below command would take all inputs from the config file, but manually specify 8 threads.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster.py --config config.json --threads 8
</code></pre></div></div>
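<p>The precedence rule is simple: start from the JSON config, then let any explicitly supplied CLI value win. A rough sketch (the helper name is hypothetical, not CredMaster’s internals):</p>

```python
# Illustrative sketch of config/CLI merging: null values in the config
# and unset CLI flags are both skipped, so explicit CLI input wins.
import json

def merge_options(config_path, cli_args):
    """Load options from a JSON config, overridden by non-None CLI values."""
    with open(config_path) as f:
        options = json.load(f)
    for key, value in cli_args.items():
        if value is not None:  # only explicitly supplied CLI inputs override
            options[key] = value
    return options
```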
<p>Example Config file options</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
"plugin" : null,
"userfile" : null,
"passwordfile" : null,
"userpassfile" : null,
"useragentfile" : null,
"outfile" : null,
"threads" : null,
"region" : null,
"jitter" : null,
"jitter_min" : null,
"delay" : null,
"passwordsperdelay" : null,
"randomize" : false,
"header" : null,
"weekday_warrior" : null,
"color" : false,
"slack_webhook" : null,
"pushover_token" : null,
"pushover_user" : null,
"discord_webhook" : null,
"teams_webhook" : null,
"operator_id" : null,
"exclude_password" : false,
"access_key" : null,
"secret_access_key" : null,
"session_token" : null,
"profile_name" : null
}
</code></pre></div></div>
<h2 id="new-features">New Features</h2>
<p>Outside of modules, a few additional features have been added for evasion, user experience and general usability. Some brief summaries of the bigger ones are below:</p>
<ul>
<li>WeekDay Warrior: SOC evasion by only spraying during business hours and at common login times</li>
<li>Notification Systems: Notify yourself when you’ve got a successful password guess</li>
<li>Header Addition: Add a custom static header to each request for attribution if desired</li>
<li>FireProx Utilities: General FireProx utility functions for backend management, API creation, cleaning, etc for easier management on error</li>
</ul>
<p>As always, all of this information is stored in the Wiki as well ;)</p>
<h3 id="weekday-warrior">Weekday Warrior</h3>
<p>This was a technique designed out of a desire to spray against an active SOC that may detect your spray and issue password resets to those impacted. While spraying against a client I managed to guess a password correctly, but at a time when no one should be logging in (well after business hours). This resulted in the SOC seeing that anomalous login, resetting the password, and being on higher alert than they would have been otherwise.</p>
<p>The WeekDay Warrior feature is designed to help with that by doing three key things:</p>
<ol>
<li>Spraying between standard business hours automatically (~7-5)</li>
<li>Specifically spraying at times where a user would log in normally (Morning, Lunch, End of Day)</li>
<li>Only spray on normal business days (Monday-Friday)</li>
</ol>
<p>By doing these three things, a successfully guessed password is far more likely to go unnoticed by an active security team, and can then be used further by your team.</p>
<p>So how does this work in practice?</p>
<p>If you wanted to spray a company, you’d first need to figure out what timezone they’re in and what their UTC offset is. This way, you’re not spraying at <em>your own</em> M-F 9-5, but your client’s. The feature will then attempt one password per userlist at 8:00, another at 12:00, and one last one at 16:00 on each business day. The program then sleeps until the next business day and starts again. The command below sprays at 8, 12 and 16 in the timezone UTC-6 (US Central Time).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster.py <config args> --weekday_warrior -6
</code></pre></div></div>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster2/2.png" alt="screenshot of delay" /></p>
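<p>The scheduling described above can be sketched roughly like this (illustrative only, not CredMaster’s actual code; the helper name is mine, and the 8/12/16 slots come from the description above):</p>

```python
# Illustrative sketch: given the client's UTC offset, find the next
# 08:00/12:00/16:00 local-time slot that falls on a weekday (Mon-Fri).
from datetime import datetime, timedelta

SPRAY_HOURS = (8, 12, 16)  # morning, lunch, end of day

def next_spray_time(now_utc, utc_offset):
    """Return the next spray slot, expressed in the client's local time."""
    local = now_utc + timedelta(hours=utc_offset)
    probe = local
    while True:
        if probe.weekday() < 5:  # Monday (0) through Friday (4) only
            for hour in SPRAY_HOURS:
                slot = probe.replace(hour=hour, minute=0, second=0, microsecond=0)
                if slot > local:
                    return slot
        # no slot left today: roll over to the start of the next day
        probe = (probe + timedelta(days=1)).replace(hour=0, minute=0)
```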
<h3 id="notification-system">Notification System</h3>
<p>Since this spraying tool is meant to be started and then left untouched for a long period of time, many people would like to be alerted when they’ve successfully guessed a password. As of now, there are configurable alert systems for the Pushover API and for Discord/Slack/Teams webhooks. These settings can be added to the config file, and multiple notification systems can be used at once.</p>
<p>The notification system will send a notification for spray starts/stops and for valid credentials, sample below. The Operator ID will not be included if it isn’t configured. If the password should be “sanitized” from the notification, that can be configured with the “exclude_password” input flag.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster2/3.png" alt="screenshot of slack notification" /></p>
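<p>As a rough sketch of that behavior (field and helper names are mine, and the real payload format differs per service), a Slack-style notifier might look like:</p>

```python
# Illustrative sketch: build the alert text (honoring operator_id and
# exclude_password as described above), then POST it to a webhook.
import json
import urllib.request

def build_message(username, password, operator_id=None, exclude_password=False):
    shown = "<redacted>" if exclude_password else password
    msg = "[+] Valid credentials: {}:{}".format(username, shown)
    if operator_id:
        msg += " (operator: {})".format(operator_id)
    return msg

def notify_slack(webhook_url, message):
    # Slack incoming webhooks accept a JSON body with a "text" field
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": message}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)
```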
<h3 id="header-addition">Header Addition</h3>
<p>CredMaster was inherently designed to eliminate the possibility of attributing the authentication requests. This is effective; however, it is sometimes beneficial to a client to verify that you were, indeed, the one making those requests.</p>
<p>The <code class="language-plaintext highlighter-rouge">header</code> flag can add a custom static header to each of your requests, which can be relayed to your client at the end of an engagement if desired.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>credmaster.py <config args> --header "X-Force-Red: Was-Here"
</code></pre></div></div>
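<p>Turning that flag into a requests-style header dict is simple; a hedged sketch (the helper name is hypothetical):</p>

```python
# Illustrative sketch: split a single "Header-Name: value" string on the
# first colon and strip surrounding whitespace from both halves.
def parse_custom_header(raw):
    name, sep, value = raw.partition(":")
    if not sep:
        raise ValueError("expected 'Header-Name: value'")
    return {name.strip(): value.strip()}
```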
<h3 id="fireprox-utility-functions">FireProx Utility Functions</h3>
<p>CredMaster (obviously) uses FireProx API gateways significantly, but it doesn’t allow easy access to their management if an error occurs. There are 3 FireProx utility functions that may help the operator maintain a clean house. I typically use these commands if the spray was cancelled before completion and the script didn’t clean up the APIs properly.</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">credmaster.py --api_list</code></li>
</ul>
<p>This is essentially the same as the <code class="language-plaintext highlighter-rouge">list</code> command in the original FireProx. This will iterate over all regions and list out any APIs in use with detailed information.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster2/4.png" alt="screenshot" /></p>
<ul>
<li><code class="language-plaintext highlighter-rouge">credmaster.py --api_destroy {id}</code></li>
</ul>
<p>This is essentially the same as the <code class="language-plaintext highlighter-rouge">delete</code> command in the original FireProx. This will delete an API of the specified ID.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster2/5.png" alt="screenshot" /></p>
<ul>
<li><code class="language-plaintext highlighter-rouge">credmaster.py --clean</code></li>
</ul>
<p>This is slightly different from the original FireProx: it will iterate over every region and delete <em>every</em> FireProx API it finds, leaving non-FireProx APIs untouched. Note that this is irreversible. It’s best used when you have lots of APIs created but don’t want to delete them one-by-one.</p>
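<p>The selection step of such a sweep can be sketched like this (illustrative only; I’m assuming FireProx’s <code class="language-plaintext highlighter-rouge">fireprox_</code> API-name prefix as the marker, and in the real tool the ids would feed per-region boto3 <code class="language-plaintext highlighter-rouge">delete_rest_api</code> calls):</p>

```python
# Illustrative filter for a --clean style sweep: select only APIs whose
# name carries the assumed "fireprox_" prefix so other gateways survive.
def fireprox_api_ids(apis):
    """Return ids of APIs that look FireProx-created, leaving others alone."""
    return [a["id"] for a in apis if a.get("name", "").startswith("fireprox_")]
```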
<h3 id="other-stray-additions">Other Stray Additions</h3>
<p>A few other nice, but not groundbreaking, additions that were made:</p>
<ul>
<li>Ability to randomize the input list of users (<code class="language-plaintext highlighter-rouge">-r</code>)</li>
<li>Color output for success/failure upon guesses (<code class="language-plaintext highlighter-rouge">--color</code>)</li>
<li>Region selection to create APIs in (<code class="language-plaintext highlighter-rouge">--region</code>)</li>
<li>Automatic logging of successful guesses and valid users</li>
<li>Full rewrite for easier future development</li>
<li>TODO List</li>
</ul>
<h2 id="credits">Credits</h2>
<p>As said before, thank you to all those who directly or indirectly supported this project. This list of contributors can always be found on the CredMaster Readme page and within the wiki. The following two made multiple contributions, thank you to them.</p>
<ul>
<li><a href="https://twitter.com/ZephrFish">Andy</a></li>
<li><a href="https://infosec.exchange/@TheToddLuci0">Logan</a></li>
</ul>
<p>Always feel free to reach out, thanks for taking the time to read.</p>
<ul>
<li><a href="https://twitter.com/knavesec">@knavesec</a></li>
</ul>
<p><em>Office365 User Enumeration (published 2022-05-09): <a href="https://whynotsecurity.com/blog/o365fedenum">whynotsecurity.com/blog/o365fedenum</a></em></p>
<p>Office365 User Enumeration Through Correlated Response Analysis</p>
<ul>
<li><a href="#tldr">TLDR</a></li>
<li><a href="#Technique">Technique</a></li>
<li><a href="#Limits">Limits</a></li>
<li><a href="#Conclusion">Conclusion</a></li>
</ul>
<p>Github: <a href="https://github.com/knavesec/o365fedenum">github.com/knavesec/o365fedenum</a></p>
<p>WWHF Talk: (Will be updated when posted to youtube)</p>
<h2 id="tldr">TLDR</h2>
<p>Office365 user enumeration is back with a new technique for both Managed and Federated environments. In my opinion, this technique could be abstracted and generalized to find userenum in <em>any</em> website, but would be a decent amount of effort to do so.</p>
<p>Against a target Office365 instance, the indicators for a valid/invalid user appeared to be inconsistent, so they must be determined dynamically. The rough process for this technique is to make authentication requests for 5 invalid users (RNG usernames) and 1 valid user, then compare the responses to determine which pieces of the response indicate a valid/invalid user. This baseline is then used to enumerate unknown users.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/o365fedenum/baseline.png" alt="screenshot-indicators" /></p>
<p>Note: This technique was submitted to MSRC and was listed as a “won’t fix” issue.</p>
<h2 id="technique">Technique</h2>
<p>I was looking into different Office365 authentication methods to potentially implement into <a href="https://github.com/knavesec/CredMaster">CredMaster</a> and I came upon <a href="https://twitter.com/byt3bl33d3r">byt3bl33d3r’s</a> <a href="https://github.com/byt3bl33d3r/SprayingToolkit">SprayingToolkit</a>. The Office365 spraying technique using <code class="language-plaintext highlighter-rouge">autodiscover.xml</code> was great, and I was able to implement it as an o365 module. While doing so, I started looking at the response headers to check for any irregularities and saw there was a header called <code class="language-plaintext highlighter-rouge">X-AutoDiscovery-Error</code>. It contained a decent chunk of what appeared to be debugging information, so I wanted to see if it was prone to a user enumeration vulnerability.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/o365fedenum/initial-request.png" alt="xerror" /></p>
<p>When requesting an invalid user, it responded with:</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/o365fedenum/invalid-ex1.png" alt="invalid user" /></p>
<p>With a valid user, there were some very slight differences:</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/o365fedenum/valid-ex1.png" alt="valid user" /></p>
<p>It appeared the hunch was correct: indicators of <code class="language-plaintext highlighter-rouge">BlockStatus</code> as 1/10, and a literal Hit or Miss? I wrote up a quick script to check for these things and tried it on the next client, but their responses were entirely different.</p>
<p>For an invalid user:</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/o365fedenum/invalid-ex2.png" alt="invalid user2" /></p>
<p>For a valid user:</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/o365fedenum/valid-ex2.png" alt="valid user2" /></p>
<p>Significantly different! Not only are the <code class="language-plaintext highlighter-rouge">BlockStatus</code> indicators of 3/8 different, but the MissHrd and HitHrd don’t seem to correlate either. The good news is there are considerably more indicators, like the HTTP Response code, two other headers, and “Login Failed” strings compared to “STS Failure” strings. After trying this against a few more federated environments, it became clear that there really wasn’t a common thread of indicators, and those indicators would need to be generated dynamically.</p>
<p>I hadn’t seen any techniques like this before, so I wanted to make a catch-all script that would work for each unique environment. This would require dynamically understanding what the indicators of a “valid” vs “invalid” user are, then using those flags to make assessments for unknown users. The tool itself is designed to follow a process:</p>
<ol>
<li>Request 5 invalid users (RNG usernames)</li>
<li>Request 1 “known valid” user (supplied input)</li>
<li>Analyze the differences in the respective responses and generate a “baseline”</li>
<li>Test each unknown user against that baseline to see what their indicators reflect</li>
</ol>
<p>Congratulations, you’ve solved machine learning and it’s just a bunch of nested if/else statements!</p>
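<p>Those four steps can be sketched roughly like this (illustrative only, not the tool’s actual code; the field names are hypothetical). Responses are flattened to field-to-value dicts, any field that is constant across the invalid users but different for the known-valid user becomes an indicator, and unknowns are scored against those indicators:</p>

```python
# Illustrative baseline builder and classifier for the process above.
def build_baseline(invalid_responses, valid_response):
    """Keep fields constant across invalids but different for the valid user."""
    indicators = {}
    for field, valid_value in valid_response.items():
        invalid_values = {r.get(field) for r in invalid_responses}
        if len(invalid_values) == 1 and valid_value not in invalid_values:
            indicators[field] = (valid_value, invalid_values.pop())
    return indicators

def classify(response, indicators):
    """Score an unknown user's response against the baseline indicators."""
    hits = sum(response.get(f) == v for f, (v, _) in indicators.items())
    misses = sum(response.get(f) == i for f, (_, i) in indicators.items())
    if hits > misses:
        return "valid"
    if misses > hits:
        return "invalid"
    return "unknown"
```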
<p>Since this process is highly generic, in theory you could use a similar script and process to perform user enumeration against other endpoints with different constraints. The full process of this script is shown below:</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/o365fedenum/full-run.png" alt="full run" /></p>
<h2 id="limits">Limits</h2>
<ol>
<li>This does make an authentication attempt, so keep that in mind with respect to account lockouts</li>
<li>Since you are effectively password spraying against Office365, SmartLockout appears to kick in after a little while, which may skew results (unknown if this actually impacts the userenum, but I assume it does); FireProx helps</li>
<li>It does appear that some indicators seem to be inconsistent in their settings, and will flip randomly. This can result in false negatives/positives, but from my testing it’s very rare considering how many other indicators are tracked. Against targets that have multiple indicators this becomes less of an issue due to the aggregation of flags</li>
</ol>
<h2 id="conclusion">Conclusion</h2>
<p>Even though this script is meant as a proof-of-concept for Office365, this process could be abstracted to catch user enumeration vulnerabilities across any and all web applications. With a sufficient method of parsing, it would be great to make this fully generic, but I’ll leave that to someone who fancies solving the user enumeration problem.</p>
<p>It was a fantastic problem to implement a solution for; dynamic categorization of unknown request values was interesting. I was hoping to do this with some sort of ML solution (for extra buzzwords), but who has that kind of time?</p>
<p>Always feel free to reach out, thanks for taking the time to read.</p>
<ul>
<li><a href="https://twitter.com/knavesec">@knavesec</a></li>
</ul>
<p><em>Convert ldapdomaindump to Bloodhound (published 2021-10-25): <a href="https://whynotsecurity.com/blog/ldd2bh">whynotsecurity.com/blog/ldd2bh</a></em></p>
<p>Convert ldapdomaindump to Bloodhound using <a href="https://github.com/blurbdust/ldd2bh">ldd2bh</a></p>
<ul>
<li><a href="#tldr">TL;DR</a></li>
<li><a href="#disclaimers">Disclaimers</a></li>
<li><a href="#useful-scenarios">Useful scenarios</a></li>
<li><a href="#isnt-there-already-one">Isn’t there already one?</a></li>
</ul>
<h2 id="tldr">TL;DR</h2>
<p>I was on an internal engagement without credentials, but we got a successful relay to LDAP. We were able to dump information from LDAP but wanted to avoid changing or adding a new computer to the domain. I’m a little too used to the <a href="https://github.com/knavesec/Max">Max</a> workflow and wanted to convert the <a href="https://github.com/dirkjanm/ldapdomaindump">ldapdomaindump</a> data into Bloodhound data.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/ldd2bh/ldd.png" alt="grep whiterose" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/ldd2bh/tool.png" alt="conversion" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/ldd2bh/bloodhound.png" alt="bloodhound" /></p>
<h2 id="disclaimers">Disclaimers</h2>
<p>This tool is not fully fleshed out. It currently provides the bare minimum for getting usable objects within Bloodhound. Sometimes, the <code class="language-plaintext highlighter-rouge">users.json</code> file requires pretty printing before Bloodhound will upload the data. I have not figured that out and the engagement moved on from initial access so I ran out of time to polish this tool.</p>
<p>Another noteworthy item is that ldapdomaindump does not contain all of the ACLs that a Bloodhound collector would identify, so that data is left blank for all objects.</p>
<p>Currently, Local Administrator access to all computer objects is assumed for Domain and Enterprise Admins. It’s likely correct, but not a guarantee.</p>
<h2 id="useful-scenarios">Useful Scenarios</h2>
<p>This tool is very useful if you are on an internal and do not have credentials yet or prefer ldapdomaindump over a Bloodhound collector. It’s especially useful if you are accustomed to having Bloodhound data, like pretty graphs from Bloodhound, or really like <a href="https://github.com/knavesec/Max">Max’s</a> workflow.</p>
<h2 id="isnt-there-already-one">Isn’t there already one?</h2>
<p>Well, yes and no. There is one for the first release of Bloodhound, but it hasn’t been and won’t be updated according to <a href="https://github.com/dirkjanm/ldapdomaindump/issues/14">this Github issue</a>. So I set out to make my own, envisioning it as the successor to <a href="https://github.com/dirkjanm">@dirkjanm</a>/<a href="https://twitter.com/_dirkjan">@_dirkjan</a>’s already existing <a href="https://github.com/dirkjanm/ldapdomaindump/blob/9e65b48eab765bfc6f85e57f8a46ff728d74b4b1/ldapdomaindump/convert.py#L164">ldd2bloodhound</a> converter.</p>
<h2 id="shoutouts">Shoutouts</h2>
<p>Shoutout to <a href="https://twitter.com/your_b1gbroth3r">b1gbroth3r</a> for providing the ldapdomaindump data from his homelab.
Shoutout to <a href="https://twitter.com/knavesec">knavesec</a> for fixing bugs for me when I was being dumb.</p>
<p><em>A tool to find Windows registry files in a blob of data (published 2021-10-07): <a href="https://whynotsecurity.com/blog/needle">whynotsecurity.com/blog/needle</a></em></p>
<p>A tool to find Windows registry files in a blob of data: Needle</p>
<ul>
<li><a href="#tldr">TL;DR</a></li>
<li><a href="#useful-scenarios">Useful scenarios</a></li>
<li><a href="#how-does-the-tool-work">How does the tool work?</a>
<ul>
<li><a href="#sam">SAM</a></li>
<li><a href="#system">SYSTEM</a></li>
<li><a href="#security">SECURITY</a></li>
<li><a href="#cleaning-dirty-registry-files">Cleaning dirty registry files</a></li>
</ul>
</li>
<li><a href="#can-you-do-it-manually">Can you do it manually?</a></li>
<li><a href="#htb-bastion-spoilers">HTB Bastion Spoilers</a></li>
</ul>
<h2 id="tldr">TL;DR</h2>
<p>I found an open NFS share during an internal with a backup of a Domain Controller in it, but the file was too big to download. I wrote this tool to grab the SAM, SYSTEM, and SECURITY registry hives from the mounted share to compromise the live DC. I’ve found multiple instances of similar situations, as recently as a couple weeks ago. Additionally, I have heard secondhand it came in handy recently. Maybe you’ll find it handy too?
Find it <a href="https://github.com/blurbdust/needle.git">here</a>.</p>
<p>I actually wrote this tool some time ago but never got around to making a blog post about it. Here it is in action on a tar file.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/needle/tldr.png" alt="tldr" /></p>
<h2 id="useful-scenarios">Useful scenarios</h2>
<p>This tool is most useful if you find a file that looks like a backup of a Windows machine in a format like .tar, .vhd, or even .vmdk. The large file or blob of data is the haystack and the registry files are the needles in the haystack, hence the tool’s name, Needle.
Needle is also useful for incomplete forensics images or downloads where you still need to pull credentials out of the partial image.
There are also edge cases where <code class="language-plaintext highlighter-rouge">guestmount(1)</code> fails or tar fails to extract.</p>
<p>Is the file too large to exfil, but you can mount it locally using <code class="language-plaintext highlighter-rouge">mount.nfs</code> or <code class="language-plaintext highlighter-rouge">mount.cifs</code>? Needle has you covered.
Can Needle fix up on-disk registry hives that are marked as dirty and still get credentials even though secretsdump fails? Yes it can.</p>
<h2 id="how-does-the-tool-work">How does the tool work?</h2>
<p>First off, let’s focus on our goals: extract some form of credentials to demonstrate impact and potentially escalate privileges. Impacket’s secretsdump.py needs either SAM and SYSTEM or SECURITY and SYSTEM to find potentially useful credentials. SAM+SYSTEM combo would be the local password database for Windows and SECURITY+SYSTEM would return LSA secrets. If impacket is available, Needle will import it and automatically secretsdump the dumped registry files for you.</p>
<h3 id="sam">SAM</h3>
<p>The SAM file sounds like a good start.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/needle/sam.hexdump.png" alt="SAM" /></p>
<p>As you hopefully can see, a really good (and long) pattern to match off of would be <code class="language-plaintext highlighter-rouge">\\\x00S\x00y\x00s\x00t\x00e\x00m\x00R\x00o\x00o\x00t\x00\\\x00S\x00y\x00s\x00t\x00e\x00m\x003\x002\x00\\\x00C\x00o\x00n\x00f\x00i\x00g\x00\\\x00S\x00A\x00M
</code>.
I have been told that the longer the pattern, the faster the search, so this very long pattern should work out great; we need to keep pattern length in mind for the other registry files as well.</p>
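<p>The scan itself can be sketched as a chunked byte search that keeps an overlap between reads so a pattern straddling a chunk boundary is still found. This is a hedged sketch; Needle’s actual implementation may differ:</p>

```python
# Illustrative chunked pattern scan: stream the haystack, carry the last
# len(pattern)-1 bytes forward as overlap, and report absolute offsets.
def find_pattern_offsets(path, pattern, chunk_size=1 << 20):
    offsets, overlap, base = [], b"", 0  # base = file offset of current chunk
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            buf = overlap + chunk
            start = 0
            while True:
                i = buf.find(pattern, start)
                if i == -1:
                    break
                offsets.append(base - len(overlap) + i)
                start = i + 1
            # a full pattern can't fit entirely in the overlap, so no dupes
            overlap = buf[-(len(pattern) - 1):] if len(pattern) > 1 else b""
            base += len(chunk)
    return offsets
```

Each reported offset is only a candidate; you would still seek backwards from it looking for a plausible hive start before carving.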
<h3 id="system">SYSTEM</h3>
<p>Moving onto the SYSTEM registry hive since we need the <code class="language-plaintext highlighter-rouge">bootkey</code> out of it to decrypt the data stored in SAM.
<img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/needle/system.hexdump.png" alt="SYSTEM" /></p>
<p>The <code class="language-plaintext highlighter-rouge">SYSTEM</code> part of the file doesn’t seem too long, so after checking a couple SYSTEM registry files from different Windows systems in my homelab, I settled on adding some null bytes to increase the pattern length. After all, no one wants to sit watching a terminal waiting for results longer than required. So we can make our pattern <code class="language-plaintext highlighter-rouge">\x00\x00\x00S\x00Y\x00S\x00T\x00E\x00M\x00\x00\x00\x00\x00</code> to maximize length and effectiveness. At this point we could try searching for just SAM and SYSTEM to get the local password hashes, but it would also be really nice to try for the machine account hash or potential plaintext credentials stored in LSA secrets.</p>
<h3 id="security">SECURITY</h3>
<p>Now focusing on SECURITY, let’s find a pattern.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/needle/security.hexdump.png" alt="SECURITY" /></p>
<p>It looks like we can get another long one. I’m not sure why the first part is truncated, but every sample I found has the same start point, so I’ll roll with it. The pattern is <code class="language-plaintext highlighter-rouge">e\x00m\x00R\x00o\x00o\x00t\x00\\\x00S\x00y\x00s\x00t\x00e\x00m\x003\x002\x00\\\x00C\x00o\x00n\x00f\x00i\x00g\x00\\\x00S\x00E\x00C\x00U\x00R\x00I\x00T\x00Y</code>.</p>
<h3 id="cleaning-dirty-registry-files">Cleaning dirty registry files</h3>
<p>Take a look at the SYSTEM registry file shown above. There’s an extra <code class="language-plaintext highlighter-rouge">DIRT</code> marker and a large chunk of null bytes. Since most tools that parse registry files use offsets, this obviously breaks them. After debating for several nights about the best way to fix up dirty registry hives, I decided on just stripping out the extra data. I’m going to be honest: it’s been long enough that I don’t remember why removing the extra zeros was sufficient. However, if you come across a registry file that is marked as dirty, it will have those extra chunks of null bytes, so Needle will try to remedy this (if the <code class="language-plaintext highlighter-rouge">--clean</code> flag is specified) by removing them. Needle will then try to secretsdump as usual and output the results.</p>
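<p>As a rough sketch of that cleanup idea (my own illustration, not Needle’s actual code; the function name and the 4&nbsp;KB chunk size are assumptions), dropping all-null blocks might look like:</p>

```python
def strip_null_chunks(data: bytes, chunk_size: int = 4096) -> bytes:
    """Walk the blob in chunk_size steps and drop any block that is all nulls."""
    kept = []
    for off in range(0, len(data), chunk_size):
        block = data[off:off + chunk_size]
        if block.count(0) != len(block):  # keep anything with real data in it
            kept.append(block)
    return b"".join(kept)
```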
<h2 id="can-you-do-it-manually">Can you do it manually?</h2>
<p>Sure! That’s exactly how I started, but I quickly ran into lots of false positives and wanted an automated way to try every possible offset into a file. You’ll have to repeat the process for every instance of every pattern, which can (and did) get tedious.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">grep</span> <span class="nt">--byte-offset</span> <span class="nt">-oPa</span> <span class="nv">$PATTERN</span> /mnt/large.file.tar
<span class="nb">dd </span><span class="k">if</span><span class="o">=</span>/mnt/large.file.tar <span class="nv">of</span><span class="o">=</span>test_SAM.bin <span class="nv">skip</span><span class="o">=</span><span class="nv">$OFFSET</span> <span class="nv">count</span><span class="o">=</span>17000000 <span class="nv">iflag</span><span class="o">=</span>skip_bytes,count<span class="o">=</span>bytes
secretsdump.py LOCAL <span class="nt">-sam</span> test_SAM.bin <span class="nt">-system</span> test_SYSTEM.bin
</code></pre></div></div>
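<p>The manual loop above can be automated in a few lines of Python. This is a sketch of the idea, not Needle’s implementation; the 17&nbsp;MB carve size simply mirrors the <code class="language-plaintext highlighter-rouge">dd</code> count, and the file paths are placeholders:</p>

```python
import re

CARVE_SIZE = 17_000_000  # same generous carve size as the dd count above

def carve_candidates(blob, pattern, carve_size=CARVE_SIZE):
    """Yield (offset, carved_bytes) for every occurrence of pattern in blob."""
    for match in re.finditer(re.escape(pattern), blob):
        start = match.start()
        yield start, blob[start:start + carve_size]

# Usage sketch: write each candidate out, then feed it to secretsdump until
# one parses cleanly.
# blob = open("/mnt/large.file.tar", "rb").read()
# for offset, candidate in carve_candidates(blob, sam_pattern):
#     open(f"test_SAM_{offset}.bin", "wb").write(candidate)
```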
<h2 id="htb-bastion-spoilers">HTB Bastion Spoilers</h2>
<p>In order to test this out on some sample (not client) data, I used HTB’s Bastion. If you’re working through this retired machine, you should not read any further; come back when you’re ready for spoilers.</p>
<p>Bastion had an open SMB share with a backup of a Windows machine in the form of a VHD file. The notes in the same directory say not to download the image file, as it’d kill bandwidth for other users. <code class="language-plaintext highlighter-rouge">guestmount</code> is likely the intended path, but it occasionally fails to mount an image.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/needle/bastion.gif" alt="Bastion" /></p>
whynotsecurity
A tool to find Windows registry files in a blob of data: Needle

XSS to RCE
2021-09-28T00:00:03-05:00
https://whynotsecurity.com/blog/xss-to-rce
<p>XSS to RCE: Convert Target Websites into Payload Landing Pages</p>
<ul>
<li><a href="#tldr">TLDR</a></li>
<li><a href="#putting-it-together">Putting It Together</a></li>
<li><a href="#limits">Limits</a></li>
<li><a href="#defenses">Defenses</a></li>
</ul>
<h2 id="tldr">TLDR</h2>
<p>I recently came upon an interesting post about a threat actor’s tactic of converting a vulnerable website into a great payload landing page. That post can be found here: <a href="https://www.bleepingcomputer.com/news/security/phishing-campaign-uses-upscom-xss-vuln-to-distribute-malware/">https://www.bleepingcomputer.com/news/security/phishing-campaign-uses-upscom-xss-vuln-to-distribute-malware/</a>. With some variation, using an XSS vulnerability you can load an external JavaScript file, which creates a “new page” that you control for your pretext. The benefit of this tactic is that your landing page URL still points to your client’s domain, but it can load whatever HTML code you want, download a payload file, masquerade as the real site, etc.</p>
<p>The impact to XSS isn’t always something like session stealing, sometimes it’s a whole new vector.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/xss2rce/site2.png" alt="site2" /></p>
<h2 id="putting-it-together">Putting it together</h2>
<p>To start, you need to find an XSS vulnerability of some kind, one that you can trigger by directing a user to a specific URL. This can be a URL-parameter-based reflected XSS, or something like a stored XSS that can be triggered from a specific URL. Either way, you’ll need a URL of some kind to direct a user to click on. I’ve set up a basic company website that is vulnerable to XSS.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/xss2rce/site1.png" alt="site1" /></p>
<p>The XSS can be triggered fairly easily by design.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/xss2rce/xssexample.png" alt="xssexample" /></p>
<p>Since we want to host a new landing page, we will have to clone a site to use. I prefer SingleFile, a browser extension that simply clones a page down to a single HTML file you can use as your landing page.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/xss2rce/singlefile.png" alt="singlefile" /></p>
<p>In this case I’ll clone the website so I can edit the HTML to my liking; SingleFile downloads everything for you.</p>
<p>A quick conversion with the following bash one-liners will turn your HTML file into a usable JS file. Naturally, you may want to inject other JS into the session or auto-download your payload files. I personally like to provide a link that says “if your download doesn’t start, please click here” rather than auto-downloading (spam checkers flag auto-downloads). With creativity, you can do whatever you want.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sed 's/"/\\x22/g' SINGLEFILE_OUTPUT_FILE.html | sed -z 's/\n//g' | awk '{print "htmlstring = \"" $0 "\";"}' > JS_OUTPUT_FILE.js
echo -e "\n document.write(htmlstring); \n" >> JS_OUTPUT_FILE.js
</code></pre></div></div>
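<p>If you’d rather not chain sed and awk, the same conversion can be sketched in Python. The function name is mine, and it mirrors the pipeline above, including its limitations (for instance, neither version escapes backslashes in the source HTML):</p>

```python
def html_to_js(html: str) -> str:
    """Turn a SingleFile HTML capture into a JS payload that rewrites the page."""
    # Same transformation as the sed steps: escape quotes, strip newlines.
    escaped = html.replace('"', "\\x22").replace("\n", "")
    return f'htmlstring = "{escaped}";\n\ndocument.write(htmlstring);\n'

# Usage sketch (placeholder filenames, matching the bash above):
# with open("SINGLEFILE_OUTPUT_FILE.html") as f:
#     open("JS_OUTPUT_FILE.js", "w").write(html_to_js(f.read()))
```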
<p>In this case, I can simply load the cloned HTML code into my target website via the XSS vector so it looks like the real thing. I’ve edited the landing page to match a file download pretext, and provided a “click here” button that links to my payload. Since the XSS is URL based, I can put that in an email and direct users to it.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/xss2rce/site2.png" alt="site2" /></p>
<p>Pretty sleek landing page, loaded on the client’s web domain. Since the URL parameters can look sketchy, I’ll sometimes include fake parameters like <code class="language-plaintext highlighter-rouge">download=OnboardingDocument.docx&cookie=<snip></code> to obscure the actual XSS payload.</p>
<h2 id="limits">Limits</h2>
<p>There are a few weird limitations that I’ve found while using this technique on engagements:</p>
<ul>
<li>Content Security Policy headers: These can block loading external JS files. That can sometimes be bypassed by including your entire JS payload within the raw XSS parameter, though doing so is significantly more difficult.</li>
<li>Stored XSS: I’ve never tried it with something like this, but I assume it’s possible to still execute as long as you can direct a user to your XSS landing via the URL.</li>
<li>Weird HTML tricks: Depending on where the XSS is, the page may be loaded in a contained section of HTML (like a div/table/etc), which simply won’t look right. You can fix this by closing the original site’s HTML and commenting out what follows, but it’s a fairly hacky fix and takes some HTML magic.</li>
</ul>
<h2 id="defenses">Defenses</h2>
<p>What can a defender do to mitigate this issue?</p>
<p>Well to state the obvious, don’t allow XSS on your webapp. Super simple, right?</p>
<p>An alternate way of mitigating this style of attack is with effective security header settings, specifically the Content-Security-Policy. This policy effectively determines where valid content can be loaded from, and in this case we’re attempting to load JavaScript from a secondary, malicious website. Applying these headers can help in a situation like this, but they aren’t perfect. In theory an attacker could always include their entire desired payload in the XSS string itself, but that too has its limitations. CSP headers are a quick and easy way to take a chunk of this attack out of play. Make sure to apply them on your subdomains as well, redirecting a phish to sub.domain.com still looks pretty legit ;)</p>
<p>If anyone has any better mitigations or techniques, let me know and I’ll update the blog! Always feel free to reach out, thanks for taking the time to read.</p>
<ul>
<li><a href="https://twitter.com/knavesec">@knavesec</a></li>
</ul>
whynotsecurity
XSS to RCE: Convert Target Websites into Payload Landing Pages

EyeWitnessTheFitness
2021-08-08T00:00:03-05:00
https://whynotsecurity.com/blog/eyewitnessthefitness
<p>EyeWitnessTheFitness</p>
<p><a href="https://github.com/knavesec/EyeWitnessTheFitness">github.com/knavesec/EyeWitnessTheFitness</a></p>
<h2 id="tldr">TLDR</h2>
<p>External scan prevention systems make recon and enum difficult, one of the best ways to bypass that is to distribute your operations to different IP addresses. <a href="https://github.com/ustayready/fireprox">Fireprox</a> (shoutout <a href="https://twitter.com/ustayready">@ustayready</a>) makes that easy by rotating the IP on every request, but for a tool like Eyewitness, you’d need to generate a new Fireprox API for every url.</p>
<p>Instead of doing that, use this tool to generate a single Fireprox API that encompasses all your needs, then outputs to a file compatible for direct use with Eyewitness. Easy distributed scan prevention bypass for external recon.</p>
<h2 id="theory">Theory</h2>
<p>On a red team engagement recently, we were doing some limited enumeration of client network URLs, but after a certain number of requests with EyeWitness, they would all start timing out and fail to load. When investigating, we were able to load pages manually, but any type of scan would get blocked after a set amount of time. We needed IP rotation with the functionality of EyeWitness, enter: TheFitness.</p>
<p>I had already used FireProx generation extensively in my CredMaster tool, but I didn’t want to generate a unique API for every host I wanted to witness. For 100 hosts, generating 100 APIs is just silly and inefficient, so I started to dig into how the API was generated at the template level. In each template, you can specify granular details about what you want your end URI to do. The standard FireProx template just maps anything after the initial <code class="language-plaintext highlighter-rouge">/</code> to the end website desired for a straight pass-through. Instead of doing that, I aliased the first URI segment to the target domain:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/www.google.com/ -> https://www.google.com/
/amazon.com/ -> https://amazon.com/
...
</code></pre></div></div>
<p>While I haven’t figured out a way to make this truly generic and dynamic, it does provide the ability to create a single API and pass through to multiple hosts. Taking in a list of target hosts, like a standard EyeWitness target file, I could generate a new template that encompassed everything necessary for the enum scan. Thus, EyeWitnessTheFitness was born.</p>
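<p>The host-aliasing idea can be sketched in a few lines of Python. This is illustrative only, not the tool’s actual code; the helper name and API id are made up, and the URL shape follows FireProx’s usual <code class="language-plaintext highlighter-rouge">execute-api</code> format:</p>

```python
from urllib.parse import urlparse

def build_proxy_urls(targets, api_id, region="us-east-1"):
    """Map each target URL onto one shared API whose first path segment
    aliases the target host, mirroring the mapping shown above."""
    base = f"https://{api_id}.execute-api.{region}.amazonaws.com/fireprox"
    return {t: f"{base}/{urlparse(t).netloc}/" for t in targets}
```

The resulting values can then be written out one per line as an EyeWitness-compatible target file.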
<h2 id="usage">Usage</h2>
<p>Since this method involves using AWS to create APIs, you’ll need access keys. Instructions to get those can be found here: <a href="https://bond-o.medium.com/aws-pass-through-proxy-84f1f7fa4b4b">https://bond-o.medium.com/aws-pass-through-proxy-84f1f7fa4b4b</a>. I wouldn’t be worried about cost, it’s something like a few pennies USD for a few million requests.</p>
<p>Once you have your keys, you can either provide them on the CLI or put them in the <code class="language-plaintext highlighter-rouge">aws.config.template</code> file for easier use. Simply provide a formatted EyeWitness target file (with http/s already appended!), an output file, and a region, and you’re good to go!</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/wtf/ewtf-run1.png" alt="ewtf-run1" /></p>
<p>Using it with Eyewitness, it will make requests to each of the endpoints desired.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/wtf/eyewitness-run.png" alt="eyewitness-run" /></p>
<p>Then you get your output view.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/wtf/eyewitness-results1.png" alt="eyewitness-results1" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/wtf/eyewitness-results2.png" alt="eyewitness-results2" /></p>
<p>Simple scan prevention bypass.</p>
<p>You can also use this to list and delete those APIs. When listing, the original FireProx tool won’t show these APIs due to a filtering issue, so I’ve included that functionality here.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/wtf/ewtf-run-delete.png" alt="ewtf-run-delete" /></p>
<p>Feel free to reach out with any questions, I’m always willing to chat.</p>
<ul>
<li><a href="https://twitter.com/knavesec">@knavesec</a></li>
</ul>
whynotsecurity
EyeWitnessTheFitness

External Email Warning Bypass
2021-04-22T00:00:03-05:00
https://whynotsecurity.com/blog/external-email-warning-bypass
<p>External Email Warning Bypass for Office365 & Outlook</p>
<p><a href="https://gist.github.com/knavesec/570ddd0cd7e00d02e87121576a677b59">POC</a></p>
<ul>
<li><a href="#tldr">TLDR</a></li>
<li><a href="#summary">Summary</a></li>
<li><a href="#impact">Impact</a></li>
<li><a href="#poc">POC</a></li>
<li><a href="#limitations">Limitations</a></li>
<li><a href="#remediation">Remediation</a></li>
<li><a href="#disclosure-timeline">Disclosure Timeline</a></li>
</ul>
<h2 id="tldr">TLDR</h2>
<p>Company inboxes often receive phishing emails from malicious actors using domains similar to the company’s. To combat this, administrators set rules to label these messages as “external email” and tend to add some sort of warning to make users stop before clicking. One of the most common implementations prepends HTML code to the beginning of the external email, as shown below.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/poc-client1.png" alt="poc-client1" /></p>
<p>This provides the user with a big indicator that the email is not from the internal domain and should be read with caution. However, with a little bit of HTML tampering on the attacker’s side, we can force the receiving end to not display this warning, as shown below.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/poc-client2.png" alt="poc-client2" /></p>
<p>My implementation of the POC works for the Outlook desktop client as well as the Outlook WebApp. See the “POC” Section for steps, and pay attention to the limitations.</p>
<h2 id="summary">Summary</h2>
<p>On a client engagement, we had a scenario that was pretty unorthodox for a penetration test. For this client we had a long term contract, and they specifically wanted us to use their testing machines, so on the first day we were set up with a corporate laptop, internal company email, and a Kali VM. We started on the external test, and quickly managed to gain access to a few Office 365 user accounts. We weren’t able to use this to gain code execution, so we downloaded the Global Address List to use in a phishing campaign. While we were browsing email inboxes, we noticed that every non-internal email had a large “EXTERNAL EMAIL” marker set on top of the email.</p>
<p>We began setting up our phishing C2 and began sending test emails to our internal account to test the format, and we kept seeing the “EXTERNAL EMAIL” marker on our emails. We decided to see if there was any way to get rid of this.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/poc-client1.png" alt="poc-client1" /></p>
<p>We inspected the source of the received email and found that it was adding a few lines of code into our email:</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/warning-html.png" alt="warning-html" /></p>
<p>Essentially the filter just injected a small table and filled it with color and the warning text. Initially we tried commenting the section out or adding anything above the message that would potentially eliminate the warning, but the filter appeared to take anything in the <code class="language-plaintext highlighter-rouge">&lt;body&gt;</code> tag and place it below the warning. This left us with the <code class="language-plaintext highlighter-rouge">&lt;head&gt;</code> tag to manipulate.</p>
<p>There are a few tags that you can put within the <code class="language-plaintext highlighter-rouge">&lt;head&gt;</code> section: title and style are the main ones, but you can put nearly any HTML tag in there and it will operate normally. We tried adding comments there as well, but this ended up as malformed HTML. The <code class="language-plaintext highlighter-rouge">&lt;title&gt;</code> tag didn’t change anything either. We landed on CSS styling to try and obfuscate this warning.</p>
<p>The way CSS styling works, generic type declarations go in the header, but any per-tag styling in the body overrides the generic styling. Since the tags they were injecting already had a color specified, we wouldn’t be able to change it to white to make it invisible. Similarly, we couldn’t make the font size 0. The visibility:hidden property also didn’t seem to work in Outlook. We landed on the display:none property, which we could apply to these specific elements.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/bypass-initial-html.png" alt="bypass-initial" /></p>
<p>Adding these tags forced the external email warning to go away!</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/poc-client2.png" alt="poc-client2" /></p>
<p>That’s great, but where do we go from here? One thing we did find out was that even though the text was not visible, the EXTERNAL EMAIL warning was still clearly displayed in the email preview.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/limitiation1.png" alt="limitation1" /></p>
<p>This we were not able to get rid of. Due to a limitation in Outlook, CSS pseudo-elements like ::before cannot be applied, so there does not appear to be any way to introduce different text before the warning to fool the preview. Unfortunately, that is a limitation of this obfuscation technique. That said, the impact is very small; a typical user would not notice, especially if they are used to seeing a larger, more pronounced warning.</p>
<p>So ultimately we achieved our goal: a little bit of HTML/CSS in our email got rid of the external email warning. Where do we go from here? Surely other companies structure this differently, use different tags, etc., so how can I make a generic “catch-all” that will obfuscate ANY additional HTML warning a company might introduce? The answer was simple: whitelist only the things I, as an attacker, want visible.</p>
<p>Since I had control over the CSS styling of the whole page, I had the power to set the “display” properties for everything. A method that worked great for me was setting the entire <code class="language-plaintext highlighter-rouge">&lt;body&gt;</code> tag to display:none; this made everything blank, including anything injected by a filter. From there, I assigned a unique class to all pieces of HTML that I injected and gave that class a display:block styling. This allowed me to “whitelist” any HTML I wanted by assigning it my class, while everything else in the email stayed invisible. Code shown below.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/bypass-email.png" alt="bypass-email" /></p>
<p>This was the catch-all that I needed. After applying these changes, we were able to get 20 out of 250 users to not only click on the link, but download and execute a payload from an external site. Only one user reported it.</p>
<p>Ultimately, this is a cool way to evade warning labels put in by system administrators. Even though there are ways to remediate this, including it doesn’t hurt your phish. There is no way it would make a phish more apparent.</p>
<h2 id="impact">Impact</h2>
<p>This external warning is custom for each implementation, but in general anything can be bypassed. To demonstrate impact, I searched Google for the top 5 results on how to configure this warning and used their templates. At the end of the day, the attached POC was able to bypass each one. At the time of MSRC submission, the links were:</p>
<ul>
<li>https://answers.microsoft.com/en-us/msoffice/forum/all/mail-flow-external-message-warning-help/38e75efe-5945-451a-bcd0-f80d8d685a23</li>
<li>https://community.spiceworks.com/how_to/164036-set-an-external-email-header-on-inbound-emails-office-365</li>
<li>https://www.securit360.com/blog/configure-warning-messages-office-365-emails-external-senders/</li>
<li>https://supertekboy.com/2020/02/17/add-external-sender-disclaimer-in-office-365/</li>
<li>https://gcits.com/knowledge-base/warn-users-external-email-arrives-display-name-someone-organisation/</li>
</ul>
<p>The way HTML styling works, this can be applied to any bypass. The <code class="language-plaintext highlighter-rouge">style</code> tag has the ability to override any HTML on the page, because it has the highest precedence.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/html-style.png" alt="html-style" /></p>
<p>This vulnerability is applicable to both the Outlook desktop client as well as the Outlook web application (outlook.office.com).</p>
<h4 id="outlookofficecom">Outlook.office.com</h4>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/poc-web1.png" alt="poc-web1" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/poc-web2.png" alt="poc-web2" /></p>
<h4 id="outlook-client">Outlook client</h4>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/poc-client1.png" alt="poc-client1" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/poc-client2.png" alt="poc-client2" /></p>
<h2 id="poc">POC</h2>
<p>Full POC <a href="https://gist.github.com/knavesec/570ddd0cd7e00d02e87121576a677b59">here</a>.</p>
<p>Add the following code to the <code class="language-plaintext highlighter-rouge">&lt;style&gt;</code> section of your phish, replacing “CLASSNAME” with whatever you want the class name to be.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>body {
  display: none;
}

.CLASSNAME {
  display: block;
}
</code></pre></div></div>
<p>Then for each part of the HTML in the <code class="language-plaintext highlighter-rouge">&lt;body&gt;</code> section, add <code class="language-plaintext highlighter-rouge">class="CLASSNAME"</code>. Anything you add this to will be visible in the phish; anything else will not be displayed. See the screenshot above for an example. This is a very simple example, and adding more tags will bypass more things. See the full POC for a generic catch-all.</p>
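<p>To see the whitelisting mechanics end to end, here’s a hypothetical generator (my own sketch, not the published POC; the helper name and placeholder class name are made up) that hides the whole body and re-enables only attacker-supplied fragments:</p>

```python
HIDE_ALL_CSS = """
body { display: none; }
.CLASSNAME { display: block; }
"""

def build_phish_html(visible_fragments):
    """Hide the entire body, then whitelist only our own content by class."""
    body = "\n".join(
        f'<div class="CLASSNAME">{frag}</div>' for frag in visible_fragments
    )
    return f"<html><head><style>{HIDE_ALL_CSS}</style></head><body>{body}</body></html>"
```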
<h2 id="limitations">Limitations</h2>
<p>As stated before, adding this to your phish will not hurt its performance (UPDATE: unless they detect on this behavior, see below); however, there are some things to take note of.</p>
<ol>
<li>Still displays warning message in preview</li>
</ol>
<p>As noted above, the warning message is still shown in the email preview because the text is still the first thing on the page. This, however, is likely overlooked especially if the actual email doesn’t reflect the same warning.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/phishing/limitiation1.png" alt="limitation1" /></p>
<ol>
<li>Implementation Specific</li>
</ol>
<p>The HTML warning is configurable by the sysadmin in charge, so configurations tend to differ. I’ve tested the top 5 implementations on Google and it works, but it’s still <em>possible</em> that a warning could be configured in a way that prevents this. The POC should be a catch-all, but it’s hard to test every possible configuration.</p>
<h2 id="remediation">Remediation</h2>
<p>There is only one remediation technique that can help prevent this attack (only one that I’ve found at least).</p>
<p>Outlook has a method of “classifying” emails, and setting appropriate labels for them accordingly. This label can be made into a warning, and it is not displayed within the HTML and cannot therefore be manipulated. A screenshot of the classification label is shown below.</p>
<p>A link to an applicable blog can be found <a href="https://techcommunity.microsoft.com/t5/exchange-team-blog/native-external-sender-callouts-on-email-in-outlook/ba-p/2250098">here</a>.</p>
<p>UPDATE: Additionally, there is one company that has provided detections for this kind of phishing email, Inky. A link to some of their marketing material for this issue can be found here: <a href="https://www.inky.com/understanding-phishing-disappearing-banners">https://www.inky.com/understanding-phishing-disappearing-banners</a>. Note that I am in no way associated with this company, nor can I vouch for their products in an official capacity, as I haven’t used them myself. I’m just happy they’ve shown an effort toward remediating this problem.</p>
<h2 id="disclosure-timeline">Disclosure Timeline</h2>
<ol>
<li>December, 2019 - Discovery</li>
<li>May 7, 2020 - Disclosure to MSRC</li>
<li>June 1, 2020 - MSRC “Won’t Fix”</li>
<li>April 21, 2021 - Public disclosure on <a href="https://twitter.com/ldionmarcil/status/1384987686113583107">Twitter</a></li>
<li>April 21, 2021 - My disclosure on <a href="https://twitter.com/knavesec/status/1385266648668536835">Twitter</a></li>
</ol>
<p>Ultimately after discovery, research and “won’t fix” from MSRC, I decided not to disclose publicly. I believed that even with potential remediation techniques, the ability to obscure warning signs would severely impact the community since phishing is the biggest cause of compromise. I only chose to post this info after it had already been publicized online.</p>
<p>Please apply remediation advice, keep your users safe. For all you red teamers, happy hunting.</p>
<ul>
<li><a href="https://twitter.com/knavesec">knavesec</a></li>
</ul>
whynotsecurity
External Email Warning Bypass for Office365 & Outlook

CredMaster
2021-03-18T00:00:03-05:00
https://whynotsecurity.com/blog/credmaster
<p>CredMaster: Easy & Anonymous Password Spraying</p>
<p><a href="https://github.com/knavesec/CredMaster">github.com/knavesec/CredMaster</a></p>
<ul>
<li><a href="#tldr">TLDR</a></li>
<li><a href="#setup">Setup</a></li>
<li><a href="#background">Background</a></li>
<li><a href="#throttle-evasion">Throttle Evasion</a></li>
<li><a href="#staying-anonymous">Staying Anonymous</a></li>
<li><a href="#plugins">Plugins</a></li>
<li><a href="#detections">Detections</a></li>
</ul>
<h2 id="tldr">TLDR</h2>
<p>This tool was designed during a red team engagement while trying to beat a pesky password spray throttle limitation. It now serves as an example of what an adept attacker can build.</p>
<p>CredMaster provides a method of running anonymous password sprays against endpoints in a simple, easy to use tool. The FireProx tool provides the rotating request IP, while the base of CredMaster spoofs all other identifying information.</p>
<p>Current plugins include:</p>
<ul>
<li>Office365</li>
<li>MSOL (Microsoft Online)</li>
<li>Okta</li>
<li>Fortinet VPN</li>
<li>HTTP Basic/Digest/NTLM methods</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster-screenshots/credmaster-default.png" alt="general" /></p>
<h2 id="setup">Setup</h2>
<p>For some quick setup and cool features.</p>
<p>To use the tool, you’ll have to get an AWS access key and secret access key. A great walkthrough can be found here: <a href="https://bond-o.medium.com/aws-pass-through-proxy-84f1f7fa4b4b">https://bond-o.medium.com/aws-pass-through-proxy-84f1f7fa4b4b</a>. If you’re concerned about AWS costs, I’ve been using it extensively with zero costs associated. I believe the metric is something like a few pennies per million requests.</p>
<p>Now, gather a list of users and passwords, and you’re ready to spray. The simplest way to spray is shown in the example command:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python3 credmaster.py --plugin <pluginname> -u userfile -p passwordfile -a useragentfile --access_key <key> --secret_access_key <key2>
</code></pre></div></div>
<p>That’s it. All you need. But just because that’s all you need doesn’t mean there isn’t more you want! A few cool options:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">-o</code> File output</li>
<li><code class="language-plaintext highlighter-rouge">-d/--delay</code> Delay between passwords, example: try a password every X minutes</li>
<li><code class="language-plaintext highlighter-rouge">--passwordsperdelay</code> Number of passwords to try per delay cycle, example: try X passwords per Y minutes</li>
<li>Jitter min & max limits</li>
<li><code class="language-plaintext highlighter-rouge">--config</code> A config file to store AWS data, don’t hardcode stuff if not necessary</li>
<li><code class="language-plaintext highlighter-rouge">--clean</code> Remove all APIs from AWS, helpful if things aren’t cleaned up properly</li>
</ul>
<p>I like to set it up to run over a long list of passwords, with a delay set to reset lockout counters, but it’s whatever works for you.</p>
<h2 id="background">Background</h2>
<p>Normal password spraying tools do exactly what they’re designed to do: make an authentication request in order to test the validity of credentials. Unfortunately, this request is made from your local machine, which leaks the IP address. That IP can be blocked, blacklisted, traced, etc.</p>
<p>The next iteration of the game was to spin up proxies to route your traffic through, masking your IP address. This was automated by Mike Felch (<a href="https://twitter.com/ustayready">@ustayready</a>) in his <a href="https://github.com/ustayready/CredKing">CredKing</a> tool, which dynamically creates AWS Lambdas in the cloud to submit requests on your behalf. These Lambdas maintain the same IP address on each request, but the proxy aspect helps keep your information safe. With enough Lambdas, you could spread your authentication attempts across a high number of IP addresses, which could help beat throttle rate-limiting. The tool automatically generated Lambdas, then used pre-designed “plugins” to perform the authentication.</p>
<p>Felch’s next password spraying game-changer was the introduction of the <a href="https://github.com/ustayready/fireprox">FireProx</a> tool. This would spin up AWS APIs as a HTTP pass-though proxy. Any request submitted to the API is made to the endpoint specified, this obscures your local machine from the target system. The API rotates your IP address with every request in order to beat IP-based throttle detections and anonymize your machine.</p>
<p>CredMaster is an amalgamation of the two: the plugin-based CredKing suite used to dynamically create FireProx APIs for spraying. CredMaster also does a few other things on the back end to spoof headers, stay anonymous and beat throttling.</p>
<h2 id="throttle-evasion">Throttle Evasion</h2>
<p>Now, I certainly can’t claim that this will completely evade password spray rate-limiting. What I can claim is that it provides some of the best throttle evasion available to date.</p>
<p>Throttle detection <em>does</em> work on a case-by-case basis; a target’s on-prem systems are likely to have less sophisticated rate-limiting capabilities. Larger authentication providers like Microsoft & Okta do a good job of detecting and throttling password spray attempts, which makes life more difficult for us!</p>
<p>Microsoft employs the Azure Smart Lockout defense system. If a password spray is detected, it will show every account as “locked” regardless of whether the password is valid. This detection system is proprietary, which makes analysis more difficult. According to dafthack’s MSOLSpray tool, use with FireProx appeared to bypass Smart Lockout during testing. My own testing has shown the same.</p>
<p>Okta appears to be a tougher nut to crack. Their detection system <em>appears</em> to be based off some variation of <code class="language-plaintext highlighter-rouge">total number of auth attempts / time</code>, regardless of who or what IP makes the request. Through use of any tool, I’ve not yet been able to sufficiently beat Okta’s throttling. I will note that a single thread and a relatively high jitter has allowed the spray to last a bit longer, though it does end in throttling after a while. Typically, I spray with one thread and high jitter, filter out the throttled attempts, then try again later with the remaining accounts to get full coverage.</p>
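<p>That “filter out the throttled attempts, then retry later” workflow is easy to script around any spray output. A minimal sketch (the result labels are invented for the example; CredMaster’s actual output format differs):</p>

```python
def partition_results(results):
    """Split spray results so throttled accounts can be requeued later.

    `results` maps username -> "success" | "throttled" | "failed".
    """
    valid   = [u for u, r in results.items() if r == "success"]
    requeue = [u for u, r in results.items() if r == "throttled"]
    done    = [u for u, r in results.items() if r == "failed"]
    return valid, requeue, done

valid, requeue, done = partition_results(
    {"alice": "success", "bob": "throttled", "carol": "failed"}
)
```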
<p>Further research is necessary for all plugins and methods. Each plugin has a section for “throttle notes” on the Wiki.</p>
<h2 id="staying-anonymous">Staying Anonymous</h2>
<p>The original FireProx does a great job of doing what it was meant to do: rotating the IP address of every authentication request to mask the operator’s IP. The AWS API makes this easy, but your IP address can be leaked through the “X-Forwarded-For” header. This, of course, was taken into account by the creator, but is left up to the spraying tool developer to spoof the headers.</p>
<p>Without using either FireProx or CredMaster, standard password sprays leak some sensitive data. A comparison between two consecutive requests is shown below.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster-screenshots/standard-1.png" alt="standard1" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster-screenshots/standard-2.png" alt="standard2" /></p>
<p>As you can see, your IP address is leaked (duh) as well as your browser useragent. I’ll note that some tools do provide the ability to spoof useragents.</p>
<p>Using FireProx to rotate our IP addresses takes care of the first problem, but introduces a few other anonymity issues. We can start by creating a FireProx API gateway, and launching a quick spray using a random off-the-shelf password spraying tool.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster-screenshots/fireprox-list.png" alt="fireproxlist" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster-screenshots/fireprox-cli.png" alt="fireproxcli" /></p>
<p>Now let’s compare the requests from the gateway.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster-screenshots/fireprox-1.png" alt="fireprox1" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster-screenshots/fireprox-2.png" alt="fireprox2" /></p>
<p>We have IP rotation, huzzah! But not without a catch: a few new issues are introduced.</p>
<ul>
<li>Leaked IP address in X-Forwarded-For header</li>
<li>Repeated useragent</li>
<li>API gateway ID leaked in x-amzn-apigateway-id header</li>
<li>Trace ID leaked in X-Amzn-Trace-Id (unsure what this is)</li>
</ul>
<p>Like I said before, FireProx does have the ability to spoof the X-Forwarded-For header, but that must be done on a per-tool basis. The same goes for useragents. The important thing here is the leaked API gateway ID, since it is tied to your FireProx instance and therefore your AWS account.</p>
<p>Let’s get rid of those!</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster-screenshots/credmaster-cli.png" alt="credmastercli" /></p>
<p>CredMaster automatically generates AWS API Gateways using a modified FireProx tool, then launches a spray against the input users. Let’s dig into the requests.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster-screenshots/credmaster-1.png" alt="credmaster1" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/credmaster-screenshots/credmaster-2.png" alt="credmaster2" /></p>
<p>Now we have it: a rotating IP address, a randomized X-Forwarded-For IP, randomized useragents and spoofed Amazon headers. Anonymity.</p>
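<p>Generating those randomized values is straightforward. A rough sketch of the idea (the useragent list is a two-entry sample, and the exact header names beyond the FireProx X-My- convention are simplified for the example):</p>

```python
import random

# a couple of sample browser strings; a real tool ships a much larger list
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15",
]

def spoofed_headers():
    """Per-request headers: a random fake X-Forwarded-For IP plus a
    random useragent. The X-My- prefixed variant is the convention the
    FireProx gateway maps onto the real X-Forwarded-For header."""
    fake_ip = ".".join(str(random.randint(1, 254)) for _ in range(4))
    return {
        "X-My-X-Forwarded-For": fake_ip,
        "User-Agent": random.choice(USER_AGENTS),
    }
```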
<h2 id="plugins">Plugins</h2>
<p>Currently, there are 5 plugins: Office365, MSOL, Okta, FortinetVPN and HTTP methods. The Office365, Okta and MSOL modules have been heavily tested and are based off other open source tools. The FortinetVPN and HTTP method modules, however, have not been tested (I don’t have test endpoints).</p>
<p>I tried to make future development easy, providing a template and instructions to contribute. More plugins == more fun.</p>
<h2 id="detections">Detections</h2>
<p>Since CredMaster automatically spoofs information, the best way to detect is based off the headers being present in the first place. Anywhere dealing with authentication shouldn’t allow authentication attempts from AWS APIs, especially with these headers. A few potential methods of detection are:</p>
<ul>
<li>The presence of “X-My-“ headers (weak detection, could lead to false positives)</li>
<li>The presence of “x-amzn-apigateway-id” headers (stronger detection, only API gateways have this header)</li>
<li>Trend analysis, a significant influx of requests with the identifiers shown above</li>
</ul>
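<p>On the defensive side, a log filter built on those indicators might look something like this (the log entry format is invented for the example):</p>

```python
def flag_gateway_requests(request_log):
    """Flag auth requests carrying AWS API Gateway fingerprints.

    Each log entry is {"src": ip, "headers": {lowercased-name: value}}.
    The header names come straight from the detection list above.
    """
    flagged = []
    for req in request_log:
        headers = req["headers"]
        if "x-amzn-apigateway-id" in headers:
            # only API gateways carry this header: strong indicator
            flagged.append((req["src"], "api gateway header"))
        elif any(name.startswith("x-my-") for name in headers):
            # weaker indicator, may false-positive
            flagged.append((req["src"], "x-my- spoof header"))
    return flagged
```

<p>Trend analysis on the flagged sources would then catch the influx pattern described above.</p>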
<p>I will note, I’m not great with detection and mitigation techniques. Hopefully someone can find better methods. If you do find better techniques, let me know and I’d be happy to update this blog, give a shoutout, etc.</p>
<p>Feel free to reach out with any questions, I’m always willing to chat.</p>
<p>- <a href="https://twitter.com/knavesec">@knavesec</a></p>whynotsecurityCredMaster: Easy & Anonymous Password SprayingMax: BloodHound Domain Password Audit Tool2021-02-01T23:00:03-06:002021-02-01T23:00:03-06:00https://whynotsecurity.com/blog/max3<h2 id="tldr">TLDR</h2>
<p>Github: <a href="https://github.com/knavesec/Max">github.com/knavesec/Max</a></p>
<p>The introduction of the Domain Password Audit Tool (DPAT) a few years ago was a great way to have a graphical display of password cracking audits (<a href="https://github.com/clr2of8/DPAT">github.com/clr2of8/DPAT</a>). The capability to export domain groups to check which members had been cracked was great, but since we already ingested domain group information with BloodHound, it would be far more valuable to just map those users to the database information.</p>
<p>The goal of the DPAT module was to combine the information and pathfinding of BloodHound with password analytics, all exportable to HTML, ASCII art and CSV formats. This module searches for:</p>
<ul>
<li>All the stats that come with the original DPAT tool</li>
<li>Accounts with passwords that never expire cracked</li>
<li>Kerberoastable users cracked</li>
<li>High value domain group members cracked</li>
<li>Accounts with paths to unconstrained delegation objects cracked</li>
<li>and much much more…</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post3/dpat-htmloutput.png" alt="HTML Output" /></p>
<p>Shoutout to <a href="https://twitter.com/blurbdust">@blurbdust</a>, the DPAT module was his idea and we worked together for quite a while to put it together. This release also includes a full port of the Python tool’s functionality to Windows.</p>
<h2 id="full-post">Full Post</h2>
<p>While using the original DPAT tool, the thought was: “Why do I need to extract domain groups again? I already have all the information within BloodHound.” It took a bit of work to figure out how to correlate NTDS users to BloodHound users since they’re in different formats, but at the end of the day it was possible by matching the RID & username to the BH data. This made it easy to not only look into cracked group members, but also utilize the wealth of information already ingested by BloodHound to find more trends and significant patterns.</p>
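<p>In code, that RID-and-username correlation can be sketched roughly like this (the data shapes are illustrative; Max’s internals differ):</p>

```python
def parse_ntds_line(line):
    """Parse a secretsdump-style NTDS line: DOMAIN\\user:rid:lm:nt:::"""
    account, rid, _lm, nt = line.strip().split(":")[:4]
    return account.split("\\")[-1].upper(), int(rid), nt

def match_to_bloodhound(ntds_lines, bh_users):
    """Correlate NTDS entries to BloodHound users.

    `bh_users` maps a BloodHound name (USER@DOMAIN.LOCAL) to its object
    SID; the RID is the SID's final component. A match requires both
    the username and the RID to line up.
    """
    matched = {}
    for line in ntds_lines:
        user, rid, nt = parse_ntds_line(line)
        for name, sid in bh_users.items():
            if name.split("@")[0] == user and int(sid.rsplit("-", 1)[1]) == rid:
                matched[name] = nt
    return matched
```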
<p>So far, this DPAT module looks for:</p>
<ul>
<li>Cracked password percentages</li>
<li>Password length, reuse & complexity stats</li>
<li>Specific high-value group members cracked</li>
<li>Group specific crack rates</li>
<li>Kerberoastable & AS-REP roastable users cracked</li>
<li>Inactive accounts cracked</li>
<li>Accounts with passwords set to never expire & with passwords set over 1yr ago cracked</li>
<li>Accounts with paths to HVTs & unconstrained delegation systems cracked</li>
<li>Accounts with local administrator or other control privileges cracked</li>
</ul>
<p>Then all affected users are filtered by whether or not they are enabled. I will note that this currently doesn’t include additions for the Azure & AzureHound edges; it’s tailored for typical AD environments. PRs welcome for Azure improvements.</p>
<h3 id="general-usage">General Usage</h3>
<p>Similar to the original DPAT tool, it requires that you have an extracted NTDS.dit file that has been parsed with Impacket Suite’s secretsdump tool. I won’t go into detail on how to extract and parse, see the <a href="https://github.com/knavesec/Max/blob/dpat/wiki/dpat.md">Readme</a> file for that.</p>
<p>I’ve tried to keep the CLI similar to the original DPAT tool, so at the end of the day it feels familiar. I took the time to port everything to Windows as well, to allow any Windows-based sysadmins the same pleasure. The only two required inputs are an NTDS file (parsed by secretsdump) and a potfile (both Hashcat and JtR are supported). Additionally, since this handles all passwords & hashes for an organization, we’ve provided a “sanitize” option to obfuscate credentials, identical to the original DPAT. For large environments, there is an option to increase the thread count for the upload process.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post3/dpat-generaluse.png" alt="General Use" /></p>
<p>The process maps the BloodHound database users to the NTDS users, then uploads their NT/LM hashes and passwords into the database. When performing password analytics, the script simply queries for that information. At the end, all uploaded data is sanitized from the database. Sometimes, however, keeping hashes and passwords tied to the AD users in BloodHound can benefit the pentest workflow or further analysis. A Store option has been added that writes all the information to the database but won’t clear it at the end. A separate Clear flag can be used to delete all traces independently. If data has already been uploaded, you can use the NoParse flag to do password analytics and skip the parsing/upload process.</p>
<p>One benefit of storing the data within BloodHound is the search functionality of uploaded passwords. You can search for the password of an input user, or match any user who has a certain password.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post3/dpat-storeclear.png" alt="Store Clear" /></p>
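<p>With the data stored, those password searches are just simple Cypher queries against the Neo4j database. A hedged sketch (the <code class="language-plaintext highlighter-rouge">password</code> property name is an assumption for the example; check the module for the real one):</p>

```python
def password_of_user(username):
    """Cypher to fetch the stored password for one user.

    The `password` property name is assumed for this example.
    """
    return 'MATCH (u:User {name: "%s"}) RETURN u.password' % username.upper()

def users_with_password(password):
    """Cypher to find every user sharing a given cracked password."""
    return 'MATCH (u:User) WHERE u.password = "%s" RETURN u.name' % password
```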
<p>On some bigger AD environments, pathfinding queries can take an excessively long time. A “less” flag has been included to remove the time-intensive queries. It omits only the following:</p>
<ul>
<li>Group Statistics</li>
<li>Accounts with paths to HVTs & Unconstrained delegation objects</li>
<li>Accounts with Local Admin privs</li>
<li>Accounts with other controlling privs</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post3/dpat-less.png" alt="Less" /></p>
<h3 id="output">Output</h3>
<p>There are three primary output methods: HTML, CSV and ASCII art.</p>
<p>The best output method, and the purpose of this tool, is the HTML report. The design mirrors the original DPAT tool’s table output, simply with the addition of extra information and statistics. It functions pretty simply:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python3 max.py dpat <other args> --output outputdirectory --html
</code></pre></div></div>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post3/dpat-htmloutput.png" alt="HTML Output" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post3/dpat-htmlhashes.png" alt="HTML Hashes" /></p>
<p>An additional method of output is more geared towards getting raw lists of users in the output in CSV format.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python3 max.py dpat <other args> --output outputfilename --csv
</code></pre></div></div>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post3/dpat-csvoutput.png" alt="CSV Output" /></p>
<p>The last output method: omitting the <code class="language-plaintext highlighter-rouge">-o/--output</code> flag and output options defaults to an ASCII art output. It’s splendid, courtesy of @blurbdust yet again.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post3/dpat-asciioutput1.png" alt="Ascii Output1" /></p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post3/dpat-asciioutput2.png" alt="Ascii Output2" /></p>
<p>I’m hoping this can help people provide some insight into vulnerable users and groups within the environment, user passwords tend to be a weak link in the chain for many organizations. Always looking for improvements.</p>
<p>- <a href="https://twitter.com/knavesec">@knavesec</a></p>whynotsecurityTLDRMax Updates and Primitives2020-08-25T10:10:43-05:002020-08-25T10:10:43-05:00https://whynotsecurity.com/blog/max2<h2 id="tldr">TLDR</h2>
<p>Github: <a href="https://github.com/knavesec/Max">github.com/knavesec/Max</a></p>
<p>In a previous post, Max was released to aid in BloodHound operations in a bash-based pentesting cycle. The idea was to combine the Neo4j database with standard output and bash tools to make data extraction and manipulation during a pentest smooth and painless. See the previous post here: <a href="https://whynotsecurity.com/blog/max/">post</a>.</p>
<p>Now I’ve added a few new sections & features:</p>
<ul>
<li>
<p><code class="language-plaintext highlighter-rouge">add-spns</code> new function: A new potential attack primitive, creates a HasSPNConfigured relationship between objects. This is based on cleartext credentials being stored in LSA secrets for running service accounts, with Service Principal Names giving a good indication of where to find them, allowing a new pivot path. Note this is not guaranteed, merely a good indicator.</p>
</li>
<li>
<p><code class="language-plaintext highlighter-rouge">get-info</code> new additions: Functions to find DA sessions, extract specific group members, extract the groups of owned objects (for grepping), return all computers without LAPS, return all users with PasswordNotRequired set, get all computers with a session for a specific user</p>
</li>
<li>
<p><code class="language-plaintext highlighter-rouge">add-spw</code> new function: adds a SharesPasswordWith relationship between objects, helpful for mapping shared local administrator passwords for modeling, etc.</p>
</li>
<li>
<p><code class="language-plaintext highlighter-rouge">del-edge</code> new function: delete an unused or “bad” edge, which helps when there are things that you’re “not allowed” to do, like change a service account password, etc.</p>
</li>
<li>
<p><code class="language-plaintext highlighter-rouge">pet-max</code> new function: not cowsay, but dogsay. No real use, just for fun; it’s national dog day after all!</p>
</li>
</ul>
<h2 id="full-post">Full Post</h2>
<p>A month or two ago, I released Max (<a href="https://whynotsecurity.com/blog/max/">post</a>). I thought it was a great little tool for BloodHound, but like any good tool, there’s always room for improvement. Based off some comments & suggestions from co-workers, plus some prior research of my own, I’ve added a number of other functions and options to the tool.</p>
<p>One of the big things I wanted to highlight is the introduction of a new possible attack primitive; see the following “add-spns” section for full details.</p>
<p>As always, if you have any features or functions that you’d like added, feel free to reach out @knavesec on Twitter & the BloodHoundHQ slack channel.</p>
<h2 id="add-spns--a-new-primitive">add-spns & a new primitive</h2>
<p>This function will create a new relationship <code class="language-plaintext highlighter-rouge">HasSPNConfigured</code> pointing from computer to user indicating that there is a possible method of compromise if you have access to the specific computer.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/spn-rel.png" alt="relationship" /></p>
<p>This function is also an introduction of a new attack primitive that I’ve been looking into. The concept is that a user account running a service on a machine stores its cleartext password within LSA Secrets, so if you have admin rights on that system you can secretsdump the machine and extract the credentials. Service Principal Names (SPNs) are good indicators that the user is running a service on that specific machine, so they’re also a good indicator that the credentials are stored in the registry. I will note that this is an INDICATOR and is not 100% guaranteed, though in my experience on client engagements the correlation holds roughly two-thirds of the time. It just varies.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/spn-secrets.png" alt="secrets" /></p>
<p>There are 3 ways to upload this information:</p>
<ul>
<li>Upload the output of Impacket’s GetUserSPNs. It will iterate through each of the configured SPNs and create a relationship for each entry if possible.</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/spns-i.png" alt="impacket" /></p>
<ul>
<li>Use the information already stored within BloodHound, assuming you’ve ingested information with a collection method of <code class="language-plaintext highlighter-rouge">All</code> or <code class="language-plaintext highlighter-rouge">ObjectProps</code> to collect SPNs. This pulls the <code class="language-plaintext highlighter-rouge">serviceprincipalnames</code> property from users and assigns relationships based on them.</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/spns-b.png" alt="bloodhound" /></p>
<ul>
<li>Import a file with object pairs of <code class="language-plaintext highlighter-rouge">Computer, User</code>, which will simply create the relationships manually specified.</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/spns-f.png" alt="manual" /></p>
<p>As you can see in the screenshots, sometimes the relationships can’t be created. Typically this is because a computer within the SPN doesn’t exist within the BloodHound data, OR the SPN was in a non-standard format and wasn’t parsed properly (the program should warn you if that happens).</p>
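<p>The SPN parsing itself is the fragile part. A minimal sketch of pulling a computer name out of a standard host-based SPN (simplified relative to what the tool actually handles):</p>

```python
def spn_to_computer(spn):
    """Extract the host from an SPN like MSSQLSvc/sql01.corp.local:1433.

    Returns an upper-cased FQDN (BloodHound computer names are
    upper-case FQDNs), or None for short names and non-standard SPNs,
    i.e. the cases where no relationship can be created.
    """
    parts = spn.split("/")
    if len(parts) < 2:
        return None
    host = parts[1].split(":")[0]
    if "." not in host:  # short name or non-standard SPN format
        return None
    return host.upper()
```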
<h2 id="get-info">get-info</h2>
<p>A few extra features have been added to this, at this point the <code class="language-plaintext highlighter-rouge">get-info</code> function is just becoming a hotkey for queries I use frequently or for general analysis (happy to add others on suggestion/adding your own is pretty simple). All of the features listed below are new to the project.</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">group-members</code> to pull out all members of a specified group, typically used for targeting and grepping</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/group-mems.png" alt="group-members feature" /></p>
<ul>
<li><code class="language-plaintext highlighter-rouge">dasessions</code> to see where any Domain Administrator sessions are located, in the format <code class="language-plaintext highlighter-rouge">DA username - computer with session</code></li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/dasessions.png" alt="dasessions feature" /></p>
<ul>
<li><code class="language-plaintext highlighter-rouge">sessions</code> will retrieve all the computers a specified user has a session on, for targeting a specific user’s sessions. I use this when targeting specific HVTs like PCI or SCADA accounts to try and extract their cleartext/hashed password from memory.</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/sessions.png" alt="sessions" /></p>
<ul>
<li><code class="language-plaintext highlighter-rouge">owned-groups</code> pull the groups for each owned user, primarily to be used for grepping and analysis</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/owned-groups.png" alt="owned-groups feature" /></p>
<ul>
<li><code class="language-plaintext highlighter-rouge">sidhist</code> returns SID history information for all objects in the database, returned in a format of <code class="language-plaintext highlighter-rouge">username - SID - foreign domain - foreign SID object name</code>. If you have SID history information stored in the database, this will extract the important information. That being said the <code class="language-plaintext highlighter-rouge">foreign domain</code> and <code class="language-plaintext highlighter-rouge">foreign SID object name</code> rely on the Domain information (Domain Trusts) and actual foreign domain BloodHound data to be imported into the database. For example, if you’ve run the ingestor on one domain, you can query for domain trusts which satisfies the <code class="language-plaintext highlighter-rouge">foreign domain</code> objects. The <code class="language-plaintext highlighter-rouge">foreign SID object name</code> does a lookup by SID, therefore if you do not actually have the object with the respective SID in the database then it will not register. Note the first entry in the screenshot below corresponds to a foreign group, but there is no information for the second remote SID and RID.</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/sidhist.png" alt="sidhist feature" /></p>
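<p>The SID lookups behind this are simple prefix and exact matches, which is why missing trust or foreign-domain data just comes back empty. A rough sketch (data shapes invented for the example):</p>

```python
def split_sid(sid):
    """Split an object SID into its domain portion and RID."""
    domain_sid, rid = sid.rsplit("-", 1)
    return domain_sid, int(rid)

def resolve_foreign(sid, domains, objects):
    """Resolve a historical SID to (foreign domain, foreign object name).

    `domains` maps domain SIDs to domain names (from trust data) and
    `objects` maps full SIDs to object names. Either lookup can come
    back None when the corresponding data was never ingested, matching
    the missing-entry case in the screenshot above.
    """
    domain_sid, _rid = split_sid(sid)
    return domains.get(domain_sid), objects.get(sid)
```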
<ul>
<li><code class="language-plaintext highlighter-rouge">unsupported</code> returns a list of computers running unsupported operating systems in the format <code class="language-plaintext highlighter-rouge">computer - OS</code>, typically I use this as a direct output for the client to make a note of any outdated systems there are on the network.</li>
</ul>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/unsupport.png" alt="unsupported feature" /></p>
<ul>
<li><code class="language-plaintext highlighter-rouge">nolaps</code> returns a list of computer objects not configured with Microsoft LAPS</li>
<li><code class="language-plaintext highlighter-rouge">passnotreq</code> returns a list of all users with the PASSWORD_NOT_REQ flag set, a common misconfiguration</li>
</ul>
<p>As said before, I’m always open to adding functions upon request.</p>
<h2 id="add-spw">add-spw</h2>
<p>This was one of the original functionalities of porterhau5’s Bloodhound-Owned tool, so I thought I would include it as well. It’ll take in a list of objects and create a SharesPasswordWith relationship between each object. This is primarily used for repeated local administrators, but in theory could also be used for domain users. Since you have to know in advance whose passwords are shared by whom, it’s more useful after the fact in determining alternate paths to get places. I personally just mark all the objects as owned when I have repeated passwords (see the <code class="language-plaintext highlighter-rouge">mark-owned</code> function), but I know some people who prefer the relationship route, so I’ve included it for completeness.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/spw-create.png" alt="spw create" /></p>
<p>After completion, you’re left with an entanglement of relationships that vaguely resembles a flower or spider web.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/spw-rel.png" alt="flower power" /></p>
<p>Flower power.</p>
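<p>Under the hood, linking every object to every other is just a pairwise walk over the list. A sketch (the relationship name is the real one; the query shape is illustrative):</p>

```python
from itertools import combinations

def spw_statements(objects):
    """One Cypher statement per pair, creating SharesPasswordWith in
    both directions between every pair of objects in the list."""
    stmts = []
    for a, b in combinations(objects, 2):
        stmts.append(
            'MATCH (x {name: "%s"}), (y {name: "%s"}) '
            "MERGE (x)-[:SharesPasswordWith]->(y) "
            "MERGE (y)-[:SharesPasswordWith]->(x)" % (a, b)
        )
    return stmts
```

<p>Three objects yield three pairs, ten objects forty-five, hence the flower.</p>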
<h2 id="del-edge">del-edge</h2>
<p>If you happen to have a relationship type that you’d like to remove from the database (like a certain flower-shaped mess), you can delete all edges of that type. For example, oftentimes I don’t want to change an account’s password on a real engagement, so this allows you to simply remove ForcePasswordChange relationships. Filtering through the GUI is handy, but sometimes deletion is necessary.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/del.png" alt="delete" /></p>
<h2 id="pet-max">pet-max</h2>
<p>Arguably the most important contribution to this project: <em>dogsay</em>. He says various predetermined phrases and spreads happiness.</p>
<p><img src="https://raw.githubusercontent.com/whynotsecurity/whynotsecurity.github.io/master/assests/images/max-screenshots/post2/pet-max.png" alt="pet-max feature" /></p>
<p>Thanks to everyone who suggested new features and helped with testing. Dedicated to my dog Arlo, ‘tis national dog day!</p>
<p>- <a href="https://twitter.com/knavesec">@knavesec</a></p>whynotsecurityTLDR