Before starting a penetration test on company resources (like web services or desktop/mobile apps), it’s usually important to conduct some preliminary exploration. The customer may intentionally or unintentionally omit certain endpoints, nested domains, application functionalities, or request parameters that, if discovered, could still fall within the scope of the test.
With this in mind, properly investigating the company resources beforehand becomes even more important because this will directly influence the number of vulnerabilities you can catch and your bounty. If you find endpoints or parameters the other pentesters are not aware of, you will have the highest chance of discovering bugs and vulnerabilities that have not been reported yet.
Understanding how a business works internally helps you spot hidden API parameters, endpoints, and app features. So, what do you need to know to improve your pentest coverage?
A company might provide a website, apps for Android and iOS, and a public API for developers. If you assume all of them share the same API endpoints when communicating with the backend, you are likely wrong. Some features may be available only on the website, and others only in the apps. A business might consider the website less popular than the app; as a result, the website might not share all the functionalities of the app. At the same time, the website might use endpoints from earlier versions that the apps no longer call but that are still active and in scope. It is crucial to investigate each client the company provides.
Moreover, the features might depend on the web browser version or the operating system type. It makes sense to check the company resources in different web browsers (or at least by spoofing the User-Agent header value with a few popular and recent browser versions). Apps for different smartphones may also provide a slightly different set of features.
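One cheap way to spot browser-dependent behavior is to fetch the same page under several spoofed User-Agent values and compare response fingerprints. The sketch below is a minimal illustration; the User-Agent strings are just a few common examples, and the actual HTTP fetching is left to a callable you supply:

```python
import hashlib
from typing import Callable, Dict

# A few common User-Agent strings (illustrative values, not exhaustive).
USER_AGENTS = {
    "chrome": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
              "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "firefox": "Mozilla/5.0 (X11; Linux x86_64; rv:126.0) "
               "Gecko/20100101 Firefox/126.0",
    "safari": "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_4) AppleWebKit/605.1.15 "
              "(KHTML, like Gecko) Version/17.4 Safari/605.1.15",
}

def fingerprint_by_user_agent(fetch: Callable[[str], bytes],
                              user_agents: Dict[str, str]) -> Dict[str, str]:
    """Fetch the same page once per User-Agent and hash each response body.

    `fetch` takes a User-Agent string and returns the raw body. Differing
    hashes suggest the server varies content by browser, which is worth
    a closer manual look.
    """
    return {name: hashlib.sha256(fetch(ua)).hexdigest()
            for name, ua in user_agents.items()}
```

With the standard library, `fetch` could be as simple as `lambda ua: urllib.request.urlopen(urllib.request.Request(url, headers={"User-Agent": ua})).read()`; dynamic pages will need normalization (stripping timestamps, CSRF tokens, etc.) before hashing, or the comparison will always report differences.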
A company might support multiple apps that cover different sectors of its business. For example, Google offers Google Ads and Google Analytics apps. They implement individual features and use different API endpoints. Separate apps from the same company might be owned by completely unrelated teams or departments (or even by outsourced developers), which increases the probability of vulnerabilities and logical errors. For example, one group of software engineers develops a search app while a separate team works on the ads app. Ads and search systems interact with each other (ads show up in search). If the boundary between these surfaces is not carefully designed and communication between the teams is insufficient, mistakes become harder to avoid.
During the pentest, you might need to create a new user account to access all website or app features. You are likely missing something if you register only one or two accounts.
When companies develop software, they usually do not launch new features to all users at once. First, a business comes up with an idea. For example, a company decides to optimize one of its backend services, deprecate a legacy service and migrate to a new one (with no vulnerabilities this time, for sure), or build some new functionality that has not yet been announced.
All significant changes first go through experiments. For example, only 10% of all users see the new app version, or a new website feature is accessible to only 5% of people. The company then gathers metrics and assesses the users' attitudes toward the change. Do they spend more time on the website with the change? Do they purchase more items when the new feature is enabled? Is the change too expensive for the infrastructure (does it use too many extra CPU cycles or too much memory)? Does it slow down the app considerably? Metrics from the test group are compared to metrics from the control group, which does not see the change. Once the management collects enough signals, they decide to either roll out the change to all users or cancel the experiment and abandon the change (or improve the feature and re-run the trial). Big companies conduct hundreds of experiments simultaneously, and each experiment can last from a few days up to a year.
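Experiment assignment is typically deterministic: hashing the user and experiment identifiers gives each account a stable pseudo-random position, so the same account keeps seeing (or not seeing) the same variant. The sketch below illustrates this common pattern; it is not any particular company's implementation:

```python
import hashlib

def in_experiment(user_id: str, experiment: str, rollout_pct: float) -> bool:
    """Deterministically assign a user to an experiment bucket.

    Hashing (experiment, user_id) gives each user a stable pseudo-random
    position in [0, 1); users whose position falls below the roll-out
    fraction see the change. Illustrative only -- real systems add
    stratification, exclusion layers, and per-experiment salts.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < rollout_pct / 100.0
```

Note the practical consequence: re-logging the same account never changes its bucket, which is why registering more accounts (rather than retrying with one) raises your odds of landing in an experiment.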
Now suppose the company is satisfied with the metric movements and decides to launch the new feature for all users. In some cases, if the change is complex, the company will create a so-called holdout experiment: usually a small group of users (1%-2%) who keep seeing the old website or app even though everybody else is already using the new version. The company may conduct holdout experiments for verification purposes. The business might want to ensure that the change is truly valuable and earns extra money at scale. Holdout experiments can continue for many months and preserve the baseline metrics the company had at some point (such as at the beginning of the year). Data scientists use these metrics to compute the aggregated impact of all newly launched changes.
A fictional case study: video platform marketing specialists decide to add a new "Like" button under each video. They hypothesize that such a button will boost user engagement. Software engineers implement this feature and launch it for 3% of all registered users.
After a couple of weeks, the company has collected enough statistically significant data to see that users in the test group spend 2% more time on the website and that ad profit has increased by 1%. Moreover, 3% more users in the test group purchased the premium subscription. On the other hand, the company will need 300 more server machines to compute and store the data on likes. Data scientists weigh the pros and cons and decide to launch the feature for 100% of users.
You have likely already guessed that you will barely manage to join enough experiment test groups if you only have a few user accounts. Moreover, if you are unlucky, your accounts may end up in some long-term holdouts, and you will not see the features already available to the general public (and to the pentesters who were luckier than you). This means you will miss a bunch of endpoints and query parameters. With at least ten accounts, you increase the probability of getting into various company experiments. Some will be obvious (e.g., you will see UI changes when you log in as one of your accounts). Others may be less apparent: for example, your requests may flow to a different backend implementation the company is testing, and the only way to catch such a change might be to measure the latencies of your requests. I am not aware of any software that compares the UI or website code across different user accounts, but you can at least run your endpoint-collection tooling multiple times using the cookies of several users.
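Once you have collected endpoint sets under several accounts, a simple set comparison surfaces the experiment-only ones. A minimal sketch, assuming you feed it the per-account endpoint sets produced by your own crawls:

```python
from typing import Dict, Set

def experiment_only_endpoints(
        endpoints_by_account: Dict[str, Set[str]]) -> Dict[str, Set[str]]:
    """Compare API endpoints collected under different accounts.

    Returns, per account, the endpoints that account saw but at least one
    other account did not -- a strong hint of an experiment or gradual
    roll-out worth investigating. Accounts that saw nothing unusual are
    omitted from the result.
    """
    common = set.intersection(*endpoints_by_account.values())
    return {account: endpoints - common
            for account, endpoints in endpoints_by_account.items()
            if endpoints - common}
```

Run your crawler once per set of session cookies, collect each run's endpoints into a set, and pass the whole mapping in; any non-empty result is a lead.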
If you discover an experiment that adds new endpoints or request parameters, you should also check how the service behaves when you access those endpoints under users who are not in that experiment test group.
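A systematic way to run that check is to build a small access matrix: every discovered endpoint probed under every account, in and out of the test group. The probe callable below is hypothetical; plug in your own HTTP client authenticated with each account's cookies:

```python
from typing import Callable, Dict, Iterable

def access_matrix(probe: Callable[[str, str], int],
                  accounts: Iterable[str],
                  endpoints: Iterable[str]) -> Dict[str, Dict[str, int]]:
    """Record the HTTP status code each account receives for each endpoint.

    `probe(account, endpoint)` performs the actual request (a hypothetical
    callable -- supply your own client with that account's session). A 200
    for an account outside the experiment group, where a 403/404 was
    expected, is exactly the kind of discrepancy worth reporting.
    """
    return {endpoint: {account: probe(account, endpoint)
                       for account in accounts}
            for endpoint in endpoints}
```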
Apart from experiments, companies sometimes roll out the new feature gradually, for example, to 1% of users first, then to 10%, then to 50%, and finally to all users. It helps resolve any potential issues quickly. If something goes wrong, it is easier to turn off the change if it is available to 1% of users only. Gradual roll-out also reduces losses and risks. Investors may turn a blind eye to the news that says: "Twitter is down for about 1% of the user base", but will react negatively to the headline stating: "Twitter is down for everyone, and the company is urgently trying to fix the problem." Gradual roll-outs are another reason to register additional accounts during the pentest.
Besides simply creating many accounts, it is advisable to register them in different regions. This becomes relevant when the company you are testing is big enough and does business in multiple countries.
A business ready to enable a new feature for all users (after all experiments have finished) may do so first in a specific region or country. For example, some bleeding-edge AI features in Google or Meta products were available first in the USA; in other countries, this functionality was enabled a few months later. This could have been related to laxer user data protection laws in America compared with the stricter GDPR in the EU.
When you register multiple accounts in different regions (e.g., using VPN or proxies), you boost your chances of seeing the freshest features and starting to test them while other pentesters are unaware.
Let's assume that you completed the tests of the company resources you were aware of, created the bug report, and shared it with the company. If that company's bug bounty program remains active afterwards, it makes sense to come back to it later. You might see new features that were launched while you were away. Engineers could also introduce new bugs into the existing functionality, which no pentester would re-test.
You will gain an advantage over other testers who think this specific bug bounty is over and there is no reason to repeat it. Repeating the penetration tests is called Continuous Penetration Testing (CPT). You can automate your tests to reduce the amount of manual labor.
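A simple building block for such automation is a snapshot diff: each scheduled run compares the freshly collected endpoint set against the previous run's and reports only what changed. A minimal sketch (the snapshot path and endpoint sets come from your own tooling):

```python
import json
import pathlib
from typing import Dict, List, Set

def diff_endpoint_snapshot(snapshot_path: pathlib.Path,
                           current: Set[str]) -> Dict[str, List[str]]:
    """Compare freshly collected endpoints against the previous run.

    Loads the last snapshot from disk (if any), stores the current set in
    its place, and returns what changed: new endpoints to test, and
    removed ones whose replacements may hide fresh bugs.
    """
    old: Set[str] = set()
    if snapshot_path.exists():
        old = set(json.loads(snapshot_path.read_text()))
    snapshot_path.write_text(json.dumps(sorted(current)))
    return {"added": sorted(current - old), "removed": sorted(old - current)}
```

Scheduled via cron or CI, this turns a one-off engagement into a standing watch: an empty diff costs you nothing, and a non-empty one is an untested attack surface.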
You probably do not pay much attention to the agreements you are prompted to accept after your first log-in. You might also ignore optional cookie pop-ups. But you might miss some bugs if you blindly accept or reject different conditions.
It is reasonable to test the functions potentially impacted by the agreements in three states:
using an account with accepted agreements,
using an account with rejected agreements,
and with an account where you have neither accepted nor refused those agreements yet (when such a state is allowed).
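The three states above lend themselves to a simple check: probe the consent-gated feature under each state and flag any state where it is reachable without acceptance. The probe below is hypothetical (log in with an account in the given state and try to use the feature); the state names mirror the list above:

```python
from enum import Enum
from typing import Callable, List

class Consent(Enum):
    ACCEPTED = "accepted"
    REJECTED = "rejected"
    UNDECIDED = "undecided"   # agreement neither accepted nor refused yet

def consent_leaks(feature_reachable: Callable[[Consent], bool]) -> List[Consent]:
    """Return consent states where a consent-gated feature is reachable
    even though the user did not accept the agreement.

    `feature_reachable` is a hypothetical probe: with an account in the
    given state, attempt the gated action (or look for your data on the
    related resource) and report whether it succeeded.
    """
    return [state for state in Consent
            if state is not Consent.ACCEPTED and feature_reachable(state)]
```

Any non-empty result is a finding: the feature honors consent in one code path but not another.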
Let's assume that the website requests permission to use personal data on some other company resource, or for a purpose that is optional and not listed in the mandatory privacy policy agreement. Before you allow this permission, verify that your data is absent from that unrelated company resource and that you cannot perform any actions there. If you see a discrepancy, this might be a serious legal risk for the company, especially if it operates in jurisdictions with strict data protection regulations. Apart from the legal angle, the features covered by optional agreements might have software bugs that you could exploit (for example, if one part of the implementation takes consent into account while another part does not).
If the website or app logic allows you to revoke your consent, do that and verify that all corresponding functions are deactivated and that your data has been removed from relevant optional resources.
An optional Gmail agreement is a good example of this: you can revoke your consent and then agree again if you want to.
Besides optional agreements, you should pay attention to the account settings. The company may allow users to participate in beta testing of new features if they explicitly accept such an option (or even choose the specific features they decide to test).
To sum up, to increase your pentest coverage, you should remember the following points:
investigate every client the company provides (website, mobile apps, public API) and check for browser- and OS-dependent features,
register many accounts, ideally in different regions, to land in more experiments and gradual roll-outs,
compare endpoints and behavior across accounts, including accounts outside experiment test groups,
return to the target periodically, since new features and new bugs appear over time,
test consent-gated features in the accepted, rejected, and undecided states,
and check the account settings for beta-testing opt-ins.
Good luck with your pentests, and I hope this guide will help you earn some extra bounties!