How to Start a Bug Bounty Program in India: A Step-by-Step Guide for CISOs
- Ridhi Sharma
Starting a bug bounty program is one of the highest-ROI security investments a CISO can make, but only if it is done right. Done wrong, it becomes a triage nightmare, a researcher relations disaster, and a budget black hole.
The difference between programs that succeed and those that quietly die within a year almost always comes down to preparation. The organisations that thrive in bug bounty have invested time in their scope, their internal processes, and their relationship with the researcher community before the first report ever lands. Those that struggle skipped those steps.
This guide is a practical, sequenced playbook on how to start a bug bounty program in India, built specifically for Indian organisations. It assumes you are a CISO or security leader who understands the value of crowdsourced security testing and wants a clear, actionable path from 'we should do this' to 'we have a live, producing program.' No vendor fluff, just the steps, the decisions, and the things that trip people up.
What to expect from a well-run program: Organisations that run structured bug bounty programs on the Com Olho platform find an average of 3–8 valid vulnerabilities per month in their first quarter, including findings that traditional penetration tests and automated scanners consistently miss. Payment flow vulnerabilities and IDOR issues are the most common high-severity discoveries in Indian programs.

Before you start: the honest prerequisites
Bug bounty programs are not magic. They amplify the security maturity you already have. If your fundamentals are weak, a program will expose that publicly and at pace. Before you proceed, be honest about where you stand on each of the following.
□ You know what you have: You have a reasonably complete inventory of your internet-facing assets — domains, subdomains, APIs, mobile applications, and cloud infrastructure. If you cannot list your attack surface, you cannot scope a program.
□ You have someone to own triage: At least one security engineer can dedicate 4–8 hours per week to reviewing incoming reports. This person needs the technical skills to validate findings and the seniority to escalate them. Triage is the single most common point of failure in new programs.
□ Engineering will patch what you find: You have an agreement, informal or formal, with your engineering leadership that confirmed critical and high vulnerabilities will be remediated within defined SLAs. A program that finds vulnerabilities but cannot fix them is a liability, not an asset.
□ Legal is ready to engage: Your legal team is aware you are planning this and is prepared to review the program policy. This does not need to be a six-month process — a good platform provides templates — but sign-off before launch is non-negotiable.
□ You have board or leadership visibility: Your CISO or equivalent has visibility into this initiative. Bug bounty programs occasionally produce findings that require board-level awareness, a critical vulnerability in a payment system, for instance. Having that escalation path established in advance prevents chaos.
□ You have a modest budget approved: You have at least ₹50,000–₹2,00,000 in approved researcher reward budget for your first program cycle. This is not a large number — it is less than the day-rate of a mid-senior penetration tester — but it needs to be approved and accessible before you go live.
Watch out: If you cannot tick at least four of these six boxes, pause before launching. A program launched without readiness will produce more problems than it solves. Treat the gaps above as a 60-day preparation checklist rather than as launch blockers.
How to Start a Bug Bounty Program in India
Phase 1: Define your scope (Weeks 1–2)
Your scope is the contract between you and every researcher who participates in your program. It defines what they can test, what they cannot touch, how they should behave, and what they will be rewarded for. A well-written scope is the single greatest predictor of program quality: better than your reward structure, better than your platform choice.
What to include in scope for bug bounty program
Start narrower than you think you need to. The temptation is to throw everything in — all your domains, all your apps, your entire cloud infrastructure. Resist it. A tight, well-defined scope for your first program will produce higher-quality, more actionable reports than a sprawling one. You can always expand.
| Asset type | Example | First program? | Notes |
|---|---|---|---|
| Primary web application | | Yes — include | Your main product; researchers know it best |
| Marketing website | | Optional | Low risk, useful for SEO. Exclude if static CMS |
| Mobile app (Android/iOS) | | Yes — include | High-value target; specify APK version in scope |
| Public API | | Yes — include | Often the highest-severity finding source |
| Admin panel | | No — exclude | Too risky for first program; add in cycle 2 |
| Customer subdomains | | No — exclude | Third-party data risk; requires separate legal review |
| Cloud infrastructure (AWS/GCP) | S3 buckets, etc. | No — exclude | Exclude unless you have a specific infra hardening focus |
| Third-party integrations | Razorpay, Twilio, etc. | No — exclude always | You do not own these; out of scope by definition |
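To make scope decisions unambiguous for your own tooling, they can be captured in a simple machine-readable form. A minimal sketch in Python, using hypothetical `example.com` asset names that you would replace with your own; this is illustrative, not any platform's API:

```python
# Sketch of a first-program scope definition. Asset names are placeholders.
IN_SCOPE = {
    "web": ["app.example.com"],              # primary web application
    "api": ["api.example.com"],              # public API: highest-severity source
    "mobile": ["com.example.app"],           # specify the APK version in the policy
}

OUT_OF_SCOPE = [
    "admin.example.com",                     # admin panel: defer to cycle 2
    "*.customers.example.com",               # customer subdomains: legal review first
    "third-party integrations",              # Razorpay, Twilio, etc.: not yours to test
]

def is_in_scope(asset: str) -> bool:
    """An asset is testable only if it appears in an in-scope list."""
    return any(asset in assets for assets in IN_SCOPE.values())
```

A check like this can gate automated intake so out-of-scope submissions are flagged before they reach triage.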
What to explicitly exclude from bug bounty program
An out-of-scope list is as important as your in-scope list. Be explicit. Researchers read scope documents carefully; vague exclusions lead to disputes, wasted effort, and frustration on both sides.
Denial of Service (DoS/DDoS): Explicitly prohibited. No exceptions. Any testing that degrades service availability is out of scope regardless of how it is framed.
Social engineering: Phishing employees, vishing, pretexting. These are people problems, not code problems, and they fall outside the security research framework.
Physical security: Tailgating, office access, hardware attacks. Not relevant to a web/app bug bounty program.
Automated scanning at scale: Prohibit running bulk automated scanners against your production environment. Researchers should test intelligently, not fire-and-forget tools.
Accessing other users' data: Researchers must demonstrate vulnerabilities using test accounts they control, not by accessing real customer data. Make this explicit.
Third-party services: Any service you use but do not control (payment processors, CDNs, email providers) is out of scope.
Vulnerability types to explicitly exclude from rewards
Not everything a scanner finds is worth paying for. Define upfront which finding types are out of scope for rewards to avoid disputes:
Missing HTTP security headers without demonstrated impact (CSP, HSTS, X-Frame-Options)
Self-XSS (requires victim to execute their own payload)
Clickjacking on pages without sensitive actions
Rate limiting issues without demonstrated account takeover or data exposure
TLS/SSL configuration issues on non-sensitive endpoints
Username enumeration via timing attacks (low-severity, accepted risk for most programs)
Open redirects that do not demonstrably lead to a higher-severity vulnerability
Theoretical vulnerabilities without a working proof-of-concept
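The exclusion list above can double as an automated pre-triage filter. A sketch, assuming hypothetical category labels that you would map to your own finding taxonomy:

```python
# Finding types excluded from rewards, per the list above.
# Category slugs are illustrative, not a standard taxonomy.
EXCLUDED_FROM_REWARDS = {
    "missing-security-headers",      # no demonstrated impact
    "self-xss",
    "clickjacking-non-sensitive",
    "rate-limiting-no-impact",
    "tls-config-non-sensitive",
    "username-enumeration-timing",
    "open-redirect-no-chain",        # does not lead to higher severity
    "theoretical-no-poc",            # no working proof-of-concept
}

def reward_eligible(finding_type: str) -> bool:
    """A finding is reward-eligible unless its type is explicitly excluded."""
    return finding_type not in EXCLUDED_FROM_REWARDS
```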
Pro tip: Write your scope document as if a smart, motivated researcher who has never heard of your company is reading it. They will spend 20 minutes reading it before deciding whether your program is worth their time. Clarity and specificity are the difference between attracting your first great finding in week one versus week eight.
Phase 2: Build your bug bounty program policy (Weeks 2–3)
Your program policy is a legal document as much as it is a researcher communication. It establishes the rules of engagement, grants the authorisation that makes testing legal under Indian law, and sets the expectations that both you and researchers will be held to. Treat it accordingly.
The seven elements every Indian program policy needs
1. Safe harbour declaration: This is the most legally critical element. It must explicitly state that your organisation authorises the researcher to perform security testing within the defined scope, that you will not initiate civil or criminal action against a researcher who follows the program rules, and that this authorisation is granted in good faith for the purpose of improving security. Under the IT Act 2000, testing without this authorisation is potentially illegal — even with good intent.
2. Disclosure timeline: Commit to a specific timeline: how long you need from report submission to acknowledgement, triage, and remediation before the researcher may disclose publicly. The industry standard, following Google Project Zero, is 90 days from initial report to permitted public disclosure. You may extend by mutual agreement for complex vulnerabilities.
3. Testing rules and prohibited actions: Be explicit about what researchers may not do, regardless of whether it falls within the technical scope. This protects you from creative interpretations of what 'testing' means.
4. Report submission requirements: Define what a valid report must contain. This dramatically reduces low-quality, incomplete submissions — which are the primary source of triage burden for new programs.
5. Reward structure: Your reward table should be part of the policy, not a separate document. Researchers need to see the financial terms before they decide to invest their time. Include minimum and maximum reward amounts per severity tier, and any multipliers for particularly impactful findings.
6. Confidentiality requirement: Researchers must agree not to disclose program details, including the existence of specific vulnerabilities, until the coordinated disclosure timeline has elapsed. This is particularly important for private programs where the program itself may be confidential.
7. Duplicate and out-of-scope handling: Define clearly how you will handle duplicate reports (the same vulnerability reported by multiple researchers) and out-of-scope submissions. Researchers invest significant time in their findings — clear, consistent handling of these cases is essential for maintaining goodwill.
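Element 4 is the easiest to enforce mechanically. A minimal completeness check, assuming a hypothetical set of required fields (a common baseline, not a platform mandate):

```python
# Required fields for a valid report submission (element 4 above).
# Field names are illustrative; align them with your intake form.
REQUIRED_FIELDS = ["title", "affected_asset", "steps_to_reproduce",
                   "impact", "proof_of_concept"]

def missing_fields(report: dict) -> list:
    """Return the names of required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]
```

Rejecting or bouncing submissions with missing fields at intake, before a human looks at them, is one of the cheapest ways to cut triage burden.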
Note: Com Olho provides India-specific program policy templates as part of the platform setup process. These templates have been designed with the IT Act 2000, CERT-In Directions, and DPDP Act in mind. We still recommend having your legal team review any final policy before publication, but the template significantly reduces the drafting burden.
Phase 3: Set your reward structure (Week 3)
Reward structures are where many Indian organisations make their first serious mistake: either underpaying relative to the difficulty of their scope (deterring top researchers) or paying uniformly high rewards that exhaust their budget on medium-severity findings. The goal is calibration: rewards proportional to impact and effort.
The four factors that should determine your reward levels
Sector sensitivity: Financial data, payment flows, and health records command higher rewards than marketing content or internal tooling. If a breach in the affected system would make national news, pay top-of-range.
Asset criticality: A critical finding in your core payment API is worth more than the same finding in a low-traffic blog subdomain. Consider building asset tiers into your reward table.
Exploitability: A vulnerability that can be exploited remotely, without authentication, with no user interaction, at scale, should pay more than one requiring complex pre-conditions. CVSS already encodes most of this — let it guide you.
Researcher market: If you want India's best researchers to prioritise your program, your reward rates need to be competitive with what they can earn elsewhere. Underpaying creates a race to the bottom — you attract volume seekers, not skilled researchers.
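As the exploitability factor notes, CVSS already encodes most of this. The standard CVSS v3.1 qualitative rating bands map base scores onto the severity tiers used in the reward table below:

```python
def cvss_to_tier(score: float) -> str:
    """Map a CVSS v3.1 base score to a qualitative severity tier,
    using the standard CVSS rating bands."""
    if score >= 9.0:
        return "Critical"   # 9.0 - 10.0
    if score >= 7.0:
        return "High"       # 7.0 - 8.9
    if score >= 4.0:
        return "Medium"     # 4.0 - 6.9
    if score > 0.0:
        return "Low"        # 0.1 - 3.9
    return "None"
```

Using the published bands as a starting point keeps severity assignments defensible; you can then adjust a tier up or down based on asset criticality.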
Reward table for Indian programs (2025 benchmarks)
| Severity | BFSI / Fintech | Healthtech | E-commerce | SaaS / Tech | Example finding types |
|---|---|---|---|---|---|
| Critical | ₹1L–₹2.5L | ₹75K–₹1.5L | ₹50K–₹1L | ₹30K–₹1L | Auth bypass, RCE, account takeover, payment manipulation, mass PII exposure |
| High | ₹30K–₹75K | ₹20K–₹50K | ₹15K–₹40K | ₹15K–₹35K | IDOR with data access, stored XSS on critical path, privilege escalation |
| Medium | ₹8K–₹25K | ₹6K–₹20K | ₹5K–₹15K | ₹5K–₹15K | Reflected XSS, CSRF on sensitive actions, limited access control bypass |
| Low | ₹2K–₹8K | ₹2K–₹6K | ₹2K–₹5K | ₹2K–₹5K | Minor info disclosure, best-practice gaps, self-XSS |

L = Lakh (₹1L = ₹1,00,000); K = Thousand. Ranges are indicative; adjust to your sector and program maturity.
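A reward table like the one above is easy to encode so that triage tooling can surface the applicable range automatically. A sketch covering two of the four sectors (values transcribed from the table; extend the dict for your own sectors and asset tiers):

```python
# Benchmark reward ranges in INR, keyed by (sector, severity).
# Two sectors shown for brevity; values are from the table above.
REWARD_RANGES = {
    ("BFSI", "Critical"): (100_000, 250_000),
    ("BFSI", "High"):     (30_000, 75_000),
    ("BFSI", "Medium"):   (8_000, 25_000),
    ("BFSI", "Low"):      (2_000, 8_000),
    ("SaaS", "Critical"): (30_000, 100_000),
    ("SaaS", "High"):     (15_000, 35_000),
    ("SaaS", "Medium"):   (5_000, 15_000),
    ("SaaS", "Low"):      (2_000, 5_000),
}

def reward_range(sector: str, severity: str) -> tuple:
    """(minimum, maximum) reward in INR for a confirmed finding."""
    return REWARD_RANGES[(sector, severity)]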
First bug bounty program budget planning
For a private program running for 90 days with a well-defined scope, a realistic first-cycle budget is:
| Budget scenario | Approved reward budget | Expected valid findings | Expected spend |
|---|---|---|---|
| Conservative | ₹1,00,000 | 5–10 findings | ₹40,000 – ₹80,000 (most findings will be medium/low) |
| Standard | ₹2,50,000 | 10–20 findings | ₹1,00,000 – ₹2,00,000 |
| Ambitious | ₹5,00,000 | 20–35 findings | ₹2,00,000 – ₹4,50,000 |
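The spend column is simple arithmetic: expected valid findings times an assumed average reward. A sketch; the ₹8,000 average is an assumption consistent with a medium/low-heavy first cycle, not a platform statistic:

```python
def expected_spend(n_findings: int, avg_reward_inr: int) -> int:
    """Back-of-envelope estimate: valid findings x average reward (INR)."""
    return n_findings * avg_reward_inr

# Conservative scenario: ~8 findings at an assumed Rs 8,000 average
# lands inside the Rs 40,000 - 80,000 band in the table above.
estimate = expected_spend(8, 8_000)
```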
Pro tip: In your first program cycle, you are almost certain to underspend your reward budget. This is normal: researchers need time to learn your scope, and private programs take weeks to reach full velocity. Do not over-index on the budget as a signal of program failure in the first 30 days.
Phase 4: Choose your bug bounty platform and launch (Weeks 3–4)
Your bug bounty platform choice determines the operational experience of your program both for your team and for researchers. This is not a trivial decision. The wrong platform creates friction at every stage: researcher acquisition, report management, triage workflow, payment processing, and compliance documentation.
What a bug bounty platform should do for you
Researcher vetting and onboarding: The platform should vet researchers before they access your program: verifying identity, reviewing track record, and ensuring they have agreed to the terms of engagement. You should not be doing this yourself.
Report submission and management: A structured submission workflow that enforces the report format you require — reducing the volume of incomplete, unactionable reports landing in your inbox.
Triage support: For teams with limited bandwidth, managed triage — where the platform's security analysts perform initial review and validation before escalating to your team — is transformational. It means your engineers only see pre-validated, high-confidence findings.
Escrow payments in INR: Researcher rewards should be held in escrow and released in Indian Rupees. USD payments via global platforms create FX costs and complexity for Indian researchers — a real deterrent to participation.
Audit trail and reporting: Every finding, triage decision, communication, and payment should be logged and exportable. CERT-In, RBI, and SEBI audits increasingly look for evidence of ongoing security testing — this log is your evidence pack.
Legal infrastructure: Program policy templates, safe harbour language, and researcher agreements that are appropriate for the Indian regulatory context.
Why India-first matters
Global platforms like HackerOne and Bugcrowd have established brands and large researcher pools, primarily in North America and Europe. For Indian organisations, this creates structural gaps: reward tables typically denominated in USD, support teams operating across time zone gaps, and researcher communities with less exposure to Indian app architectures, payment flows, and regulatory contexts.
The most common findings in Indian bug bounty programs (IDOR vulnerabilities in UPI integrations, authentication issues in Aadhaar-linked systems, API misconfigurations in NACH/e-Mandate flows) are findings that researchers with deep experience in Indian financial infrastructure are best positioned to discover. A researcher pool built on Indian platforms, tested against Indian companies, naturally concentrates this expertise.
Why Com Olho: Com Olho is built for this context, with an Indian researcher community of 500+ vetted security professionals, INR-denominated escrow payments, CERT-In-aligned policy templates, managed triage support, and a customer success team with deep experience in Indian BFSI, healthtech, and e-commerce security programs. Our programs are typically live within 2–3 weeks of kickoff.
The launch sequence
Once your scope, policy, reward structure, and platform are in place, the launch itself is a 3-stage process:
Stage 1: Soft launch. Invite 5–10 of the platform's most trusted, senior researchers to test your scope privately for 2 weeks before broader rollout. This 'bug bash' phase lets you validate your scope document, test your triage process under real conditions, and fix any obvious issues before a larger researcher pool sees them. Expect 2–5 findings in this stage — treat them as a rehearsal.
Stage 2: Private program. Expand to 20–50 invited researchers. This is your primary operating mode for the first 90 days. Monitor report volume, triage burden, and finding quality closely. Refine your scope exclusions based on what you see — particularly any finding types that are generating disputes or wasting triage time.
Stage 3: Public program (optional). After a successful private cycle, consider opening to the full researcher community. This dramatically increases coverage and finding volume — but requires a mature triage process. Most Indian organisations run private programs indefinitely, expanding the invited researcher pool gradually rather than going fully public.
Phase 5: Triage — the make-or-break phase
More programs fail at triage than at any other stage. It is unglamorous operational work: reviewing reports, reproducing vulnerabilities, communicating with researchers, assigning severity, escalating to engineering. And it is relentless once the program is live. Get this right and your program runs smoothly for years. Get it wrong and it collapses within months.
The triage SLA that keeps researchers engaged
| Stage | Target SLA | What happens if you miss it |
|---|---|---|
| Initial acknowledgement | 24 hours | Researcher assumes you are not managing the program seriously. Trust erodes immediately. |
| Initial triage (valid/invalid) | 5 business days | Researcher may submit the finding elsewhere or lose patience with the program. |
| Severity confirmation | 7 business days | Reward disputes become more likely if severity is contested after a long delay. |
| Reward payment (on confirmed findings) | 14 days | Delayed payment is the single most common researcher complaint. It directly reduces your program's reputation. |
| Remediation — Critical | 7–14 days | An unpatched critical vulnerability is a live risk. CERT-In may require reporting if it constitutes a cybersecurity incident. |
| Remediation — High | 30 days | Researchers may escalate to public disclosure if remediation stalls without communication. |
| Remediation — Medium/Low | 60–90 days | Acceptable, but communicate the timeline proactively. |
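Tracking these deadlines by hand does not scale past a handful of open reports. A sketch of an SLA tracker, with the simplifying assumption that business days are treated as calendar days (a real tracker should skip weekends and holidays):

```python
from datetime import date, timedelta

# SLA targets from the table above, in days. Business-day stages are
# approximated as calendar days here for simplicity.
SLA_DAYS = {
    "acknowledgement": 1,
    "initial_triage": 5,
    "severity_confirmation": 7,
    "reward_payment": 14,
}

def due_date(received: date, stage: str) -> date:
    """Deadline for a triage stage, counted from report receipt."""
    return received + timedelta(days=SLA_DAYS[stage])

def is_overdue(received: date, stage: str, today: date) -> bool:
    return today > due_date(received, stage)
```

Run a daily check over open reports and alert the triage owner on anything overdue; the acknowledgement SLA in particular should never slip silently.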
How to handle common triage scenarios
The duplicate report
Two researchers submit the same vulnerability within days of each other. Pay the first valid submission in full. Acknowledge the second researcher, explain it is a duplicate, and if their report was particularly well-written or added new detail, consider a goodwill payment of ₹1,000–₹3,000. Document your duplicate policy in the program rules before this happens — handling it on the fly creates inconsistency.
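The first-valid-submission rule is straightforward to apply consistently in code. A sketch, where `fingerprint` is a hypothetical deduplication key (e.g. asset plus vulnerability class plus affected endpoint), not a field any platform defines:

```python
def triage_duplicates(reports: list) -> list:
    """Apply first-valid-wins: the earliest submission per fingerprint
    is paid in full; later submissions of the same issue are duplicates
    (eligible at most for a goodwill payment)."""
    seen, decisions = set(), []
    for r in sorted(reports, key=lambda r: r["submitted_at"]):
        status = "duplicate" if r["fingerprint"] in seen else "pay-full"
        seen.add(r["fingerprint"])
        decisions.append((r["id"], status))
    return decisions
```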
The disputed severity rating
The researcher says it is Critical. Your team says it is High. This is one of the most common sources of friction in bug bounty programs. The best resolution process: explain your reasoning in detail, invite the researcher to provide additional evidence of impact if they believe you are wrong, and commit to reconsidering within 48 hours. If you are using a platform with managed triage, the platform's security analysts serve as a neutral third party.
The out-of-scope finding
A researcher submits a valid, high-severity vulnerability in an asset that is explicitly out of scope. The ethical and reputational answer is to thank the researcher, fix the vulnerability, and consider a goodwill payment even though it is technically outside your obligations. The alternative, rejecting valid security research because of a technicality, creates bad will in the researcher community and does your security posture no favours.
The CERT-In determination
A researcher submits evidence of a critical vulnerability that may have already been exploited: for example, compromised credentials or evidence of unauthorised access. Your triage process must include a step at which your team determines whether this constitutes a reportable cybersecurity incident under the CERT-In Directions (2022). For organisations in covered sectors, the six-hour reporting clock starts when you become aware of the incident, not when you confirm it. Err on the side of reporting.
Watch out: Never go silent on a researcher. If your triage is backed up, send a holding message: 'We have received your report and it is in our review queue. We will update you within [X] days.' Silence is interpreted as dismissal. A program that dismisses researchers loses its best ones within a cycle.
Phase 6: Remediate, reward, and iterate
The program does not end when you confirm a vulnerability; it ends when the vulnerability is fixed, the researcher is paid, and you have learned something that makes the next cycle better. This final phase is where the compounding value of bug bounty programs is built.
Remediation that researchers respect
Pay rewards before patches are deployed, not after. This is a significant cultural shift from traditional security operations: in bug bounty, the value is in finding and disclosing the vulnerability, not in waiting for the fix. Researchers who are paid promptly become advocates for your program. Those who wait months for payment stop submitting to you and tell others not to bother.
Communicate your remediation timeline to the researcher when you confirm the finding. If you hit a delay (an engineering sprint change, a complex dependency, a regulatory review), tell the researcher proactively. Radio silence during remediation is almost as damaging as silence during triage.
What to review at the end of each cycle
Finding quality: Were the majority of reports valid and actionable? If more than 30–40% of reports are being rejected as invalid or out-of-scope, your scope document needs clarification or your researcher pool needs refinement.
Finding distribution: Are findings concentrated in one asset or vulnerability class? This suggests either a specific area of weakness to prioritise in engineering, or a scope expansion opportunity.
Triage burden: How many hours did your team spend on triage? If it exceeded your capacity, either narrow the scope, add triage support, or increase your reward threshold to filter out low-severity submissions.
Researcher engagement: How many active researchers submitted reports? A high invitation count with low participation signals that your scope or rewards are not competitive. Survey your top researchers — their feedback is invaluable.
Time to remediation: Did you hit your remediation SLAs? If critical findings are taking longer than 14 days to patch, the bottleneck is in engineering prioritisation, not the security program itself.
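The finding-quality review above reduces to one number worth tracking every cycle. A sketch of the rejection-rate check, using the ~30–40% threshold discussed in the first bullet:

```python
def invalid_rate(total_reports: int, rejected: int) -> float:
    """Share of reports rejected as invalid or out of scope."""
    return rejected / total_reports if total_reports else 0.0

def scope_needs_work(total_reports: int, rejected: int,
                     threshold: float = 0.35) -> bool:
    """Flag a cycle where the rejection rate exceeds the ~30-40% band,
    suggesting the scope document or researcher pool needs refinement."""
    return invalid_rate(total_reports, rejected) > threshold
```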
Pro tip After your first program cycle, schedule a 60-minute retrospective with everyone involved — security, engineering, and legal. The three questions to answer: what did we find that we did not expect? What slowed us down? What would we do differently? The answers will make your second cycle dramatically more effective than your first. |
The CISO Bug Bounty Program launch checklist: 30-day program
Use this timeline to sequence your preparation. The phases above map to weeks — this gives you a day-by-day view of the critical path.
Days 1–5: Internal alignment. Confirm triage ownership (name the person). Get engineering leadership commitment on remediation SLAs. Brief legal team. Get budget approved. Schedule the platform kickoff call.
Days 6–10: Asset inventory. Run subdomain enumeration on all your domains. List all public APIs and mobile app versions. Identify what is explicitly out of scope. Document asset sensitivity tiers (payment API = critical, marketing site = low).
Days 11–15: Scope and policy drafting. Write your in-scope and out-of-scope asset lists. Draft your program policy using the platform template. Send to legal for review. Finalise reward table by severity tier and asset sensitivity.
Days 16–20: Platform setup. Complete platform onboarding. Load your scope document and policy. Configure reward tiers. Agree on initial researcher invite list (10–15 senior researchers for soft launch). Set up your triage queue and assign the triage owner.
Days 21–25: Soft launch. Go live with 10–15 invited researchers. Monitor report volume daily. Respond to every submission within 24 hours. Note any scope ambiguities or policy questions — fix them before the broader launch.
Days 26–30: Review and expand. Review soft launch findings. Fix any scope or policy issues. Expand to 30–50 researchers for full private launch. Schedule 90-day cycle review date. Communicate program update to leadership.
Frequently asked questions
How long does it take to launch a bug bounty program in India?
With preparation and a managed platform, a private program can be live in 2–4 weeks. The critical path is usually legal review of the program policy — this takes 5–10 business days if your team is responsive. Asset inventory and scope drafting can be done in parallel and typically takes 3–5 days. Platform setup and researcher onboarding takes 2–3 days. The soft launch itself starts generating findings within the first week.
Do we need to do a penetration test before launching a bug bounty program?
Not strictly, but it is advisable for first-time programs. A penetration test before your bug bounty launch fixes the most obvious vulnerabilities so that your researcher community encounters a more interesting, less trivially broken scope. This raises the quality of findings and makes your program more rewarding for skilled researchers. Think of it as cleaning the house before inviting guests: you will have more productive conversations.
What if a researcher finds a vulnerability we already know about?
If the vulnerability is on your known and scheduled remediation list, you have two options: include a 'known issues' list in your program scope (which tells researchers not to submit findings you are already aware of), or treat it as a valid finding and pay the reward, because a second source of confirmation for a known issue is still operationally valuable. We recommend the latter for critical and high findings; the former for medium and low.
How do we handle a researcher who wants to disclose publicly before we have patched?
This is why your policy's disclosure timeline matters. If the researcher agreed to a 90-day disclosure timeline and you are within that window, you have time to remediate. If you are approaching the deadline and have not patched, your options are: request an extension (researchers will usually agree for reasonable causes), coordinate a public disclosure that does not include exploitable technical details, or expedite the patch. Never threaten a researcher with legal action for following the disclosure terms you published — this destroys your reputation permanently in the research community.
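The 90-day window described here is easy to track programmatically so the deadline never arrives as a surprise. A sketch; function names are illustrative:

```python
from datetime import date, timedelta

def disclosure_deadline(reported: date, window_days: int = 90) -> date:
    """Date after which the researcher may disclose publicly under a
    90-day policy (extendable only by mutual agreement)."""
    return reported + timedelta(days=window_days)

def days_left_to_patch(reported: date, today: date) -> int:
    """Remaining days before permitted public disclosure."""
    return (disclosure_deadline(reported) - today).days
```

Alerting the remediation owner at, say, 30 and 14 days remaining leaves time to request an extension or coordinate disclosure rather than scrambling at the deadline.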
Should we pay a researcher who finds a critical vulnerability outside our defined scope?
Yes, with a goodwill payment or a certificate of appreciation, not necessarily the full critical reward. A researcher who finds a critical vulnerability in an out-of-scope asset has done you a genuine service. Rejecting the finding entirely because of a scope technicality is both ethically questionable and strategically unwise: it sends a signal to the researcher community that your program prioritises technicalities over security outcomes.
Ready to launch your first bug bounty program?
Com Olho runs India's most active bug bounty platform. We have helped organisations across BFSI, healthcare, e-commerce, manufacturing and enterprise technology launch their first programs, typically within 2–3 weeks of kickoff, with full managed triage support and an INR escrow payment system built for Indian researchers.
Schedule a free 30-minute consultation and we will review your scope, suggest a reward structure for your sector, and give you a realistic timeline for your first live program.