
Competitive Analysis: Buy-Now Pay-Later Products

  • Writer: Priank Ravichandar
  • Nov 3, 2025
  • 6 min read

Conducting a competitive analysis on Buy-Now Pay-Later (BNPL) products in the EU market using an AI-assisted workflow.




Context

There are many new startups emerging in the B2C Buy-Now Pay-Later (BNPL) space. This project explores how AI can be used in competitive research and analysis. We want to understand the key features offered by competitor products in order to identify potential improvements and/or feature additions, and to compile a competitive intelligence report that highlights each competitor product’s high-level capabilities and the specific functionality that might be worth integrating into our own product.


Objective

The overall objective is to establish a reproducible competitive analysis workflow that can be used to quickly and reliably analyze competitor companies and their products' features and functionality.


Workflow


Tools: ChatGPT, Claude, Gemini, Grammarly, Perplexity


Step 1: Define Requirements


We outline the requirements for the competitive intelligence report. We want to generate a comprehensive report with the following sections:


  1. Executive Summary: A one-page overview outlining the BNPL market, along with each company's positioning, target audience, and core value proposition.

  2. Market Overview: BNPL market size, user base, transaction volume, historic/current trends, and projected growth.

  3. Company Overview: Breakdown of each company, its product, and its features.

  4. Product Competitive Analysis: Comparison of the features and functionality of the three companies’ products (Klarna, PayPal Later, and Afterpay).

  5. Strategic Implications: Key lessons from each product to guide the design of a BNPL product.


Step 2: Compile Report


We construct a prompt to generate the report and use two methods (to compare results): GPT-5.1 (Research Mode) and Perplexity (using Claude Sonnet 4.5). The goal is to evaluate the outputs to identify the best tool for generating these reports in the future.


Step 2A: Craft a competitive research prompt


We construct an initial prompt with basic requirements. We optimize this prompt using meta-prompting with Claude Sonnet 4.5. We review and refine the generated prompt as needed. Refer to my meta prompting resource for more details on how to do this.


Note: The initial and optimized analysis prompts can be found here.
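
For anyone who prefers to script this step instead of using the chat UI, the meta-prompting pass can also be run through the Anthropic API. The sketch below is a minimal illustration, assuming the `anthropic` Python SDK, an API key in the environment, and a placeholder model ID; the `META_PROMPT` wrapper text is a hypothetical stand-in for the actual meta-prompting instructions.

```python
# Minimal sketch of the meta-prompting step via the Anthropic API.
# Assumptions: the `anthropic` SDK is installed, ANTHROPIC_API_KEY is set in
# the environment, and the model ID below is a placeholder.
import anthropic

META_PROMPT = """You are a prompt engineer. Improve the prompt below for a
competitive intelligence report on EU BNPL products. Make the requirements
explicit, add an output structure, and keep the scope limited to the EU market.

<initial_prompt>
{initial_prompt}
</initial_prompt>

Return only the optimized prompt."""

def optimize_prompt(initial_prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-5",   # placeholder model ID
        max_tokens=2000,
        messages=[{"role": "user",
                   "content": META_PROMPT.format(initial_prompt=initial_prompt)}],
    )
    return response.content[0].text

if __name__ == "__main__":
    draft = "Compare Klarna, PayPal Later, and Afterpay for the EU BNPL market."
    print(optimize_prompt(draft))
```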


Step 2B: Compile competitive intelligence report using GPT-5.1 Research


We run the optimized prompt in GPT-5.1 Research to generate the competitive intelligence report. We review and refine the AI-generated output as needed.


Step 2C: Compile competitive intelligence report using Perplexity (Model → Sonnet 4.5)


We run the optimized prompt in Perplexity (with Claude Sonnet 4.5) to generate the competitive intelligence report. We set the Sources to Web and Financial.


Step 2D: Manually refine and review the competitive intelligence reports generated from each tool.


We review and refine the AI-generated output as needed. Even with multiple iterations of the optimized prompt, the quality of the report was not great.


Issues Observed

  • Conciseness (e.g., generated reports longer than specified).

  • Grammatical Errors (e.g., 200+ errors flagged by Grammarly).

  • Repetition (e.g., information duplicated across multiple sections).

  • Mixing Geographies (e.g., including information only relevant to Afterpay US).

  • Conflating Facts (e.g., assuming PayPal’s transaction volume is the same as PayPal Later’s volume).


Even after explicitly addressing these issues in the prompt, they persisted, so extensive manual editing was required. This highlights the need to manually review AI-generated reports.


Outputs


Step 3: Compare Reports Generated


Step 3A: Compare the competitive intelligence reports generated from each tool.


Observations From Manual Review

  • Both ChatGPT and Perplexity reports cover all the critical details needed for competitive research.

  • The Perplexity report required a lot more manual editing than the ChatGPT report, since the model explained topics in greater depth than needed.

  • The Perplexity report provided better strategic insights (in its “Strategic Implications” section) than the ChatGPT report.

  • The Perplexity report might be preferable if the team were interested in quickly surfacing specific actionable insights to explore in more detail.

  • The ChatGPT report might be preferable if the team were interested in quickly getting a general overview of the market and the competitors.


Step 3B: Use AI to compare the competitive intelligence reports generated from each tool.


We can take the comparison a step further by using AI to highlight the key differences. This method can be especially helpful when comparing long reports because the AI can flag strengths/weaknesses that may have been missed during the manual review.


We can use a simple prompt in Claude to extract the basic report requirements (in a format that an AI model can understand) from our optimized prompts in Step 2A. We edit the generated requirements based on our specific goals (what we plan to do with the report).


Report Requirements Prompt


Here is a prompt used to generate a competitive intelligence report. I have generated the reports using this prompt via two AI tools. I want to distill the quality requirements from this prompt so I can use them to compare the reports. Generate a concise list of critical requirements that can be used to compare these reports.

<prompt> Enter Prompt </prompt>
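
The same extraction can be scripted if preferred. Below is a minimal sketch under the same assumptions as the earlier example (the `anthropic` SDK and a placeholder model ID); `report_generation_prompt` stands in for the optimized prompt from Step 2A.

```python
# Minimal sketch: distill comparison requirements from the report-generation
# prompt, assuming the `anthropic` SDK and a placeholder model ID.
import anthropic

EXTRACTION_PROMPT = """Here is a prompt used to generate a competitive
intelligence report. I have generated the reports using this prompt via two
AI tools. Generate a concise list of critical requirements that can be used
to compare these reports.

<prompt>
{report_generation_prompt}
</prompt>"""

def extract_requirements(report_generation_prompt: str) -> str:
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model ID
        max_tokens=1000,
        messages=[{"role": "user",
                   "content": EXTRACTION_PROMPT.format(
                       report_generation_prompt=report_generation_prompt)}],
    )
    return response.content[0].text
```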

Next, we can use another prompt to compare the reports based on these requirements and generate a recommendation for a report to use.


Report Comparison Prompt


You are an expert BNPL domain analyst and competitive intelligence reviewer. You will evaluate two competitive intelligence reports on the EU BNPL market covering Klarna, PayPal Later, and Afterpay. One report was generated using GPT-5.1 Research, and the other using Perplexity (Claude Sonnet 4.5). Both reports have undergone substantial editing.

# Task
1. Assess each report against the requirements provided in `<report_requirements>`.
2. Identify strengths and weaknesses for each report.
3. Provide a concise list of observations for each report.
4. Recommend which report should be used and justify your choice.

## Output Format

- **Report A Observations:**
  - Strengths
  - Weaknesses
- **Report B Observations:**
  - Strengths
  - Weaknesses
- **Recommendation:**
  - Chosen report
  - Rationale
  
# Requirements 
<report_requirements>Enter Requirements</report_requirements>

Note: Models can sometimes favor outputs generated by their own model family. Therefore, to get a balanced analysis, we run the comparison using ChatGPT, Claude, and Gemini, with Gemini the most likely to be neutral since it did not create either report. Below is a summarized version of the results.
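
As a rough illustration of how this multi-model comparison could be scripted, the sketch below sends the same comparison prompt to all three providers and collects each recommendation side by side. It assumes the `openai`, `anthropic`, and `google-generativeai` SDKs with API keys in the environment; the model IDs are placeholders, and `comparison_prompt` stands in for the prompt above with the requirements and both reports filled in.

```python
# Rough sketch: run the same report-comparison prompt against three providers.
# Assumptions: the openai, anthropic, and google-generativeai SDKs are installed,
# API keys are set via environment variables, and the model IDs are placeholders.
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

def ask_openai(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-5.1",  # placeholder model ID
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model ID
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def ask_gemini(prompt: str) -> str:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-2.5-pro")  # placeholder model ID
    return model.generate_content(prompt).text

def run_comparison(comparison_prompt: str) -> dict[str, str]:
    # Collect each judge's recommendation so self-preference can be spotted.
    return {
        "chatgpt": ask_openai(comparison_prompt),
        "claude": ask_claude(comparison_prompt),
        "gemini": ask_gemini(comparison_prompt),
    }
```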


Comparison Results From AI Review

  • Detailed Requirement Comparison Results

    • ChatGPT Recommendation: ChatGPT Report

    • Claude Recommendation: ChatGPT Report

    • Gemini Recommendation: Perplexity Report

  • Key reasons for choosing the Perplexity Report (Claude)

    • The 1-page executive summary with clear TL;DR positioning makes the Perplexity report immediately usable for stakeholder presentations.

    • The Perplexity report's strategic implications section provides specific, actionable recommendations versus the GPT report's more generic guidance.

  • Key reasons for choosing the ChatGPT Report (ChatGPT, Gemini)

    • The GPT Report provides richer, product-level feature tables that are directly actionable for product teams.

    • While the Perplexity Report contains several useful strategic points, it shows factual inconsistencies and lighter per-company technical depth.


We see here that both Claude and ChatGPT recommended the report generated by their own model. The models did not highlight any critical points that were not already observed during the manual analysis. Since the overall goal was to understand competitor products’ high-level capabilities and specific functionality, we chose the ChatGPT report: it required less editing, and the quality of its “Strategic Implications” section can likely be improved through further prompt optimization (if necessary).


(Optional) Step 4: Generate A Presentation


A competitive intelligence report can be great for outlining competitor companies, products, and features in depth. However, most team members, stakeholders, and executives don’t always have the time to read through a report. Therefore, it can be helpful to create a presentation to highlight the key details. This lets people quickly grasp what’s important, while still having the option to find more information in the report (if they need it).


Step 4A: Craft a presentation generation prompt


We construct an initial prompt with basic requirements. We optimize this prompt using meta-prompting with Claude Sonnet 4.5. We review and refine the generated prompt as needed. Refer to my meta prompting resource for more details on how to do this.


Note: The initial and optimized analysis prompts can be found here.


Step 4B: Generate a competitive intelligence presentation in Claude


We run the optimized prompt in Claude (Sonnet 4.5) to generate the competitive intelligence presentation. We review and refine the AI-generated output as needed.
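
If the presentation content drafted by Claude needs to be turned into an actual slide deck file, a library such as python-pptx is one option (not part of the original workflow). The sketch below assumes the slide content has already been extracted as a list of title/bullet pairs; the titles, bullets, and output filename shown are hypothetical.

```python
# Sketch: convert drafted slide content into a .pptx file with python-pptx.
# Assumes `pip install python-pptx`; slide content below is hypothetical.
from pptx import Presentation

slides_content = [
    ("EU BNPL Competitive Landscape", ["Klarna", "PayPal Later", "Afterpay"]),
    ("Strategic Implications", ["Key lesson 1", "Key lesson 2"]),
]

prs = Presentation()
layout = prs.slide_layouts[1]  # title + content layout in the default template
for title, bullets in slides_content:
    slide = prs.slides.add_slide(layout)
    slide.shapes.title.text = title
    body = slide.placeholders[1].text_frame
    body.text = bullets[0]          # first bullet goes in the existing paragraph
    for bullet in bullets[1:]:      # remaining bullets get new paragraphs
        paragraph = body.add_paragraph()
        paragraph.text = bullet
prs.save("bnpl_competitive_overview.pptx")
```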


Final Outputs


Note: I experimented with using Claude to create visuals to add to the presentation, but the quality was not great, so they were not included in the report or presentation.


Conclusion


This project explored how AI tools can streamline competitive intelligence workflows. By systematically comparing GPT-5.1 Research and Perplexity (Claude Sonnet 4.5), we identified a reproducible framework for generating comprehensive competitor reports while uncovering critical quality control requirements.


AI significantly accelerates competitive research but requires substantial manual oversight. Both tools successfully compiled detailed BNPL market reports covering Klarna, PayPal Later, and Afterpay. However, persistent quality issues meant extensive editing was necessary even after prompt optimization.


The workflow established here provides a reproducible framework adaptable across industries and research objectives. Success depends on treating AI outputs as research drafts requiring validation rather than final deliverables, and on building institutional knowledge about tool performance patterns through systematic testing and documentation.


