ThinkReview introduces Gemini 3.1 Flash-Lite

We're excited to announce that Gemini 3.1 Flash-Lite is now available in ThinkReview. Google's newest model is built for intelligence at scale—their fastest and most cost-efficient Gemini 3 series model. You can now use it for AI-powered code reviews on GitHub, GitLab, Azure DevOps, and Bitbucket.

What is Gemini 3.1 Flash-Lite?

Gemini 3.1 Flash-Lite is designed for high-volume developer workloads. Priced at $0.25 per 1M input tokens and $1.50 per 1M output tokens, it delivers strong quality at a fraction of the cost of larger models. According to Google, it achieves a 2.5× faster time to first answer token and 45% higher output speed than 2.5 Flash while maintaining similar or better quality in benchmarks. It also reaches an Elo score of 1432 on the Arena.ai leaderboard and performs well on reasoning and multimodal tasks, scoring 86.9% on GPQA Diamond and 76.8% on MMMU Pro, surpassing even larger Gemini models from earlier generations.

For code review, that means faster reviews, lower cost per review, and reliable quality when you're running many reviews per day.
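To make the pricing concrete, here is a minimal sketch of the per-review cost arithmetic using the per-token rates quoted above. The token counts in the example are illustrative assumptions, not ThinkReview measurements; real diffs vary widely.

```python
# Rough cost estimate for one AI code review at Gemini 3.1 Flash-Lite pricing:
# $0.25 per 1M input tokens, $1.50 per 1M output tokens (rates quoted above).
INPUT_PRICE_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.50  # USD per 1M output tokens

def review_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a single review."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Assumed example: a mid-sized diff (~20k tokens in, ~2k tokens of review out)
cost = review_cost(20_000, 2_000)
print(f"${cost:.4f} per review")             # $0.0080
print(f"${cost * 500:.2f} per 500 reviews")  # $4.00
```

At these rates, even hundreds of reviews a day stay in single-digit dollars, which is the point of choosing a Lite-class model for high-volume workflows.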

Why use Gemini 3.1 Flash-Lite in ThinkReview

  • Fast — Lower latency and higher throughput than 2.5 Flash, so reviews feel snappy even on large diffs.
  • Quality you can trust — Strong benchmark results on reasoning and understanding; well-suited for security checks, best practices, and actionable suggestions.
  • Built for scale — Google built 3.1 Flash-Lite for high-frequency workflows; ThinkReview brings that same efficiency to your PR workflow.

Whether you're on Professional, Lite, or Teams, you can choose Gemini 3.1 Flash-Lite when you want fast, affordable reviews without sacrificing quality.

How to use it in ThinkReview

  1. Open ThinkReview settings — Click the extension icon and go to Settings or Model selection.
  2. Choose Gemini 3.1 Flash-Lite — Select it from the model dropdown for your reviews.
  3. Run a review — Open any pull request and click AI Review. The review will use Gemini 3.1 Flash-Lite.

You can switch between models at any time. Use 3.1 Flash-Lite for high-volume or cost-sensitive workflows, and switch to Pro or other models when you need maximum depth on complex changes.

Where it works

Same as always: GitHub, GitLab, Azure DevOps, and Bitbucket Cloud. One extension, one workflow, with more model choice—including Google's most cost-effective Gemini 3 model yet.

If you try Gemini 3.1 Flash-Lite in ThinkReview and have feedback, we'd love to hear it—open an issue on GitHub or reach out via thinkreview.dev.

Ready to try faster, more cost-efficient AI code reviews? Try ThinkReview free or manage your models in the portal.