
Upgrade to Next.js 16 and Gemini 3.1 Pro Enhancements#593

Open
ngoiyaeric wants to merge 1 commit into main from feature/nextjs16-gemini31-upgrade

Conversation

@ngoiyaeric
Collaborator

@ngoiyaeric ngoiyaeric commented May 7, 2026

This PR upgrades the core framework to Next.js 16 and React 19.2, enables the React Compiler, and enhances the Gemini 3.1 Pro integration with the custom-tools endpoint and context caching.

Summary by CodeRabbit

  • New Features

    • Enhanced AI reasoning capabilities with step-by-step analysis approach
    • Conversation context caching for improved response efficiency
  • Performance

    • Upgraded to Next.js 16 and React 19.2 with improved routing and performance
    • Enabled React compiler and component caching optimizations
  • Documentation

    • Added platform upgrade roadmap covering next-generation enhancements

@vercel
Contributor

vercel Bot commented May 7, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Actions Updated (UTC)
qcx Error Error May 7, 2026 4:38am

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


Dev does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@coderabbitai
Contributor

coderabbitai Bot commented May 7, 2026

Walkthrough

This PR upgrades to Next.js 16 with React 19.2.0, enables Turbopack and React compiler, switches to Gemini 3.1 Pro with custom-tools support via conditional model selection, adds in-memory context caching for conversation state, and enhances agent reasoning prompts with step-by-step guidance and multi-step analysis workflows.

Changes

Next.js 16 & Gemini 3.1 Pro Integration Upgrade

Layer / File(s) Summary
Package & Config Foundation
package.json, next.config.mjs, tsconfig.json
Next.js bumped to ^16.2.0, React/React-DOM to ^19.2.0 with matching types; Turbopack root and React compiler enabled; JSX emit switches to react-jsx.
Routing & Middleware
proxy.ts
Exported function renamed from middleware to proxy; request-handling logic (auth gating, redirect logic) remains unchanged.
Model Selection Logic
lib/utils/index.ts
getModel() adds new usesTools parameter to conditionally select between gemini-3.1-pro-preview-customtools and gemini-3.1-pro-preview Gemini models.
Context Caching Infrastructure
lib/utils/context-cache.ts
New in-memory cache for Gemini 3.1 Pro stores serialized message prefixes (~80% of conversation), system prompts, and TTL metadata; provides retrieval, invalidation, and reporting exports with automatic LRU-based eviction.
Agent Prompt & Model Enhancements
lib/agents/researcher.tsx, lib/agents/resolution-search.tsx
System prompts expanded with "Enhanced Reasoning" and "Chain-of-Thought" sections; multi-step analysis workflows added; researcher now calls getModel(hasImage, true) to use the custom-tools model variant.
Documentation & Planning
NEXTJS_GEMINI_RESEARCH_REPORT.md
New research report documenting Next.js 16 and Gemini 3.1 Pro upgrade recommendations, three-phase QCX platform implementation roadmap, and external reference links.
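The conditional model selection described in the table above can be sketched as follows. This is a simplified stand-in: the real getModel() in lib/utils/index.ts also dispatches across other providers (Grok, OpenAI, Bedrock) and returns a model instance rather than an id string, and the function name here is hypothetical.

```typescript
// Simplified sketch of the usesTools branch added to getModel().
// Returns only the Gemini model id; the real function wraps this in a
// provider client and handles non-Gemini configurations as well.
export function selectGeminiModelId(usesTools: boolean): string {
  return usesTools
    ? 'gemini-3.1-pro-preview-customtools' // variant used by the researcher agent, which always registers tools
    : 'gemini-3.1-pro-preview'             // default variant for tool-free calls
}
```

The researcher agent's call site, getModel(hasImage, true), corresponds to the usesTools=true branch.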

Sequence Diagram

sequenceDiagram
    participant Client
    participant Agent as Agent<br/>(researcher)
    participant CacheUtil as Context<br/>Cache
    participant ModelUtil as getModel()
    participant LLM as Gemini 3.1 Pro

    Client->>Agent: Messages + System Prompt
    Agent->>CacheUtil: cacheConversationContext(...)
    CacheUtil->>CacheUtil: Serialize first ~80% of messages,<br/>store with TTL & timestamp
    activate CacheUtil
    Note over CacheUtil: In-memory Map keyed by<br/>conversationId + messageCount
    deactivate CacheUtil
    
    Agent->>ModelUtil: getModel(hasImage, true)
    Note over ModelUtil: usesTools=true selects<br/>gemini-3.1-pro-preview-customtools
    ModelUtil-->>Agent: Custom-tools model variant
    
    Agent->>LLM: Stream with enhanced reasoning prompt<br/>+ tools + cached context
    LLM-->>Agent: Reasoning steps + tool calls
    Agent->>CacheUtil: clearConversationCache(...)<br/>(on completion/expiry)
    CacheUtil->>CacheUtil: Remove cached context
    Agent-->>Client: Final result

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related issues

  • System improvements #592: This PR directly implements the technical recommendations outlined in the retrieved research report, including Next.js 16 and React version upgrades, Turbopack/React compiler enablement, middleware→proxy rename, Gemini 3.1 Pro custom-tools model selection, context caching integration, and enhanced agent reasoning prompts.

Possibly related PRs

  • QueueLab/QCX#584: Both modify getModel() logic to select Gemini 3.x model IDs—this PR adds a usesTools flag to choose the custom-tools variant, while the retrieved PR updates the baseline Gemini model selection.
  • QueueLab/QCX#589: Both enhance lib/agents/resolution-search.tsx system prompts with expanded reasoning workflows and temporal/news integration logic.
  • QueueLab/QCX#304: Both modify lib/agents/researcher.tsx system prompts and tool-related behavior to improve reasoning and agent decision-making.

Suggested labels

enhancement, dependencies, ai-reasoning, next.js-upgrade

Suggested reviewers

  • CharlieHelps

🐰 Next.js sixteen hops, with Gemini's mind so keen,
Context caches whisper secrets in between,
Enhanced reasoning flows, step by thoughtful step,
Tools await with custom prep—
The future's here, and what a tech-savvy scene! 🚀✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 60.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and concisely summarizes the main changes: upgrading to Next.js 16 and enhancing Gemini 3.1 Pro integration, which aligns with the primary objectives across package.json, next.config.mjs, lib/utils/index.ts, and agent enhancements.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
📝 Generate docstrings
  • Create stacked PR
  • Commit on current branch
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch feature/nextjs16-gemini31-upgrade

Warning

Review ran into problems

🔥 Problems

Git: Failed to clone repository. Please run the @coderabbitai full review command to re-trigger a full review. If the issue persists, set path_filters to include or exclude specific files.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@qodo-code-review
Contributor

Review Summary by Qodo

Upgrade to Next.js 16, React 19.2, and enhance Gemini 3.1 Pro with context caching

✨ Enhancement


Walkthroughs

Description
• Upgrade Next.js to v16.2 and React to v19.2 with React Compiler enabled
• Add context caching utility for Gemini 3.1 Pro to reduce token usage
• Integrate custom tools endpoint for improved agent tool-calling reliability
• Enhance system prompts with advanced reasoning approach for geospatial analysis
• Update configuration for Next.js 16 standards (turbopack, cacheComponents, proxy)
Diagram
flowchart LR
  A["Next.js 16.2<br/>React 19.2"] --> B["React Compiler<br/>Enabled"]
  A --> C["Turbopack<br/>Stable"]
  D["Gemini 3.1 Pro"] --> E["Custom Tools<br/>Endpoint"]
  D --> F["Context Caching<br/>Utility"]
  E --> G["Improved Agent<br/>Tool Calling"]
  F --> H["Reduced Token<br/>Usage"]
  B --> I["Auto Component<br/>Optimization"]
  G --> J["Enhanced Geospatial<br/>Analysis"]
  H --> J


File Changes

1. lib/utils/context-cache.ts ✨ Enhancement +114/-0

New context caching utility for token optimization

lib/utils/context-cache.ts


2. lib/utils/index.ts ✨ Enhancement +5/-3

Add custom tools endpoint support to getModel

lib/utils/index.ts


3. proxy.ts ⚙️ Configuration changes +1/-1

Rename middleware to proxy for Next.js 16

proxy.ts


4. NEXTJS_GEMINI_RESEARCH_REPORT.md 📝 Documentation +75/-0

Strategic upgrade roadmap for Next.js 16 and Gemini 3.1

NEXTJS_GEMINI_RESEARCH_REPORT.md


5. lib/agents/researcher.tsx ✨ Enhancement +15/-2

Enhance system prompt with reasoning approach and custom tools

lib/agents/researcher.tsx


6. lib/agents/resolution-search.tsx ✨ Enhancement +19/-9

Add advanced chain-of-thought reasoning to geospatial analysis

lib/agents/resolution-search.tsx


7. next.config.mjs ⚙️ Configuration changes +10/-0

Enable React Compiler and configure Turbopack for v16

next.config.mjs


8. package.json Dependencies +7/-6

Update Next.js, React, and add React Compiler dependencies

package.json


9. tsconfig.json ⚙️ Configuration changes +3/-2

Update JSX and include Next.js dev types for v16

tsconfig.json



@qodo-code-review
Contributor

qodo-code-review Bot commented May 7, 2026

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (0)



Action required

1. Middleware never runs 🐞 Bug ≡ Correctness
Description
proxy.ts exports proxy() instead of exporting middleware() from a middleware.ts entrypoint,
so Next.js will not execute this code as middleware. Any request-handling logic in this file
(server-action bypass, redirects, future auth) becomes a no-op at runtime.
Code

proxy.ts[4]

+export function proxy(request: NextRequest) {
Evidence
The PR changes the exported handler to proxy() in proxy.ts; there is no middleware.ts
entrypoint in the repo, and proxy.ts is not referenced elsewhere, so the middleware behavior is
effectively removed.

proxy.ts[1-24]
Best Practice: Next.js Middleware conventions

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`proxy.ts` will not be executed as Next.js middleware. Next.js middleware must live in `middleware.ts` (or `middleware.js`) and export a function named `middleware` (or a default export).

## Issue Context
This PR appears to have renamed middleware to align with a perceived Next.js 16 change, but Next.js still uses the `middleware.*` convention. As a result, the redirect/auth logic is silently disabled.

## Fix Focus Areas
- proxy.ts[1-24]

## Suggested fix
- Rename `proxy.ts` back to `middleware.ts`.
- Rename `export function proxy` back to `export function middleware` (or `export default function middleware`).
- If you want to keep a `proxy.ts` helper, keep the logic there but re-export it from `middleware.ts`:
 - `import { proxy } from './proxy'`
 - `export function middleware(req) { return proxy(req) }`
- Ensure the `config.matcher` remains exported from the middleware entrypoint.
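A minimal TypeScript sketch of the re-export option suggested above. The request type and handler body are hypothetical stand-ins (the real file imports NextRequest from 'next/server' and returns a NextResponse); only the export-name pattern is the point.

```typescript
// Hypothetical stand-in for NextRequest, so the sketch is self-contained.
type RequestLike = { nextUrl: { pathname: string } }

// Placeholder for the handler currently exported from proxy.ts; the repo's
// actual redirect/auth-gating logic would live here unchanged.
export function proxy(request: RequestLike): 'redirect' | 'next' {
  return request.nextUrl.pathname.startsWith('/auth') ? 'redirect' : 'next'
}

// middleware.ts entrypoint: Next.js only executes a function named `middleware`
// (or a default export) from middleware.ts, so re-export the proxy logic here.
export function middleware(request: RequestLike): 'redirect' | 'next' {
  return proxy(request)
}
```

With this shape, proxy.ts can keep the logic while middleware.ts restores the entrypoint Next.js actually looks for.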



2. ESLint/Next peer mismatch 🐞 Bug ☼ Reliability
Description
eslint-config-next is upgraded to 16.x while eslint remains on 8.x, but the resolved
eslint-config-next@16.2.5 declares a peer dependency of eslint >=9.0.0. This can cause
dependency resolution failures or forced peer overrides depending on your npm/CI settings.
Code

package.json[R112-116]

+    "babel-plugin-react-compiler": "^1.0.0",
    "cross-env": "^7.0.3",
    "eslint": "^8.57.1",
-    "eslint-config-next": "^14.2.28",
+    "eslint-config-next": "^16.x",
    "postcss": "^8.5.3",
Evidence
package.json keeps eslint at v8 while moving eslint-config-next to a v16 range. The checked-in
lockfile resolves eslint-config-next to 16.2.5 and shows it requires eslint >=9, creating a peer
dependency conflict.

package.json[112-118]
package-lock.json[11713-11733]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`eslint-config-next@16.x` requires `eslint >=9`, but the repo is pinned to `eslint ^8.57.1`. This may break installs (ERESOLVE) or require disabling peer checks.

## Issue Context
The lockfile already resolves `eslint-config-next` to 16.2.5 and declares the peer requirement, so the mismatch is present in the committed dependency set.

## Fix Focus Areas
- package.json[112-118]
- package-lock.json[11713-11733]

## Suggested fix
Choose one:
1) Upgrade eslint to a compatible major:
- Set `"eslint": "^9.x"` (and adjust any lint config changes needed).

2) Keep eslint v8:
- Downgrade/pin `eslint-config-next` to a version that supports eslint v8.

Then regenerate `package-lock.json` to reflect the consistent set.




Remediation recommended

3. Context cache never hits 🐞 Bug ≡ Correctness
Description
The new context cache is keyed by ${conversationId}:${messageCount}, but
cacheConversationContext() always stores under the current messages.length, so subsequent turns
(different messageCount) will almost always miss and provide no benefit. Additionally,
clearConversationCache() uses startsWith(conversationId), which can delete caches for other
conversations with IDs sharing the same prefix.
Code

lib/utils/context-cache.ts[R24-96]

+function generateCacheKey(conversationId: string, messageCount: number): string {
+  return `${conversationId}:${messageCount}`;
+}
+
+/**
+ * Cache conversation context to reduce token usage
+ */
+export function cacheConversationContext(
+  conversationId: string,
+  messages: CoreMessage[],
+  systemPrompt: string
+): void {
+  const messageCheckpoint = messages.length;
+  const cacheKey = generateCacheKey(conversationId, messageCheckpoint);
+
+  // Create a serialized prefix of the conversation
+  const cachedPrefix = messages
+    .slice(0, Math.floor(messages.length * 0.8)) // Cache first 80% of messages
+    .map(m => `${m.role}: ${typeof m.content === 'string' ? m.content : '[complex content]'}`)
+    .join('\n');
+
+  const cachedContext: CachedContext = {
+    conversationId,
+    messageCheckpoint,
+    cachedPrefix,
+    systemPrompt,
+    timestamp: Date.now(),
+    ttl: CACHE_TTL,
+  };
+
+  contextCache.set(cacheKey, cachedContext);
+
+  // Cleanup old entries if cache exceeds max size
+  if (contextCache.size > MAX_CACHE_SIZE) {
+    const oldestKey = Array.from(contextCache.entries())
+      .sort((a, b) => a[1].timestamp - b[1].timestamp)[0][0];
+    contextCache.delete(oldestKey);
+  }
+}
+
+/**
+ * Retrieve cached context if available and not expired
+ */
+export function getCachedContext(
+  conversationId: string,
+  messageCount: number
+): CachedContext | null {
+  const cacheKey = generateCacheKey(conversationId, messageCount);
+  const cached = contextCache.get(cacheKey);
+
+  if (!cached) return null;
+
+  // Check if cache has expired
+  if (Date.now() - cached.timestamp > cached.ttl) {
+    contextCache.delete(cacheKey);
+    return null;
+  }
+
+  return cached;
+}
+
+/**
+ * Clear cache for a specific conversation
+ */
+export function clearConversationCache(conversationId: string): void {
+  const keysToDelete: string[] = [];
+  for (const [key] of contextCache.entries()) {
+    if (key.startsWith(conversationId)) {
+      keysToDelete.push(key);
+    }
+  }
+  keysToDelete.forEach(key => contextCache.delete(key));
+}
Evidence
The key includes messageCount and is used both for writes and reads, so any change in message length
changes the key and prevents reuse. The deletion logic performs a prefix match on the full key
string, which is unsafe when conversation IDs are not uniquely prefix-free.

lib/utils/context-cache.ts[24-38]
lib/utils/context-cache.ts[67-83]
lib/utils/context-cache.ts[88-96]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The cache design is unlikely to produce hits across conversation turns because the key changes every time message count changes. Cache clearing can also delete entries for other conversations due to `startsWith()` prefix collisions.

## Issue Context
This file is newly introduced and currently appears unused, so fixing now prevents baking in broken behavior when it gets integrated.

## Fix Focus Areas
- lib/utils/context-cache.ts[24-38]
- lib/utils/context-cache.ts[67-83]
- lib/utils/context-cache.ts[88-96]

## Suggested fix
- Make the key stable per conversation + checkpoint, not per exact messageCount. Options:
 - Key by `conversationId` only and store the latest checkpoint inside the value.
 - Or key by `conversationId:${checkpoint}` where `checkpoint` is the cached prefix length (e.g., `Math.floor(messages.length * 0.8)`) and look up the most recent checkpoint <= current messageCount.
- Fix clear logic to match `conversationId` exactly:
 - Use `key.startsWith(conversationId + ':')`, or
 - Split keys on `:` and compare the first segment.
- (Optional) Consider periodic cleanup of expired entries during `getCachedContext()` or via a lightweight sweep to avoid accumulating expired entries until they’re accessed.
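A minimal TypeScript sketch of that keying scheme (TTL and LRU eviction omitted for brevity, and CoreMessage reduced to plain strings). Here a "checkpoint" is the length of the cached prefix (~80% of the messages), so the key stays stable while the tail of the conversation grows, and clearing matches on the separator to avoid prefix collisions.

```typescript
type CachedContext = { checkpoint: number; cachedPrefix: string; timestamp: number }

const contextCache = new Map<string, CachedContext>()

function checkpointFor(messageCount: number): number {
  return Math.floor(messageCount * 0.8) // cache the first ~80% of messages
}

function cacheKey(conversationId: string, checkpoint: number): string {
  return `${conversationId}:${checkpoint}`
}

export function cacheConversationContext(conversationId: string, messages: string[]): void {
  const checkpoint = checkpointFor(messages.length)
  contextCache.set(cacheKey(conversationId, checkpoint), {
    checkpoint,
    cachedPrefix: messages.slice(0, checkpoint).join('\n'),
    timestamp: Date.now(),
  })
}

export function getCachedContext(conversationId: string, messageCount: number): CachedContext | null {
  // Walk back to the most recent checkpoint at or below the current count,
  // so a later turn still reuses a prefix cached on an earlier turn.
  for (let cp = checkpointFor(messageCount); cp >= 0; cp--) {
    const hit = contextCache.get(cacheKey(conversationId, cp))
    if (hit) return hit
  }
  return null
}

export function clearConversationCache(conversationId: string): void {
  for (const key of contextCache.keys()) {
    // Match on the separator so clearing "conv-1" cannot delete "conv-10:*".
    if (key.startsWith(conversationId + ':')) contextCache.delete(key)
  }
}
```

This addresses both findings: cache hits survive across turns, and clearing is scoped to exactly one conversation.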




Comment thread proxy.ts
import type { NextRequest } from "next/server"

-export function middleware(request: NextRequest) {
+export function proxy(request: NextRequest) {
Contributor


Action required

1. Middleware never runs 🐞 Bug ≡ Correctness

proxy.ts exports proxy() instead of exporting middleware() from a middleware.ts entrypoint,
so Next.js will not execute this code as middleware. Any request-handling logic in this file
(server-action bypass, redirects, future auth) becomes a no-op at runtime.
Agent Prompt
## Issue description
`proxy.ts` will not be executed as Next.js middleware. Next.js middleware must live in `middleware.ts` (or `middleware.js`) and export a function named `middleware` (or a default export).

## Issue Context
This PR appears to have renamed middleware to align with a perceived Next.js 16 change, but Next.js still uses the `middleware.*` convention. As a result, the redirect/auth logic is silently disabled.

## Fix Focus Areas
- proxy.ts[1-24]

## Suggested fix
- Rename `proxy.ts` back to `middleware.ts`.
- Rename `export function proxy` back to `export function middleware` (or `export default function middleware`).
- If you want to keep a `proxy.ts` helper, keep the logic there but re-export it from `middleware.ts`:
  - `import { proxy } from './proxy'`
  - `export function middleware(req) { return proxy(req) }`
- Ensure the `config.matcher` remains exported from the middleware entrypoint.


Comment thread package.json
Comment on lines +112 to 116
+    "babel-plugin-react-compiler": "^1.0.0",
     "cross-env": "^7.0.3",
     "eslint": "^8.57.1",
-    "eslint-config-next": "^14.2.28",
+    "eslint-config-next": "^16.x",
     "postcss": "^8.5.3",
Contributor


Action required

2. ESLint/Next peer mismatch 🐞 Bug ☼ Reliability

eslint-config-next is upgraded to 16.x while eslint remains on 8.x, but the resolved
eslint-config-next@16.2.5 declares a peer dependency of eslint >=9.0.0. This can cause
dependency resolution failures or forced peer overrides depending on your npm/CI settings.
Agent Prompt
## Issue description
`eslint-config-next@16.x` requires `eslint >=9`, but the repo is pinned to `eslint ^8.57.1`. This may break installs (ERESOLVE) or require disabling peer checks.

## Issue Context
The lockfile already resolves `eslint-config-next` to 16.2.5 and declares the peer requirement, so the mismatch is present in the committed dependency set.

## Fix Focus Areas
- package.json[112-118]
- package-lock.json[11713-11733]

## Suggested fix
Choose one:
1) Upgrade eslint to a compatible major:
- Set `"eslint": "^9.x"` (and adjust any lint config changes needed).

2) Keep eslint v8:
- Downgrade/pin `eslint-config-next` to a version that supports eslint v8.

Then regenerate `package-lock.json` to reflect the consistent set.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
lib/utils/context-cache.ts (1)

1-115: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Context cache is never imported or used, and doesn't actually reduce Gemini token costs.

Two distinct concerns:

  1. Dead code: No file in this PR (researcher.tsx, resolution-search.tsx, index.ts) imports or calls any export from context-cache.ts. The utility is fully unreachable at runtime.

  2. Incorrect abstraction: Even if integrated, this implementation stores a local in-memory text copy of messages (cachedPrefix) but still passes the full messages array to the model on every call. To actually reduce Gemini API token costs, you must use the Gemini native context caching API: upload a stable context prefix to get a server-side cachedContent ID, then pass that ID in subsequent requests. A local Map cannot intercept or reduce tokens billed by the API.

Consider either integrating the getCachedContext/cacheConversationContext calls into researcher.tsx and resolution-search.tsx, or replacing this with a Gemini-native context cache client that holds a cachedContent name/ID and injects it into the createGoogleGenerativeAI model call.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@lib/utils/context-cache.ts` around lines 1 - 115, The context-cache.ts file
is dead and its in-memory cachedPrefix won't reduce Gemini token billing; either
remove the module or replace/integrate it with Gemini native caching: stop using
contextCache/ cachedPrefix and instead call the Gemini caching API to upload a
stable prefix and store a returned cachedContent ID (use generateCacheKey,
cacheConversationContext, getCachedContext, clearConversationCache as the
integration points), then change callers (e.g., researcher.tsx and
resolution-search.tsx) to pass the cachedContent ID to createGoogleGenerativeAI
instead of the full messages array; if you prefer to keep a local helper,
implement cacheConversationContext to call the Gemini upload endpoint and save
the cachedContent ID in contextCache (with CACHE_TTL/MAX_CACHE_SIZE semantics)
and have getCachedContext return that ID for injection into model requests.
lib/agents/resolution-search.tsx (1)

137-175: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Model-specific branding in the system prompt will mislead non-Gemini providers.

Line 137 hardcodes "(Gemini 3.1 Pro)" as the model's identity, and line 168 embeds "(ARC-AGI-2 Level)" benchmark marketing into an operational instruction. When getModel() returns Grok, GPT-5.1, Bedrock, or OpenAI (Line 192 does not pass usesTools, so the default path is taken for all key configurations), the model receives instructions asserting it is Gemini 3.1 Pro, which is incorrect and could distort its behaviour.

Remove model-specific identity claims from the prompt, or make the prompt text dynamic based on the resolved provider.

🛠️ Proposed fix
-As a geospatial analyst with advanced reasoning capabilities (Gemini 3.1 Pro), your task is to analyze...
+As an expert geospatial analyst, your task is to analyze...
-1. **Abstract Pattern Recognition (ARC-AGI-2 Level):** Look for non-obvious patterns...
+1. **Abstract Pattern Recognition:** Look for non-obvious patterns...
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@lib/agents/resolution-search.tsx` around lines 137 - 175, The prompt in
lib/agents/resolution-search.tsx contains hardcoded model branding strings
("(Gemini 3.1 Pro)" and "(ARC-AGI-2 Level)") which wrongly assert a provider
identity; update the prompt generation to derive model identity and benchmarks
dynamically from getModel() (or remove branding entirely) so non‑Gemini
providers (Grok, GPT-5.1, Bedrock, OpenAI) aren’t mislabelled, and ensure the
callsite that builds the prompt (reference getModel() and the prompt template in
resolution-search.tsx) correctly passes provider/context flags such as usesTools
when constructing model config so the prompt matches the actual runtime model.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@lib/agents/researcher.tsx`:
- Around line 25-31: The prompt heading currently hardcodes a vendor string
"Reasoning Approach (Gemini 3.1 Pro Enhanced):" which couples the prompt to a
specific provider; replace that literal with a model-agnostic heading such as
"Reasoning Approach:" (or "Reasoning Approach (model-agnostic):") in
lib/agents/researcher.tsx wherever that string appears, and mirror the same
change in the other file with the same issue (e.g., resolution-search.tsx);
ensure no provider brand names are embedded in prompt strings passed to the
agent so the prompts remain provider-agnostic.

In `@lib/utils/context-cache.ts`:
- Around line 88-96: clearConversationCache currently deletes keys using
key.startsWith(conversationId), causing collisions when another conversationId
begins with the same prefix; update the check in clearConversationCache to
include the separator used in cache keys (e.g., test
key.startsWith(conversationId + ':') and also allow key === conversationId) so
only the exact conversation namespace (and its children) are removed from
contextCache.

In `@next.config.mjs`:
- Line 19: The config enables reactCompiler: true which can increase build/CI
times; run a CI build with reactCompiler set to true (in next.config.mjs) to
measure impact and confirm it's acceptable, and if the build time regression is
unacceptable, revert/remove the reactCompiler: true flag or gate it behind an
environment variable (e.g., process.env.ENABLE_REACT_COMPILER) so CI can opt-in
only after validation.

In `@NEXTJS_GEMINI_RESEARCH_REPORT.md`:
- Line 75: Reference [2] in the Gemini 3.1 Pro benchmarks table cites a
non-authoritative source (vertu.com); replace that citation with the official
Google/DeepMind publication or the Gemini API/models documentation by updating
the reference entry for [2] and any in-text citations in the "Gemini 3.1 Pro"
table (the rows for ARC-AGI-2, APEX-Agents, GPQA Diamond) to point to the
official DeepMind blog post or the Gemini API docs; ensure the new reference URL
and title clearly identify Google/DeepMind as the source and remove or archive
the vertu.com link.

---

Outside diff comments:
In `@lib/agents/resolution-search.tsx`:
- Around line 137-175: The prompt in lib/agents/resolution-search.tsx contains
hardcoded model branding strings ("(Gemini 3.1 Pro)" and "(ARC-AGI-2 Level)")
which wrongly assert a provider identity; update the prompt generation to derive
model identity and benchmarks dynamically from getModel() (or remove branding
entirely) so non‑Gemini providers (Grok, GPT-5.1, Bedrock, OpenAI) aren’t
mislabelled, and ensure the callsite that builds the prompt (reference
getModel() and the prompt template in resolution-search.tsx) correctly passes
provider/context flags such as usesTools when constructing model config so the
prompt matches the actual runtime model.

In `@lib/utils/context-cache.ts`:
- Around line 1-115: The context-cache.ts file is dead and its in-memory
cachedPrefix won't reduce Gemini token billing; either remove the module or
replace/integrate it with Gemini native caching: stop using contextCache/
cachedPrefix and instead call the Gemini caching API to upload a stable prefix
and store a returned cachedContent ID (use generateCacheKey,
cacheConversationContext, getCachedContext, clearConversationCache as the
integration points), then change callers (e.g., researcher.tsx and
resolution-search.tsx) to pass the cachedContent ID to createGoogleGenerativeAI
instead of the full messages array; if you prefer to keep a local helper,
implement cacheConversationContext to call the Gemini upload endpoint and save
the cachedContent ID in contextCache (with CACHE_TTL/MAX_CACHE_SIZE semantics)
and have getCachedContext return that ID for injection into model requests.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: f174ce4a-9939-44ee-8631-2b90c0ab6361

📥 Commits

Reviewing files that changed from the base of the PR and between d0efce4 and bfb0c2e.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (9)
  • NEXTJS_GEMINI_RESEARCH_REPORT.md
  • lib/agents/researcher.tsx
  • lib/agents/resolution-search.tsx
  • lib/utils/context-cache.ts
  • lib/utils/index.ts
  • next.config.mjs
  • package.json
  • proxy.ts
  • tsconfig.json
📜 Review details
🔇 Additional comments (8)
tsconfig.json (1)

17-17: LGTM.

With "noEmit": true, "jsx": "react-jsx" only affects type-checking; it enables the automatic JSX transform (React 17+), which is appropriate for React 19.2. The .next/dev/types/**/*.ts path on line 36 is the standard Next.js 16 dev-types inclusion.

lib/utils/index.ts (1)

60-61: LGTM — gemini-3.1-pro-preview-customtools is a valid endpoint confirmed by official Google AI docs.

The gemini-3.1-pro-preview-customtools endpoint is documented for use with bash and custom tools, though Google notes "quality fluctuations in some use cases which don't benefit from such tools." Since usesTools: true is only passed by the researcher agent (which always has tool definitions registered), this conditional is well-scoped.

proxy.ts (1)

4-20: LGTM — rename is correct for Next.js 16.

proxy.ts replaces middleware.ts and now runs on the Node.js runtime; the required migration is renaming the file and the exported function to proxy. The logic, config, and matcher are unchanged and remain valid.

next.config.mjs (2)

14-16: LGTM — top-level turbopack is correct for Next.js 16.

experimental.turbopack moved out of experimental in Next.js 16; the configuration now lives at the top level of nextConfig. The root: '.' setting is appropriate given the transpilePackages: ['QCX', 'mapbox_mcp'] entries, which indicate locally-linked packages that need to be resolved from the project root.


19-20: 🏗️ Heavy lift

cacheComponents: true flips the entire caching model — verify all existing routes are migrated before enabling.

cacheComponents is the renamed successor to experimental.dynamicIO, which this project did not previously enable. This is a new opt-in, not a migration of an existing setting.

Route segment config (export const revalidate = ...) still works without cacheComponents; once it is enabled, that API is deprecated and you must migrate to "use cache" with cacheLife(). Additionally, Next.js's implicit fetch() caching is disabled under this model — nothing is cached by default.

If any route in the codebase uses export const revalidate, export const dynamic, or relies on automatic fetch de-duplication, those paths will lose their caching behaviour silently after this change. Audit all server components, route handlers, and data-fetching functions for implicit caching assumptions before merging.
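To make the migration concrete, a hedged sketch of replacing a `revalidate` export with the cache directive (function name and URL are illustrative; the exact `cacheLife` import path may vary by Next.js release):

```typescript
// Before: a segment exporting `export const revalidate = 3600`.
// After: an explicitly cached data function under cacheComponents.
'use cache'

import { cacheLife } from 'next/cache'

export async function getReportData(): Promise<unknown> {
  cacheLife('hours') // roughly replaces `revalidate = 3600`
  const res = await fetch('https://example.com/api/report') // illustrative URL
  return res.json()
}
```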

package.json (3)

76-76: Core framework version alignment looks good.

next, react, and react-dom are upgraded consistently for the stated Next.js 16 / React 19.2 migration.

Also applies to: 82-83


108-109: Type definitions are correctly synced with React runtime.

Good call updating @types/react and @types/react-dom alongside the React 19.2 bump.


112-112: Tooling updates match the migration direction.

React Compiler and Next ESLint config updates are consistent with the framework upgrade.

Also applies to: 115-115

Comment thread lib/agents/researcher.tsx
Comment on lines +25 to +31
**Reasoning Approach (Gemini 3.1 Pro Enhanced):**
Before using any tool, break down complex queries into logical steps:
1. Understand the user's intent and constraints
2. Identify which tool(s) can best address each part
3. Explain your reasoning when selecting between multiple tools
4. Execute tools in the optimal order


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Same model-identity coupling as in resolution-search.tsx — remove the Gemini brand from the prompt heading.

Line 25's "Reasoning Approach (Gemini 3.1 Pro Enhanced):" heading will be received verbatim by Grok, GPT-5.1, or Bedrock Claude whenever those providers are active. The instruction itself is sound; the brand attribution is the problem.

🛠️ Proposed fix
-**Reasoning Approach (Gemini 3.1 Pro Enhanced):**
+**Reasoning Approach:**
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@lib/agents/researcher.tsx` around lines 25 - 31, The prompt heading currently
hardcodes a vendor string "Reasoning Approach (Gemini 3.1 Pro Enhanced):" which
couples the prompt to a specific provider; replace that literal with a
model-agnostic heading such as "Reasoning Approach:" (or "Reasoning Approach
(model-agnostic):") in lib/agents/researcher.tsx wherever that string appears,
and mirror the same change in the other file with the same issue (e.g.,
resolution-search.tsx); ensure no provider brand names are embedded in prompt
strings passed to the agent so the prompts remain provider-agnostic.

Comment on lines +88 to +96
export function clearConversationCache(conversationId: string): void {
const keysToDelete: string[] = [];
for (const [key] of contextCache.entries()) {
if (key.startsWith(conversationId)) {
keysToDelete.push(key);
}
}
keysToDelete.forEach(key => contextCache.delete(key));
}

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

clearConversationCache has a prefix-collision bug.

key.startsWith(conversationId) will incorrectly delete cache entries belonging to a different conversation whose ID begins with the same substring (e.g., clearing "abc" also clears "abcdef:5"). The separator should be included in the prefix test.

🐛 Proposed fix
-    if (key.startsWith(conversationId)) {
+    if (key.startsWith(conversationId + ':')) {
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@lib/utils/context-cache.ts` around lines 88 - 96, clearConversationCache
currently deletes keys using key.startsWith(conversationId), causing collisions
when another conversationId begins with the same prefix; update the check in
clearConversationCache to include the separator used in cache keys (e.g., test
key.startsWith(conversationId + ':') and also allow key === conversationId) so
only the exact conversation namespace (and its children) are removed from
contextCache.
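The collision and its fix can be reproduced in isolation. A standalone sketch, assuming cache keys have the form `${conversationId}:${suffix}`; the `Map` stands in for the project's `contextCache` and the key names are illustrative:

```typescript
// Two conversations whose ids share a prefix.
const contextCache = new Map<string, string>([
  ['abc:1', 'turn one'],
  ['abcdef:5', 'other conversation'],
]);

function clearConversationCache(conversationId: string): void {
  const prefix = conversationId + ':';
  // Snapshot keys so deletion during iteration is safe.
  for (const key of [...contextCache.keys()]) {
    // Match the exact id or its namespaced children, never a longer id
    // that merely shares a prefix.
    if (key === conversationId || key.startsWith(prefix)) {
      contextCache.delete(key);
    }
  }
}

clearConversationCache('abc');
```

With the bare `startsWith(conversationId)` check, `'abcdef:5'` would also have been deleted; with the separator included, only `'abc:1'` is removed.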

Comment thread next.config.mjs
},

// reactCompiler and cacheComponents moved to top level in v16
reactCompiler: true,

🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

reactCompiler: true increases build times — test in CI before shipping.

The reactCompiler option has been promoted from experimental to stable in Next.js 16 and is not enabled by default while the team gathers build performance data; enabling it increases compile times in development and builds because the React Compiler relies on Babel.

This is a non-breaking improvement, but confirm the build time increase is acceptable in your CI pipeline.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@next.config.mjs` at line 19, The config enables reactCompiler: true which can
increase build/CI times; run a CI build with reactCompiler set to true (in
next.config.mjs) to measure impact and confirm it's acceptable, and if the build
time regression is unacceptable, revert/remove the reactCompiler: true flag or
gate it behind an environment variable (e.g., process.env.ENABLE_REACT_COMPILER)
so CI can opt-in only after validation.
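A minimal sketch of the env-var gating the prompt suggests (`ENABLE_REACT_COMPILER` is an assumed variable name, not an existing setting):

```typescript
// next.config.ts — hedged sketch; CI opts in only after measuring
// the build-time impact of the React Compiler.
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  reactCompiler: process.env.ENABLE_REACT_COMPILER === '1',
}

export default nextConfig
```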


## References
[1] [Upgrading: Version 16 | Next.js](https://nextjs.org/docs/app/guides/upgrading/version-16)
[2] [Gemini 3.1 Pro: #1 Reasoning AI Benchmarks & API Guide | Google DeepMind](https://vertu.com/ai-tools/gemini-3-1-pro-benchmarks-api-specs-developer-guide-in-2026/)

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Reference [2] uses a non-authoritative source for benchmark claims.

vertu.com is a luxury smartphone brand, not an official Google or DeepMind publication. All benchmark figures cited in the Gemini 3.1 Pro table (ARC-AGI-2, APEX-Agents, GPQA Diamond) trace back to this single non-authoritative reference. Replace with the official Google DeepMind blog post or the Gemini API models documentation.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@NEXTJS_GEMINI_RESEARCH_REPORT.md` at line 75, Reference [2] in the Gemini 3.1
Pro benchmarks table cites a non-authoritative source (vertu.com); replace that
citation with the official Google/DeepMind publication or the Gemini API/models
documentation by updating the reference entry for [2] and any in-text citations
in the "Gemini 3.1 Pro" table (the rows for ARC-AGI-2, APEX-Agents, GPQA
Diamond) to point to the official DeepMind blog post or the Gemini API docs;
ensure the new reference URL and title clearly identify Google/DeepMind as the
source and remove or archive the vertu.com link.

@ngoiyaeric
Collaborator Author

@coderabbitai I've analyzed the build errors. Here are my questions to ensure I'm aligned with your expectations:

1. Dynamic Rendering: The routes /api/embeddings, /api/chats, and the root / page are failing during prerendering due to dynamic dependencies. I plan to mark them with export const dynamic = 'force-dynamic'. Is it acceptable for these to be dynamic, or do you require static generation for any of them?
2. Embeddings Route Environment: The embeddings route references local paths (/home/ubuntu/..., aef_index.csv). Since these won't be available during a standard Vercel build, is this feature intended for a specific environment? If so, should I add guardrails to prevent build-time failures while maintaining the functionality?
3. UUID Generation: The landing page generates a random UUID for the initial chat. Marking it dynamic is the quickest fix. Would you prefer this, or should I refactor it to generate the ID on the client to keep the page static?
4. Turbopack Warning: I noticed a warning about filesystem operations in app/api/embeddings/route.ts. I can attempt to suppress this or refactor how the index is loaded if you'd like to clean up the build logs.
Looking forward to your guidance!

@coderabbitai
Contributor

coderabbitai Bot commented May 7, 2026


@ngoiyaeric, great analysis! Here's my guidance on each of your questions:


1. Dynamic Rendering (/api/embeddings, /api/chats, /)

export const dynamic = 'force-dynamic' is the correct and standard fix for these routes. The prerendering failure occurs when a route in the app/ directory is initially generated statically at build time, but attempts to use dynamic server values (such as cookies or headers) at runtime.

That said, prefer static over dynamic rendering where possible — static pages are faster, cheaper, and cached by default. Only use dynamic = "force-dynamic" when the page genuinely needs fresh data on every request.

  • /api/embeddings and /api/chats — These are inherently runtime routes (they depend on user input, auth context, etc.), so force-dynamic is fully appropriate here.
  • Root / page — See question 3 below for a better alternative than force-dynamic.

2. Embeddings Route: Environment-Specific Local Paths

The local path references (/home/ubuntu/..., aef_index.csv) will not be available in a Vercel build environment and will cause build-time failures. I recommend adding explicit guardrails:

// app/api/embeddings/route.ts
import { NextResponse } from 'next/server';
import * as fs from 'fs';

export const dynamic = 'force-dynamic';

const INDEX_PATH = process.env.AEF_INDEX_PATH ?? '/home/ubuntu/aef_index.csv';

export async function GET(request: Request) {
  if (!INDEX_PATH || !fs.existsSync(INDEX_PATH)) {
    return NextResponse.json(
      { error: 'Embeddings index not available in this environment.' },
      { status: 503 }
    );
  }
  // ... rest of handler
}

Additionally, expose the path via an environment variable (e.g., AEF_INDEX_PATH) so it can be configured per environment without touching source code. For Vercel deployments where this feature isn't expected to work, simply leave the env var unset and let the route return a graceful 503.


3. UUID Generation on the Landing Page

Rather than marking the root page force-dynamic (which incurs a server round-trip on every visit), I'd recommend generating the UUID client-side to keep the page statically prerendered:

// app/page.tsx
'use client'; // only if needed, or isolate into a child Client Component

import { useState, useEffect } from 'react';

export default function HomePage() {
  const [chatId, setChatId] = useState<string | null>(null);

  useEffect(() => {
    setChatId(crypto.randomUUID());
  }, []);

  if (!chatId) return null; // or a skeleton/loading state

  return <Chat id={chatId} />;
}

Alternatively, you can isolate just the UUID-dependent part into a small "use client" component so the rest of the page remains a Server Component and can still be statically rendered. This is the more idiomatic App Router approach.


4. Turbopack Filesystem Warning in app/api/embeddings/route.ts

Turbopack statically analyzes modules at build time and flags filesystem operations (fs.readFileSync, fs.existsSync, etc.) in route handlers because they can cause issues during bundling. The cleanest fix is to lazy-load the index at request time rather than at module initialization:

// Instead of top-level:
// const index = fs.readFileSync(INDEX_PATH, 'utf-8'); // ❌ runs at module load

// Do this inside the handler:
export async function GET(request: Request) {
  const { default: fs } = await import('fs'); // lazy import
  if (!fs.existsSync(INDEX_PATH)) { ... }
  const index = fs.readFileSync(INDEX_PATH, 'utf-8');
  // ...
}

This defers the filesystem access to request time, which Turbopack handles cleanly, and also naturally avoids the build-time failure when the path doesn't exist.


Summary of recommended actions:

| Issue | Recommended Fix |
| --- | --- |
| `/api/embeddings`, `/api/chats` prerender failure | Add `export const dynamic = 'force-dynamic'` |
| Local path in embeddings route | Guard with `fs.existsSync` + env variable + graceful 503 |
| UUID on root `/` page | Generate client-side via `useEffect` + `crypto.randomUUID()` in a Client Component |
| Turbopack filesystem warning | Lazy-import `fs` inside the handler body |
