
Conversation

@JunyiXu-nv (Collaborator) commented Dec 12, 2025

Summary by CodeRabbit

  • New Features

    • Added GET endpoint to retrieve individual stored responses with validation
    • Added DELETE endpoint to remove stored responses with validation
    • Enhanced response management with targeted removal capabilities
  • Tests

    • Added comprehensive tests for response retrieval and deletion operations
    • Added error case tests for invalid and non-existent response IDs
    • Extended test coverage for robust response handling


Description

Support the endpoints described in the OpenAI API reference. Ref: https://platform.openai.com/docs/api-reference/responses
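
As a quick illustration (not part of the PR), the new endpoints can be exercised with the OpenAI Python SDK pointed at a local server; the base_url, api_key, and model name below are placeholder assumptions:

# Hypothetical client-side usage; requires a recent openai SDK with Responses support.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

# Create a stored response, then fetch and delete it by id.
resp = client.responses.create(model="my-model", input="Hello", store=True)
fetched = client.responses.retrieve(resp.id)  # GET /v1/responses/{response_id}
assert fetched.id == resp.id
client.responses.delete(resp.id)              # DELETE /v1/responses/{response_id}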

Test Coverage

  • test_e2e.py::test_openai_responses_entrypoint

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-reuse-test --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
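
For example, a hypothetical invocation that runs only a single test stage with fail-fast disabled (stage name taken from the examples above):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast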

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@JunyiXu-nv JunyiXu-nv requested a review from LinPoly December 12, 2025 03:27
@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #27968 [ run ] triggered by Bot. Commit: e529ee8

@coderabbitai (Contributor) bot commented Dec 12, 2025

📝 Walkthrough

The pull request introduces two new API endpoints (GET and DELETE) for retrieving and managing stored OpenAI responses. The OpenAI server now exposes these endpoints with corresponding handlers, while the response storage layer is upgraded from synchronous to asynchronous operations with support for targeted response deletion. Comprehensive integration and unit tests are added to validate the new functionality.

Changes

Cohort / File(s): Summary

  • OpenAI Server API Endpoints (tensorrt_llm/serve/openai_server.py): Added GET /v1/responses/{response_id} and DELETE /v1/responses/{response_id} routes with handlers openai_responses_get_response and openai_responses_delete_response; includes validation and error handling for invalid IDs and missing responses. A minimal handler sketch follows this list.
  • Response Storage Layer (tensorrt_llm/serve/responses_utils.py): Converted _pop_response from a sync to an async method with an optional resp_id parameter for targeted deletion and a boolean return value; updated store_response to await the async pop; modified load_response to return early on missing response IDs.
  • Integration Test Infrastructure (tests/integration/defs/test_e2e.py, tests/integration/test_lists/test-db/l0_a10.yml): Added the test_openai_responses_entrypoint integration test that exercises the responses API test suite; registered the test in the l0_a10 execution list.
  • Unit Tests (tests/unittest/llmapi/apps/_test_openai_responses_entrypoint.py): New test module with module-scoped fixtures (model, server, client); async test cases covering response retrieval, deletion, and error scenarios (invalid ID, non-existent response).
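
To make the route wiring concrete, here is a minimal, self-contained sketch of what such GET/DELETE handlers could look like. This is an illustration under assumptions, not the PR's actual code: a dict-backed stub stands in for the real ConversationHistoryStore, and the error payloads are simplified.

# Hedged sketch: FastAPI handlers mirroring the walkthrough's route names,
# backed by a stub store instead of the real ConversationHistoryStore.
import asyncio
from typing import Dict, Optional

from fastapi import FastAPI
from fastapi.responses import JSONResponse

class _StubStore:
    """Dict-backed stand-in for the real response store."""

    def __init__(self) -> None:
        self._responses: Dict[str, dict] = {}
        self._lock = asyncio.Lock()

    async def load_response(self, resp_id: str) -> Optional[dict]:
        # Returns None when the id is unknown, matching the PR's early return.
        async with self._lock:
            return self._responses.get(resp_id)

    async def pop_response(self, resp_id: str) -> bool:
        # Boolean result signals whether anything was actually deleted.
        async with self._lock:
            return self._responses.pop(resp_id, None) is not None

store = _StubStore()
app = FastAPI()

@app.get("/v1/responses/{response_id}")
async def openai_responses_get_response(response_id: str):
    if not response_id.startswith("resp_"):
        return JSONResponse(status_code=400,
                            content={"error": "Invalid 'response_id'."})
    resp = await store.load_response(response_id)
    if resp is None:
        return JSONResponse(status_code=404,
                            content={"error": "Response not found."})
    return resp

@app.delete("/v1/responses/{response_id}")
async def openai_responses_delete_response(response_id: str):
    if not response_id.startswith("resp_"):
        return JSONResponse(status_code=400,
                            content={"error": "Invalid 'response_id'."})
    if not await store.pop_response(response_id):
        return JSONResponse(status_code=404,
                            content={"error": "Response not found."})
    return {"id": response_id, "deleted": True, "object": "response"}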

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • responses_utils.py — Async conversion of _pop_response with new optional parameter and return value semantics; verify locking behavior and control flow changes in store_response integration
  • openai_server.py — Two new handlers with validation logic; confirm error helper usage and storage flag checks
  • Test module (_test_openai_responses_entrypoint.py) — Comprehensive fixture setup and parametrization; validate async test patterns and error expectation assertions

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 8.70%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: The title '[TRTLLM-8462][feat] Support GET/DELETE v1/responses/{response_id}' directly reflects the main change in the PR: adding two new API endpoints for fetching and deleting responses.
  • Description check ✅ Passed: The description includes a clear explanation of the feature (supporting OpenAI-style endpoints with a reference link), lists the relevant test coverage (test_openai_responses_entrypoint), and confirms adherence to contribution guidelines.

@coderabbitai (Contributor) bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
tests/integration/defs/test_e2e.py (1)

1-2: Update NVIDIA copyright year to include 2025 (file modified in this PR).
As per coding guidelines.

tensorrt_llm/serve/responses_utils.py (2)

200-209: Type contract mismatch: load_response() can return None but return type is non-optional.
Changing the return annotation to -> Optional[ResponsesResponse] gives the correct signature.

-    async def load_response(self, resp_id: str) -> ResponsesResponse:
+    async def load_response(self, resp_id: str) -> Optional[ResponsesResponse]:
         _responses_debug_log(
             f"ConversationHistoryStore loading resp: {resp_id}")
         async with self.responses_lock:
             if resp_id not in self.responses:
                 return None

210-267: Avoid mutable default arg and guard against missing prev_resp_id under concurrent DELETE.
With DELETE added, prev_resp_id can disappear between request validation and store_response(), causing a KeyError at Line 252. Also resp_msgs=[] is a shared default.

 async def store_response(self,
                          resp: ResponsesResponse,
-                         resp_msgs: Optional[
-                             Union[list[Message],
-                                   list[ChatCompletionMessageParam]]] = [],
+                         resp_msgs: Optional[
+                             Union[list[Message],
+                                   list[ChatCompletionMessageParam]]] = None,
                          prev_resp_id: Optional[str] = None) -> None:
@@
-        async with self.responses_lock:
+        resp_msgs = [] if resp_msgs is None else resp_msgs
+
+        async with self.responses_lock:
             self.responses[resp_id] = resp
             if len(self.responses) > self.response_capacity:
                 await self._pop_response()
@@
             elif prev_resp_id is not None:
-                if prev_resp_id not in self.response_to_conversation:
-                    logger.warning(
-                        f"Previous response id {prev_resp_id} not found in conversation store"
-                    )
-
-                conversation_id = self.response_to_conversation[prev_resp_id]
+                conversation_id = self.response_to_conversation.get(prev_resp_id)
+                if conversation_id is None:
+                    logger.warning(
+                        f"Previous response id {prev_resp_id} not found in conversation store; starting a new conversation"
+                    )
+                    conversation_id = _random_uuid()
+                    self.conversations[conversation_id] = []
                 self.conversations[conversation_id].extend(resp_msgs)
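
As a side note on the shared-default pitfall this comment refers to, here is a minimal self-contained illustration (generic Python, not code from this repo):

# Demonstrates why a mutable default argument is shared across calls.
def append_bad(item, bucket=[]):      # one list object, created at def time
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):   # fresh list per call unless provided
    bucket = [] if bucket is None else bucket
    bucket.append(item)
    return bucket

print(append_bad("a"))   # ['a']
print(append_bad("b"))   # ['a', 'b']  <- leaked state from the first call
print(append_good("a"))  # ['a']
print(append_good("b"))  # ['b']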
🧹 Nitpick comments (2)
tensorrt_llm/serve/openai_server.py (1)

1023-1054: Prefer a public store API for deletes + align invalid-id message with resp_ check.
Calling ConversationHistoryStore._pop_response() from OpenAIServer couples to internals; consider conversation_store.delete_response(resp_id) wrapper. Also _create_invalid_response_id_error() says “begins with 'resp'” but you enforce resp_.

 def _create_invalid_response_id_error(self, response_id: str) -> Response:
     return self.create_error_response(
         err_type="InvalidRequestError",
         message=(f"Invalid 'response_id': '{response_id}'. "
-                 "Expected an ID that begins with 'resp'."),
+                 "Expected an ID that begins with 'resp_'."),
     )
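A sketch of the suggested public wrapper, assuming _pop_response's post-PR signature (optional resp_id, boolean return); placement inside ConversationHistoryStore is implied and the method name is the reviewer's suggestion, not existing API:

# Hypothetical public wrapper so OpenAIServer need not touch internals.
async def delete_response(self, resp_id: str) -> bool:
    """Remove a stored response by id; returns True if it existed."""
    return await self._pop_response(resp_id=resp_id)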
tests/unittest/llmapi/apps/_test_openai_responses_entrypoint.py (1)

37-46: Nice: validates retrieve returns the stored response (id + full payload).
Optional: consider a less strict comparison than full model_dump() if this becomes flaky across SDK/server versions (e.g., compare stable fields only).
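For instance, a looser check along these lines (attribute names assumed from the OpenAI SDK's Response model) would tolerate additive payload changes:

# Hypothetical stable-field comparison instead of a full model_dump() match.
retrieved = await client.responses.retrieve(response.id)
for field in ("id", "model", "status"):
    assert getattr(retrieved, field) == getattr(response, field)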

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 093465e and e529ee8.

📒 Files selected for processing (5)
  • tensorrt_llm/serve/openai_server.py (2 hunks)
  • tensorrt_llm/serve/responses_utils.py (3 hunks)
  • tests/integration/defs/test_e2e.py (1 hunks)
  • tests/integration/test_lists/test-db/l0_a10.yml (1 hunks)
  • tests/unittest/llmapi/apps/_test_openai_responses_entrypoint.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tests/integration/defs/test_e2e.py
  • tensorrt_llm/serve/openai_server.py
  • tensorrt_llm/serve/responses_utils.py
  • tests/unittest/llmapi/apps/_test_openai_responses_entrypoint.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • tests/integration/defs/test_e2e.py
  • tensorrt_llm/serve/openai_server.py
  • tensorrt_llm/serve/responses_utils.py
  • tests/unittest/llmapi/apps/_test_openai_responses_entrypoint.py
🧠 Learnings (1)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/test_lists/test-db/l0_a10.yml
  • tests/integration/defs/test_e2e.py
  • tests/unittest/llmapi/apps/_test_openai_responses_entrypoint.py
🧬 Code graph analysis (1)
tensorrt_llm/serve/openai_server.py (1)
tensorrt_llm/serve/responses_utils.py (2)
  • load_response (200-208)
  • _pop_response (403-428)
🪛 Ruff (0.14.8)
tests/integration/defs/test_e2e.py

1656-1656: Unused function argument: llm_root

(ARG001)

tests/unittest/llmapi/apps/_test_openai_responses_entrypoint.py

72-72: Unused function argument: model

(ARG001)


78-78: Unused function argument: model

(ARG001)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tests/integration/test_lists/test-db/l0_a10.yml (1)

55-69: Good coverage wiring: new Responses API e2e is now exercised in pre-merge.

tensorrt_llm/serve/openai_server.py (1)

246-274: Route wiring for GET/DELETE on /v1/responses/{response_id} looks correct.

tests/unittest/llmapi/apps/_test_openai_responses_entrypoint.py (1)

1-8: Add NVIDIA copyright header (include 2025) to new Python file.
As per coding guidelines for **/*.py.

⛔ Skipped due to learnings
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Learnt from: xinhe-nv
Repo: NVIDIA/TensorRT-LLM PR: 8534
File: scripts/format_test_list.py:1-6
Timestamp: 2025-10-22T06:53:47.017Z
Learning: The file `scripts/format_test_list.py` in the TensorRT-LLM repository does not require the NVIDIA Apache-2.0 copyright header.
Learnt from: CR
Repo: NVIDIA/TensorRT-LLM PR: 0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-11-24T17:09:17.870Z
Learning: Applies to **/*.{cpp,h,cu,py} : All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.

Signed-off-by: Junyi Xu <[email protected]>
@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #27973 [ run ] triggered by Bot. Commit: 14aa7b5

@tensorrt-cicd (Collaborator)

PR_Github #27968 [ run ] completed with state ABORTED. Commit: e529ee8

@tensorrt-cicd (Collaborator)

PR_Github #27973 [ run ] completed with state SUCCESS. Commit: 14aa7b5
/LLM/main/L0_MergeRequest_PR pipeline #21360 completed with status: 'FAILURE'

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #28015 [ run ] triggered by Bot. Commit: 14aa7b5

@tensorrt-cicd (Collaborator)

PR_Github #28015 [ run ] completed with state SUCCESS. Commit: 14aa7b5
/LLM/main/L0_MergeRequest_PR pipeline #21396 completed with status: 'FAILURE'

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #28091 [ run ] triggered by Bot. Commit: 14aa7b5

@tensorrt-cicd (Collaborator)

PR_Github #28091 [ run ] completed with state SUCCESS. Commit: 14aa7b5
/LLM/main/L0_MergeRequest_PR pipeline #21462 completed with status: 'FAILURE'

