
Conversation

@burnettk (Contributor) commented Dec 3, 2025

Summary by CodeRabbit

  • New Features

    • API logging: deferred, batched capture of API interactions (endpoint, method, payloads, status, duration) with optional process-instance tracking and a configurable enable/disable setting.
  • Chores

    • CI workflow push triggers updated and minor workflow formatting tweak.
  • Tests

    • Comprehensive unit tests for deferred API logging added; small test signature adjustment in an existing integration test.


@coderabbitai coderabbitai bot (Contributor) commented Dec 3, 2025

📝 Walkthrough

Walkthrough

Adds deferred, batched API request/response logging: a new APILogModel, an api_logging utility with a decorator and teardown-based flush, config flag SPIFFWORKFLOW_BACKEND_API_LOGGING_ENABLED, decorators applied to selected routes, initialization wiring, and unit tests plus minor CI workflow branch updates.

Changes

  • Configuration & Init — spiffworkflow-backend/src/spiffworkflow_backend/config/default.py, spiffworkflow-backend/src/spiffworkflow_backend/__init__.py: Added SPIFFWORKFLOW_BACKEND_API_LOGGING_ENABLED (default False) and a call to setup_deferred_logging(app) during app initialization.
  • Database Model — spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py, spiffworkflow-backend/src/spiffworkflow_backend/load_database_models.py: New ORM model APILogModel mapped to api_log with fields (id, created_at, endpoint, method, request_body, response_body, status_code, duration_ms, process_instance_id); the model is imported into the loader.
  • API Logging Utility — spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py: New module implementing deferred, thread-safe logging: _log_queue, _queue_api_log_entry, _process_log_queue, setup_deferred_logging(app), and a log_api_interaction decorator that captures request/response data, extracts process_instance_id, and queues entries for batched DB writes.
  • Route Instrumentation — spiffworkflow-backend/src/spiffworkflow_backend/routes/messages_controller.py, spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py: Imported and applied @log_api_interaction to message_send and process_instance_run; minor local refactor in process_instance_run_deprecated.
  • API Error — spiffworkflow-backend/src/spiffworkflow_backend/exceptions/api_error.py: Added a to_dict(self) -> dict method delegating to serialized().
  • Tests — spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py, spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_authentication.py: New comprehensive unit tests for deferred logging behavior and queue processing; one integration test signature updated to include an app: Flask fixture.
  • CI Workflow — .github/workflows/build_docker_images.yml: Adjusted push triggers (removed some branches, added api-logs) and a minor formatting newline change in the final step.

Sequence Diagram(s)

sequenceDiagram
    participant Client as HTTP Client
    participant Flask as Flask Request Handler
    participant Decorator as log_api_interaction
    participant Queue as In-Memory Queue
    participant DB as Database (APILogModel)
    participant Logger as App Logger

    Client->>Flask: HTTP request -> decorated endpoint
    Flask->>Decorator: Invoke wrapper (start timer, capture request)
    Decorator->>Flask: Call original view
    alt view returns
        Flask->>Decorator: Return response (various types)
        Decorator->>Decorator: Extract status, body, process_instance_id, duration
    else view raises
        Flask->>Decorator: Exception
        Decorator->>Decorator: Capture error details, set status=500
    end
    Decorator->>Queue: _queue_api_log_entry(log payload)
    Note over Flask,Queue: Request ends -> teardown
    Flask->>Queue: teardown triggers _process_log_queue()
    alt within Flask context
        Queue->>DB: Bulk insert queued APILogModel entries
        DB->>DB: Commit
    else outside context / immediate
        Queue->>DB: Immediate bulk insert
    end
    alt DB error
        DB->>Logger: Log exception while processing
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Areas to pay attention to:
    • Thread-safety and lifecycle of the in-memory _log_queue and teardown integration.
    • Correct handling of Flask response variants (Response objects, tuples, non-JSON bodies).
    • Extraction logic and precedence for process_instance_id.
    • Tests that mock DB failures and verify queue flush behavior.
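On the first bullet above: Python's stdlib `queue.Queue` is itself thread-safe for puts and gets, which a quick generic check (not code from this PR) illustrates — the remaining lifecycle question is only when the teardown handler drains it:

```python
import queue
import threading

q: "queue.Queue[tuple]" = queue.Queue()


def producer(worker_id: int) -> None:
    # Each thread enqueues 100 entries; Queue's internal lock makes this safe.
    for i in range(100):
        q.put((worker_id, i))


threads = [threading.Thread(target=producer, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert q.qsize() == 400  # no entries lost under concurrent puts
```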


Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 51.43%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title 'Api logs to db' accurately describes the main change: introducing API logging functionality that persists logs to the database.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py (1)

78-88: Consider moving the cast import to the top of the file.

The local import of cast on Line 78 works but is unconventional. Moving it to the top-level imports would improve consistency with the rest of the codebase.

Apply this diff:

 from typing import Any
+from typing import cast

And remove the local import:

 def process_instance_run_deprecated(
     modified_process_model_identifier: str,
     process_instance_id: int,
     force_run: bool = False,
     execution_mode: str | None = None,
 ) -> flask.wrappers.Response:
-    from typing import cast
-
     return cast(
         flask.wrappers.Response,
         process_instance_run(
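For context on why this move is safe: `typing.cast` is purely a static-typing hint with no runtime check or conversion, so relocating the import changes nothing behaviorally (a generic illustration, not code from this PR):

```python
from typing import cast


def get_value() -> object:
    return "hello"


# cast performs no runtime conversion; it only tells the type checker
# to treat the value as the given type.
s = cast(str, get_value())
assert s.upper() == "HELLO"
```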
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)

65-69: Remove redundant pass statement.

Line 69 contains a pass statement that is unnecessary after the logger.warning call. The exception handler already has a statement, so the pass is redundant.

Apply this diff:

                     except Exception:
                         # We might fail to parse JSON if the response is not actually JSON
                         # despite the header, or some other issue. We'll just skip the body.
                         logger.warning("Failed to parse response body as JSON", exc_info=True)
-                        pass
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1c83184 and e039ca4.

⛔ Files ignored due to path filters (1)
  • spiffworkflow-backend/migrations/versions/cae1a9624131_add_api_log_table.py is excluded by !spiffworkflow-backend/migrations/**
📒 Files selected for processing (9)
  • .github/workflows/build_docker_images.yml (2 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/config/default.py (1 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/exceptions/api_error.py (1 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/load_database_models.py (1 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/routes/messages_controller.py (2 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py (2 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1 hunks)
  • spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_api_logging.py (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2024-10-09T01:31:36.477Z
Learnt from: jasquat
Repo: sartography/spiff-arena PR: 1902
File: spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py:584-586
Timestamp: 2024-10-09T01:31:36.477Z
Learning: Migration attempts for process instances are logged in a specific table, and the flask HTTP log suffices for logging purposes.

Applied to files:

  • spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py
📚 Learning: 2025-11-21T16:41:27.750Z
Learnt from: Mwindo
Repo: sartography/spiff-arena PR: 2619
File: spiffworkflow-backend/src/spiffworkflow_backend/services/authentication_service.py:121-134
Timestamp: 2025-11-21T16:41:27.750Z
Learning: In the spiff-arena codebase, models that extend SpiffworkflowBaseDBModel automatically have their created_at_in_seconds and updated_at_in_seconds fields populated via SQLAlchemy event listeners (before_insert and before_update) defined in spiffworkflow-backend/src/spiffworkflow_backend/models/db.py. Manual timestamp setting is not required for these fields.

Applied to files:

  • spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py
🧬 Code graph analysis (7)
spiffworkflow-backend/src/spiffworkflow_backend/load_database_models.py (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
  • APILogModel (9-20)
spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py (2)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
  • log_api_interaction (17-102)
spiffworkflow-backend/src/spiffworkflow_backend/models/process_model.py (1)
  • modified_process_model_identifier (76-77)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
  • APILogModel (9-20)
spiffworkflow-backend/src/spiffworkflow_backend/exceptions/api_error.py (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/process_model.py (2)
  • to_dict (86-91)
  • serialized (106-114)
spiffworkflow-backend/src/spiffworkflow_backend/routes/messages_controller.py (1)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
  • log_api_interaction (17-102)
spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_api_logging.py (2)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
  • APILogModel (9-20)
spiffworkflow-backend/tests/spiffworkflow_backend/helpers/base_test.py (2)
  • app_config_mock (617-623)
  • logged_in_headers (62-68)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (2)
spiffworkflow-backend/src/spiffworkflow_backend/models/db.py (1)
  • SpiffworkflowBaseDBModel (27-75)
spiffworkflow-backend/src/spiffworkflow_backend/__init__.py (1)
  • default (34-51)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (15)
  • GitHub Check: typeguard 3.12 / ubuntu-latest sqlite
  • GitHub Check: typeguard 3.11 / ubuntu-latest sqlite
  • GitHub Check: tests 3.11 / macos-latest sqlite
  • GitHub Check: tests 3.10 / ubuntu-latest sqlite
  • GitHub Check: tests 3.12 / ubuntu-latest postgres
  • GitHub Check: tests 3.11 / ubuntu-latest postgres
  • GitHub Check: tests 3.12 / ubuntu-latest mysql
  • GitHub Check: tests 3.12 / ubuntu-latest sqlite
  • GitHub Check: tests 3.11 / ubuntu-latest sqlite
  • GitHub Check: tests 3.11 / ubuntu-latest mysql
  • GitHub Check: tests 3.13.7 / ubuntu-latest mysql
  • GitHub Check: build
  • GitHub Check: create_docker_images (sartography/spiffworkflow-backend, spiffworkflow-backend, Backend component...
  • GitHub Check: create_docker_images (sartography/spiffworkflow-frontend, spiffworkflow-frontend, Frontend compon...
  • GitHub Check: create_docker_images (sartography/connector-proxy-demo, connector-proxy-demo, Connector proxy com...
🔇 Additional comments (2)
.github/workflows/build_docker_images.yml (2)

32-35: Verify whether the api-logs branch trigger should be permanent.

The addition of - api-logs to the push branches will trigger Docker image builds on every commit to this feature branch. Confirm whether this is intended to remain after the PR is merged to main, or if it should be removed before completion.


157-159: Trailing newline is a benign formatting adjustment.

The addition of the trailing newline is a minor stylistic change with no functional impact.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)

13-13: Use timezone-aware datetime instead of deprecated utcnow().

The use of datetime.utcnow() is deprecated in Python 3.12+. Consider using datetime.now(timezone.utc) for timezone-aware timestamps, as previously suggested.
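For reference, the deprecated and preferred forms differ only in timezone awareness (a generic illustration):

```python
from datetime import datetime, timezone

# Deprecated since Python 3.12: returns a naive datetime with no tzinfo
naive = datetime.utcnow()

# Preferred: an aware datetime pinned to UTC
aware = datetime.now(timezone.utc)

assert naive.tzinfo is None
assert aware.tzinfo is not None
```

In a SQLAlchemy column this would typically be supplied as a callable default, e.g. `default=lambda: datetime.now(timezone.utc)`, so it is evaluated per insert rather than once at import time.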

🧹 Nitpick comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)

14-20: Consider reducing the number of indexes for better write performance.

The model currently has 6 indexes, which may impact INSERT performance on a high-volume logging table. Indexes on method, status_code, and duration_ms provide limited standalone value since these fields are typically filtered alongside other criteria.

Consider keeping only the most critical indexes:

-    endpoint: str = db.Column(db.String(255), index=True)
-    method: str = db.Column(db.String(10), index=True)
+    endpoint: str = db.Column(db.String(255), index=True)
+    method: str = db.Column(db.String(10))
     request_body: dict | None = db.Column(db.JSON)
     response_body: dict | None = db.Column(db.JSON)
-    status_code: int = db.Column(db.Integer, index=True)
+    status_code: int = db.Column(db.Integer)
     process_instance_id: int | None = db.Column(db.Integer, nullable=True, index=True)
-    duration_ms: int = db.Column(db.Integer, index=True)
+    duration_ms: int = db.Column(db.Integer)

Alternatively, consider composite indexes if common query patterns filter on multiple fields together (e.g., endpoint + status_code).
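A composite index along those lines would look like this in plain SQL — a hypothetical DDL sketch; the table name api_log comes from the walkthrough, and the actual index would be added via an Alembic migration in this codebase:

```sql
-- Hypothetical composite index for queries that filter on endpoint and status_code together
CREATE INDEX ix_api_log_endpoint_status_code ON api_log (endpoint, status_code);
```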

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e039ca4 and 4b34ca4.

⛔ Files ignored due to path filters (2)
  • spiffworkflow-backend/migrations/versions/4efc3d8655be_add_api_log_table.py is excluded by !spiffworkflow-backend/migrations/**
  • spiffworkflow-backend/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (1)
  • spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-21T16:41:27.750Z
Learnt from: Mwindo
Repo: sartography/spiff-arena PR: 2619
File: spiffworkflow-backend/src/spiffworkflow_backend/services/authentication_service.py:121-134
Timestamp: 2025-11-21T16:41:27.750Z
Learning: In the spiff-arena codebase, models that extend SpiffworkflowBaseDBModel automatically have their created_at_in_seconds and updated_at_in_seconds fields populated via SQLAlchemy event listeners (before_insert and before_update) defined in spiffworkflow-backend/src/spiffworkflow_backend/models/db.py. Manual timestamp setting is not required for these fields.

Applied to files:

  • spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py
🧬 Code graph analysis (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (2)
spiffworkflow-backend/src/spiffworkflow_backend/models/db.py (1)
  • SpiffworkflowBaseDBModel (27-75)
spiffworkflow-backend/src/spiffworkflow_backend/__init__.py (1)
  • default (34-51)
🔇 Additional comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)

1-10: LGTM! Clean model structure.

The imports and class declaration follow the established patterns in the codebase, properly extending SpiffworkflowBaseDBModel.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (5)
spiffworkflow-backend/test_deferred_logging.py (1)

25-55: Ruff T201/W292: prints and missing trailing newline in a script-like helper.

Ruff flags the print calls and lack of trailing newline. Since this looks like a manual verification script, you have options:

  • Convert prints to logging (or at least gate them) if you want to keep a clean lint run, or
  • Suppress them with a file-level # ruff: noqa: T201 comment or per-line # noqa: T201 markers (a bare # noqa: T201 on its own line does not suppress the rest of the file), and ensure there is a trailing newline at EOF.

Example minimal change at the top of the file:

-#!/usr/bin/env python3
+#!/usr/bin/env python3
+# ruff: noqa: T201

And ensure a newline after main() at the end of the file.

spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (4)

21-25: Remove or use _log_worker_started / _log_worker_lock to avoid dead-code confusion.

The _log_worker_started flag and _log_worker_lock are defined but never used. They look like remnants of an earlier design (e.g., a background worker thread) and can be misleading.

Either wire them into an actual worker implementation or drop them for now to keep the module focused on the current teardown-driven batching approach.


39-49: Clarify or fix the “new session” comment in _process_log_queue.

The comment says “Use a new session to avoid transaction conflicts”, but the code uses the global db.session instead of creating a separate SQLAlchemy session. That mismatch is confusing for future readers and reviewers.

If you don’t intend to manage a truly separate session here, I’d update or remove the comment, e.g.:

-            # Use a new session to avoid transaction conflicts
             for entry in entries_to_commit:
                 db.session.add(entry)

Or rephrase the comment to describe that this is a separate commit performed during app teardown, not inside the main request transaction.


90-99: Use has_request_context() for clearer request context detection.

The current approach with if request: inside a try/except RuntimeError works but mixes control flow with exception handling. Flask recommends explicitly checking the request context:

-from flask import request
+from flask import has_request_context, request
...
-        if request:
-            try:
-                endpoint = request.path
-                method = request.method
-                request_body = request.get_json(silent=True)
-            except RuntimeError:
-                # Working outside of request context
-                pass
+        if has_request_context():
+            try:
+                endpoint = request.path
+                method = request.method
+                request_body = request.get_json(silent=True)
             except Exception:
                 request_body = None

This keeps behavior identical but makes the intent explicit and is the recommended Flask pattern.


51-61: Current implementation is safe—but consider adding explicit app context requirements for clarity.

The code already guards against missing app context. At line 78, log_api_interaction accesses current_app.config, which requires an active app context and will fail immediately if one doesn't exist. Therefore, _queue_api_log_entry cannot be called without an app context, and the fallback path (lines 58–61) only executes outside request context while app context still exists. In that scenario, db.session access in _process_log_queue is safe.

That said, the exception handling comment at line 59 is slightly misleading ("like in tests"), and explicit guards would improve maintainability if _queue_api_log_entry is reused elsewhere in the future. Consider either:

  • Adding a guard like current_app.app_context().push() in _process_log_queue for defensive safety, or
  • Adding a docstring to _queue_api_log_entry stating it requires an active app context.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b4e2b4b and 09e778c.

📒 Files selected for processing (4)
  • spiffworkflow-backend/src/spiffworkflow_backend/__init__.py (2 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1 hunks)
  • spiffworkflow-backend/test_deferred_logging.py (1 hunks)
  • spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_api_logging.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_api_logging.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-25T14:52:21.046Z
Learnt from: CR
Repo: sartography/spiff-arena PR: 0
File: spiffworkflow-backend/AGENTS.md:0-0
Timestamp: 2025-11-25T14:52:21.046Z
Learning: Run specific backend test functions using `uv run pytest -k <test_function_name>` from the spiffworkflow-backend directory

Applied to files:

  • spiffworkflow-backend/test_deferred_logging.py
🧬 Code graph analysis (3)
spiffworkflow-backend/test_deferred_logging.py (2)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
  • APILogModel (10-23)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (2)
  • log_api_interaction (74-162)
  • setup_deferred_logging (64-71)
spiffworkflow-backend/src/spiffworkflow_backend/__init__.py (2)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
  • setup_deferred_logging (64-71)
spiffworkflow-backend/conftest.py (1)
  • app (50-51)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
  • APILogModel (10-23)
🪛 Ruff (0.14.7)

spiffworkflow-backend/test_deferred_logging.py

  • T201 (print found; remove print): lines 25, 37, 38, 43, 45, 46, 52, 53, 55
  • W292 (no newline at end of file; add trailing newline): line 58

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build
  • GitHub Check: create_docker_images (sartography/connector-proxy-demo, connector-proxy-demo, Connector proxy com...
  • GitHub Check: create_docker_images (sartography/spiffworkflow-backend, spiffworkflow-backend, Backend component...
🔇 Additional comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/__init__.py (1)

25-25: New deferred API logging wiring and placement look correct

Importing setup_deferred_logging and invoking it right after migrate.init_app(app, db) cleanly hooks the teardown-based logging into the app lifecycle, with DB and config already initialized and before routes/middleware are added. No functional or lifecycle issues stand out here.

Also applies to: 82-82

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)

105-140: Plain dict/list responses are still logged as status 500.

The previous review flagged this issue. While tuple responses are now handled (lines 133-137), plain return values like return {"data": "ok"} still result in status_code remaining at 500 since lines 138-140 only set response_body without updating the status.

Apply this diff to default to 200 for successful non-Response returns:

-        status_code = 500
+        status_code = 200
         response_body = None
         process_instance_id = None
         response = None

         try:
             response = func(*args, **kwargs)
         except Exception as e:
             # If an exception occurs, we try to get the status code from it if it's an API error
+            status_code = 500
             if hasattr(e, "status_code"):
                 status_code = e.status_code
🧹 Nitpick comments (4)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)

22-26: Remove unused module-level variables.

_log_worker_started and _log_worker_lock are declared but never used in this module. These appear to be leftovers from a different implementation approach.

 # Thread-safe queue for deferred API log entries
 _log_queue: Queue[APILogModel] = Queue()
-_log_worker_started = False
-_log_worker_lock = threading.Lock()
spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (3)

1-18: Clean up unused imports.

Static analysis identifies unused imports that should be removed:

  • json (line 3)
  • MagicMock (line 5)
  • setup_deferred_logging (line 11)
 """Unit tests for deferred API logging functionality."""

-import json
 from typing import Any
-from unittest.mock import patch, MagicMock
+from unittest.mock import patch
 from flask import Flask, g
 import pytest

 from spiffworkflow_backend.models.api_log_model import APILogModel
 from spiffworkflow_backend.models.db import db
 from spiffworkflow_backend.utils.api_logging import (
     log_api_interaction,
-    setup_deferred_logging,
     _queue_api_log_entry,
     _process_log_queue,
     _log_queue
 )

170-186: Address unused variable warning.

The result variable on line 173 is assigned but never used. Prefix with underscore to indicate it's intentionally unused.

             try:
                 with app.test_request_context("/test"):
                     # Test extraction from kwargs
-                    result = test_function_with_kwargs(process_instance_id=123)
+                    _ = test_function_with_kwargs(process_instance_id=123)

                     # Process any queued logs

296-299: Add trailing newline at end of file.

Static analysis flags missing newline at EOF (W292).

                 call_args = mock_logger.error.call_args[0][0]
                 assert "Failed to commit API log entries" in call_args
+
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 09e778c and 41ca3e7.

📒 Files selected for processing (2)
  • spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1 hunks)
  • spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-25T14:52:21.046Z
Learnt from: CR
Repo: sartography/spiff-arena PR: 0
File: spiffworkflow-backend/AGENTS.md:0-0
Timestamp: 2025-11-25T14:52:21.046Z
Learning: Run specific backend test functions using `uv run pytest -k <test_function_name>` from the spiffworkflow-backend directory

Applied to files:

  • spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py
🪛 Flake8 (7.3.0)

spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py

  • F401: line 3 'json' imported but unused; line 5 'unittest.mock.MagicMock' imported but unused; line 11 'spiffworkflow_backend.utils.api_logging.setup_deferred_logging' imported but unused
  • F841: line 173 local variable 'result' is assigned to but never used

🪛 Ruff (0.14.7)

spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py

  • F841 (line 173): local variable result is assigned to but never used; remove the assignment
  • W292 (line 299): no newline at end of file; add trailing newline

🔇 Additional comments (4)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (3)

40-49: Log entries are lost on commit failure.

When db.session.commit() fails, the entries are rolled back and removed from the queue with no retry mechanism. If this is acceptable for diagnostic logging, no action needed. If log durability matters, consider requeuing failed entries or persisting to a fallback (e.g., file).

Is losing log entries on transient DB errors acceptable for this use case?


80-103: LGTM!

The decorator setup and request capture logic handle edge cases properly, including working outside request context and failed JSON parsing.


142-165: Process instance ID extraction and log queuing look good.

The extraction logic correctly prioritizes response body over kwargs, and the deferred queuing approach addresses the transaction isolation concern from the previous review.

spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (1)

21-277: Comprehensive test coverage.

The test suite covers key scenarios well: config toggles, log creation, exception handling, API error extraction, process_instance_id from response vs kwargs, queue processing inside/outside Flask context, batch processing, and error handling. Good work.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (3)

156-188: Consider adding negative-path cases for process_instance_id extraction

This test nicely verifies that process_instance_id from the response body takes precedence over kwargs, and that the value is persisted on APILogModel, which matches the intended design where process_instance_id is just an integer field and not a ForeignKey. Based on learnings, it might be useful to add:

  • A case where only the kwarg is present and the response has no process_instance block (to assert fallback behavior), and
  • A case where neither source is present, asserting that process_instance_id is left None or handled as intended.

These would lock in the full priority and default behavior for future changes.
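The precedence being tested can be captured as a small pure function. The exact body keys here ("process_instance_id" and a nested "process_instance" → "id") are assumptions for illustration; the PR's real extraction logic lives in api_logging.py:

```python
def extract_process_instance_id(response_body, view_kwargs):
    """Response body takes precedence over view kwargs; None when neither is present."""
    if isinstance(response_body, dict):
        # Prefer an explicit top-level id in the response (assumed key name)
        if "process_instance_id" in response_body:
            return response_body["process_instance_id"]
        # Fall back to a nested process_instance object (assumed shape)
        nested = response_body.get("process_instance")
        if isinstance(nested, dict) and "id" in nested:
            return nested["id"]
    # Finally fall back to the view's keyword arguments
    return view_kwargs.get("process_instance_id")
```

Negative-path tests would then assert the kwargs fallback and the None default explicitly, locking in the priority order for future changes.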


190-265: Deduplicate _log_queue clearing/setup across queue-focused tests

The queue-centric tests (test_queue_processing_outside_flask_context, test_batch_log_processing, test_teardown_handler_setup) each manually clear _log_queue (and often the APILogModel table) in slightly different ways. To keep things DRY and make future changes to queue setup less error‑prone, consider extracting a small helper or fixture that resets both DB state and _log_queue before each such test, for example on BaseTest:

 class TestDeferredAPILogging(BaseTest):
+    def _reset_api_logging_state(self) -> None:
+        db.session.query(APILogModel).delete()
+        db.session.commit()
+        while not _log_queue.empty():
+            _log_queue.get_nowait()
+
     def test_queue_processing_outside_flask_context(self, app: Flask) -> None:
         """Test that queueing works outside Flask request context."""
         with app.app_context():
-            # Clear existing logs and queue
-            db.session.query(APILogModel).delete()
-            db.session.commit()
-
-            # Clear the queue
-            while not _log_queue.empty():
-                _log_queue.get_nowait()
+            self._reset_api_logging_state()

Same helper could be reused in the other queue-related tests.


266-281: Isolate test_error_handling_in_log_processing by resetting DB and _log_queue state

Unlike the other tests, this one does not clear existing APILogModel rows or _log_queue before running. If other tests (now or in the future) leave entries in the queue or table, this test could end up exercising the error path with more data than intended, or depending on global state. To keep tests independent and deterministic, consider resetting state at the top of this test:

     @patch("spiffworkflow_backend.utils.api_logging.logger")
     def test_error_handling_in_log_processing(self, mock_logger: Any, app: Flask) -> None:
         """Test error handling when log processing fails."""
         with app.app_context():
-            # Create a valid log entry but mock db.session.commit to raise an exception
+            # Ensure a clean state for this test
+            db.session.query(APILogModel).delete()
+            db.session.commit()
+            while not _log_queue.empty():
+                _log_queue.get_nowait()
+
+            # Create a valid log entry but mock db.session.commit to raise an exception
             test_entry = APILogModel(endpoint="/error_test", method="GET", status_code=200, duration_ms=100)

This keeps behavior self-contained while still verifying that _process_log_queue logs the commit failure via logger.error.
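The error path being verified here can also be illustrated standalone (a simplified model: `process_entries` stands in for `_process_log_queue`, and a `MagicMock` session replaces `db.session`):

```python
import logging
from unittest.mock import MagicMock, patch

logger = logging.getLogger("api_logging_sketch")


def process_entries(entries: list[dict], session: object) -> None:
    # Mirrors the error path under test: a failed commit is logged and
    # rolled back rather than raised, so teardown never breaks the response.
    try:
        for entry in entries:
            session.add(entry)
        session.commit()
    except Exception:
        logger.error("failed to flush API log batch", exc_info=True)
        session.rollback()


session = MagicMock()
session.commit.side_effect = RuntimeError("db down")
with patch.object(logger, "error") as mock_error:
    process_entries([{"endpoint": "/error_test"}], session)

assert mock_error.called  # failure was logged, not raised
assert session.rollback.called  # session left in a clean state
```

This is a sketch under stated assumptions, not the production code; the real implementation commits `APILogModel` rows and uses the module-level `spiffworkflow_backend.utils.api_logging.logger` that the test patches.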

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 41ca3e7 and d7fce32.

📒 Files selected for processing (2)
  • spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_authentication.py (1 hunks)
  • spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: jasquat
Repo: sartography/spiff-arena PR: 2648
File: spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py:0-0
Timestamp: 2025-12-03T15:42:45.431Z
Learning: In the spiff-arena codebase, the APILogModel's process_instance_id field intentionally does not use a ForeignKey constraint to ProcessInstanceModel. This allows API logs to persist independently of the process instance lifecycle, even if the referenced process instances are deleted.
🧬 Code graph analysis (1)
spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_authentication.py (1)
spiffworkflow-backend/conftest.py (1)
  • app (50-51)
🔇 Additional comments (2)
spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_authentication.py (1)

304-308: LGTM! App fixture addition is appropriate.

The addition of the app: Flask parameter ensures the Flask application context is properly initialized for this test. While the parameter isn't explicitly referenced in the test body, it's a standard pytest pattern where fixtures are injected for their setup side effects—in this case, establishing the app context needed for db.session operations and the new API logging infrastructure introduced in this PR. This change also brings consistency with other test methods in the file.

spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (1)

22-155: Decorator behavior tests are comprehensive and align well with the implementation

These tests cover the key paths: logging disabled by default, enabled happy path with request/response bodies, generic exceptions mapped to 500, and ApiError with custom status and details. They also correctly simulate teardown via g.has_pending_api_logs and _process_log_queue, which should make regressions in the decorator or teardown wiring very visible.

@burnettk burnettk merged commit 57f2c9e into main Dec 3, 2025
25 checks passed
@burnettk burnettk deleted the api-logs branch December 3, 2025 16:18