Api logs to db #2648

Conversation
📝 Walkthrough

Adds deferred, batched API request/response logging: a new APILogModel, an api_logging utility with a decorator and teardown-based flush, a config flag SPIFFWORKFLOW_BACKEND_API_LOGGING_ENABLED, decorators applied to selected routes, initialization wiring, and unit tests, plus minor CI workflow branch updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as HTTP Client
    participant Flask as Flask Request Handler
    participant Decorator as log_api_interaction
    participant Queue as In-Memory Queue
    participant DB as Database (APILogModel)
    participant Logger as App Logger
    Client->>Flask: HTTP request -> decorated endpoint
    Flask->>Decorator: Invoke wrapper (start timer, capture request)
    Decorator->>Flask: Call original view
    alt view returns
        Flask->>Decorator: Return response (various types)
        Decorator->>Decorator: Extract status, body, process_instance_id, duration
    else view raises
        Flask->>Decorator: Exception
        Decorator->>Decorator: Capture error details, set status=500
    end
    Decorator->>Queue: _queue_api_log_entry(log payload)
    Note over Flask,Queue: Request ends -> teardown
    Flask->>Queue: teardown triggers _process_log_queue()
    alt within Flask context
        Queue->>DB: Bulk insert queued APILogModel entries
        DB->>DB: Commit
    else outside context / immediate
        Queue->>DB: Immediate bulk insert
    end
    alt DB error
        DB->>Logger: Log exception while processing
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 3
🧹 Nitpick comments (2)
spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py (1)
78-88: Consider moving the cast import to the top of the file.

The local import of cast on Line 78 works but is unconventional. Moving it to the top-level imports would improve consistency with the rest of the codebase. Apply this diff:

```diff
-from typing import Any
+from typing import Any
+from typing import cast
```

And remove the local import:

```diff
 def process_instance_run_deprecated(
     modified_process_model_identifier: str,
     process_instance_id: int,
     force_run: bool = False,
     execution_mode: str | None = None,
 ) -> flask.wrappers.Response:
-    from typing import cast
-
     return cast(
         flask.wrappers.Response,
         process_instance_run(
```

spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
65-69: Remove redundant pass statement.

Line 69 contains a pass statement that is unnecessary after the logger.warning call. The exception handler already has a statement, so the pass is redundant. Apply this diff:

```diff
 except Exception:
     # We might fail to parse JSON if the response is not actually JSON
     # despite the header, or some other issue. We'll just skip the body.
     logger.warning("Failed to parse response body as JSON", exc_info=True)
-    pass
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- spiffworkflow-backend/migrations/versions/cae1a9624131_add_api_log_table.py is excluded by !spiffworkflow-backend/migrations/**
📒 Files selected for processing (9)
- .github/workflows/build_docker_images.yml (2 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/config/default.py (1 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/exceptions/api_error.py (1 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/load_database_models.py (1 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/routes/messages_controller.py (2 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py (2 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1 hunks)
- spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_api_logging.py (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2024-10-09T01:31:36.477Z
Learnt from: jasquat
Repo: sartography/spiff-arena PR: 1902
File: spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py:584-586
Timestamp: 2024-10-09T01:31:36.477Z
Learning: Migration attempts for process instances are logged in a specific table, and the flask HTTP log suffices for logging purposes.
Applied to files:
spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py
📚 Learning: 2025-11-21T16:41:27.750Z
Learnt from: Mwindo
Repo: sartography/spiff-arena PR: 2619
File: spiffworkflow-backend/src/spiffworkflow_backend/services/authentication_service.py:121-134
Timestamp: 2025-11-21T16:41:27.750Z
Learning: In the spiff-arena codebase, models that extend SpiffworkflowBaseDBModel automatically have their created_at_in_seconds and updated_at_in_seconds fields populated via SQLAlchemy event listeners (before_insert and before_update) defined in spiffworkflow-backend/src/spiffworkflow_backend/models/db.py. Manual timestamp setting is not required for these fields.
Applied to files:
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py
🧬 Code graph analysis (7)
spiffworkflow-backend/src/spiffworkflow_backend/load_database_models.py (1)
- spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
  - APILogModel (9-20)

spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py (2)
- spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
  - log_api_interaction (17-102)
- spiffworkflow-backend/src/spiffworkflow_backend/models/process_model.py (1)
  - modified_process_model_identifier (76-77)

spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
- spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
  - APILogModel (9-20)

spiffworkflow-backend/src/spiffworkflow_backend/exceptions/api_error.py (1)
- spiffworkflow-backend/src/spiffworkflow_backend/models/process_model.py (2)
  - to_dict (86-91)
  - serialized (106-114)

spiffworkflow-backend/src/spiffworkflow_backend/routes/messages_controller.py (1)
- spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
  - log_api_interaction (17-102)

spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_api_logging.py (2)
- spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
  - APILogModel (9-20)
- spiffworkflow-backend/tests/spiffworkflow_backend/helpers/base_test.py (2)
  - app_config_mock (617-623)
  - logged_in_headers (62-68)

spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (2)
- spiffworkflow-backend/src/spiffworkflow_backend/models/db.py (1)
  - SpiffworkflowBaseDBModel (27-75)
- spiffworkflow-backend/src/spiffworkflow_backend/__init__.py (1)
  - default (34-51)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (15)
- GitHub Check: typeguard 3.12 / ubuntu-latest sqlite
- GitHub Check: typeguard 3.11 / ubuntu-latest sqlite
- GitHub Check: tests 3.11 / macos-latest sqlite
- GitHub Check: tests 3.10 / ubuntu-latest sqlite
- GitHub Check: tests 3.12 / ubuntu-latest postgres
- GitHub Check: tests 3.11 / ubuntu-latest postgres
- GitHub Check: tests 3.12 / ubuntu-latest mysql
- GitHub Check: tests 3.12 / ubuntu-latest sqlite
- GitHub Check: tests 3.11 / ubuntu-latest sqlite
- GitHub Check: tests 3.11 / ubuntu-latest mysql
- GitHub Check: tests 3.13.7 / ubuntu-latest mysql
- GitHub Check: build
- GitHub Check: create_docker_images (sartography/spiffworkflow-backend, spiffworkflow-backend, Backend component...
- GitHub Check: create_docker_images (sartography/spiffworkflow-frontend, spiffworkflow-frontend, Frontend compon...
- GitHub Check: create_docker_images (sartography/connector-proxy-demo, connector-proxy-demo, Connector proxy com...
🔇 Additional comments (2)
.github/workflows/build_docker_images.yml (2)
32-35: Verify whether the api-logs branch trigger should be permanent.

The addition of - api-logs to the push branches will trigger Docker image builds on every commit to this feature branch. Confirm whether this is intended to remain after the PR is merged to main, or if it should be removed before completion.
157-159: Trailing newline is a benign formatting adjustment.

The addition of the trailing newline is a minor stylistic change with no functional impact.
Actionable comments posted: 1
♻️ Duplicate comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
13-13: Use timezone-aware datetime instead of deprecated utcnow().

The use of datetime.utcnow() is deprecated in Python 3.12+. Consider using datetime.now(timezone.utc) for timezone-aware timestamps, as previously suggested.
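For illustration, a minimal sketch of the suggested replacement as a column default, using plain SQLAlchemy (the column name is illustrative; the codebase uses flask-sqlalchemy's db.Column, but the pattern is the same):

```python
from datetime import datetime, timezone

import sqlalchemy as sa

# Deprecated pattern (naive UTC):
#   sa.Column("created_at", sa.DateTime, default=datetime.utcnow)

# Timezone-aware replacement; the lambda defers evaluation so each
# insert gets a fresh timestamp rather than the import-time value.
created_at = sa.Column(
    "created_at",
    sa.DateTime(timezone=True),
    default=lambda: datetime.now(timezone.utc),
)
```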
🧹 Nitpick comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
14-20: Consider reducing the number of indexes for better write performance.

The model currently has 6 indexes, which may impact INSERT performance on a high-volume logging table. Indexes on method, status_code, and duration_ms provide limited standalone value since these fields are typically filtered alongside other criteria. Consider keeping only the most critical indexes:

```diff
-    endpoint: str = db.Column(db.String(255), index=True)
-    method: str = db.Column(db.String(10), index=True)
+    endpoint: str = db.Column(db.String(255), index=True)
+    method: str = db.Column(db.String(10))
     request_body: dict | None = db.Column(db.JSON)
     response_body: dict | None = db.Column(db.JSON)
-    status_code: int = db.Column(db.Integer, index=True)
+    status_code: int = db.Column(db.Integer)
     process_instance_id: int | None = db.Column(db.Integer, nullable=True, index=True)
-    duration_ms: int = db.Column(db.Integer, index=True)
+    duration_ms: int = db.Column(db.Integer)
```

Alternatively, consider composite indexes if common query patterns filter on multiple fields together (e.g., endpoint + status_code).
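To illustrate the composite-index alternative, a hedged sketch in plain SQLAlchemy 2.0 style (the model stand-in, index name, and column pairing are assumptions for illustration, not part of the PR):

```python
import sqlalchemy as sa
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class APILog(Base):  # illustrative stand-in for APILogModel
    __tablename__ = "api_log"
    __table_args__ = (
        # One composite index serves endpoint+status_code filters and,
        # via leftmost-prefix matching, endpoint-only filters as well.
        sa.Index("ix_api_log_endpoint_status_code", "endpoint", "status_code"),
    )

    id: Mapped[int] = mapped_column(primary_key=True)
    endpoint: Mapped[str] = mapped_column(sa.String(255))
    method: Mapped[str] = mapped_column(sa.String(10))
    status_code: Mapped[int] = mapped_column(sa.Integer)
    duration_ms: Mapped[int] = mapped_column(sa.Integer)
```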
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- spiffworkflow-backend/migrations/versions/4efc3d8655be_add_api_log_table.py is excluded by !spiffworkflow-backend/migrations/**
- spiffworkflow-backend/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (1)
- spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-21T16:41:27.750Z
Learnt from: Mwindo
Repo: sartography/spiff-arena PR: 2619
File: spiffworkflow-backend/src/spiffworkflow_backend/services/authentication_service.py:121-134
Timestamp: 2025-11-21T16:41:27.750Z
Learning: In the spiff-arena codebase, models that extend SpiffworkflowBaseDBModel automatically have their created_at_in_seconds and updated_at_in_seconds fields populated via SQLAlchemy event listeners (before_insert and before_update) defined in spiffworkflow-backend/src/spiffworkflow_backend/models/db.py. Manual timestamp setting is not required for these fields.
Applied to files:
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py
🧬 Code graph analysis (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (2)
- spiffworkflow-backend/src/spiffworkflow_backend/models/db.py (1)
  - SpiffworkflowBaseDBModel (27-75)
- spiffworkflow-backend/src/spiffworkflow_backend/__init__.py (1)
  - default (34-51)
🔇 Additional comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
1-10: LGTM! Clean model structure.

The imports and class declaration follow the established patterns in the codebase, properly extending SpiffworkflowBaseDBModel.
Actionable comments posted: 2
🧹 Nitpick comments (5)
spiffworkflow-backend/test_deferred_logging.py (1)
25-55: Ruff T201/W292: prints and missing trailing newline in a script-like helper.

Ruff flags the print calls in this file (T201) and the missing newline at end of file (W292). Either:

- Convert prints to logging (or at least gate them) if you want to keep a clean lint run, or
- Add a file-level # ruff: noqa: T201 (or per-line # noqa: T201) suppression, and ensure there is a trailing newline at EOF.

Example minimal change at top of file:

```diff
-#!/usr/bin/env python3
+#!/usr/bin/env python3
+# ruff: noqa: T201
```

And ensure a newline after main() at the end of the file.

spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (4)
21-25: Remove or use _log_worker_started/_log_worker_lock to avoid dead-code confusion.

The _log_worker_started flag and _log_worker_lock are defined but never used. They look like remnants of an earlier design (e.g., a background worker thread) and can be misleading. Either wire them into an actual worker implementation or drop them for now to keep the module focused on the current teardown-driven batching approach.
39-49: Clarify or fix the “new session” comment in _process_log_queue.

The comment says “Use a new session to avoid transaction conflicts”, but the code uses the global db.session instead of creating a separate SQLAlchemy session. That mismatch is confusing for future readers and reviewers. If you don't intend to manage a truly separate session here, I'd update or remove the comment, e.g.:

```diff
-    # Use a new session to avoid transaction conflicts
     for entry in entries_to_commit:
         db.session.add(entry)
```

Or rephrase the comment to describe that this is a separate commit performed during app teardown, not inside the main request transaction.
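If a truly separate session were actually wanted here, a minimal sketch looks like the following (an assumption for contrast, not what the PR does; under flask-sqlalchemy the engine argument would be db.engine):

```python
from typing import Any

from sqlalchemy import Engine
from sqlalchemy.orm import Session


def commit_logs_in_separate_session(engine: Engine, entries: list[Any]) -> None:
    """Commit log rows in their own short-lived session.

    Keeps log persistence isolated from whatever transactional state
    the request-scoped db.session is carrying.
    """
    with Session(engine) as session:  # session is closed on exit
        session.add_all(entries)
        session.commit()
```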
90-99: Use has_request_context() for clearer request context detection.

The current approach with if request: inside a try/except RuntimeError works but mixes control flow with exception handling. Flask recommends explicitly checking the request context:

```diff
-from flask import request
+from flask import has_request_context, request
 ...
-    if request:
-        try:
-            endpoint = request.path
-            method = request.method
-            request_body = request.get_json(silent=True)
-        except RuntimeError:
-            # Working outside of request context
-            pass
+    if has_request_context():
+        try:
+            endpoint = request.path
+            method = request.method
+            request_body = request.get_json(silent=True)
         except Exception:
             request_body = None
```

This keeps behavior identical but makes the intent explicit and is the recommended Flask pattern.
51-61: Current implementation is safe, but consider adding explicit app context requirements for clarity.

The code already guards against missing app context. At line 78, log_api_interaction accesses current_app.config, which requires an active app context and will fail immediately if one doesn't exist. Therefore, _queue_api_log_entry cannot be called without an app context, and the fallback path (lines 58–61) only executes outside request context while app context still exists. In that scenario, db.session access in _process_log_queue is safe.

That said, the exception handling comment at line 59 is slightly misleading ("like in tests"), and explicit guards would improve maintainability if _queue_api_log_entry is reused elsewhere in the future. Consider either:

- Adding a guard like current_app.app_context().push() in _process_log_queue for defensive safety, or
- Adding a docstring to _queue_api_log_entry stating it requires an active app context.
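For the first option, a lighter-weight guard than pushing a context is to check flask.has_app_context() explicitly. A minimal sketch (the function body is a simplified stand-in for the module's real flush logic):

```python
from queue import Queue
from typing import Any

from flask import has_app_context

_log_queue: Queue = Queue()


def _process_log_queue() -> None:
    """Flush queued entries. Requires an active Flask app context."""
    if not has_app_context():
        # Without an app context there is no db.session to commit
        # against; leave entries queued for the next teardown.
        return
    entries: list[Any] = []
    while not _log_queue.empty():
        entries.append(_log_queue.get_nowait())
    # ... add entries to db.session and commit as one batch ...
```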
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- spiffworkflow-backend/src/spiffworkflow_backend/__init__.py (2 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1 hunks)
- spiffworkflow-backend/test_deferred_logging.py (1 hunks)
- spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_api_logging.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_api_logging.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-25T14:52:21.046Z
Learnt from: CR
Repo: sartography/spiff-arena PR: 0
File: spiffworkflow-backend/AGENTS.md:0-0
Timestamp: 2025-11-25T14:52:21.046Z
Learning: Run specific backend test functions using `uv run pytest -k <test_function_name>` from the spiffworkflow-backend directory
Applied to files:
spiffworkflow-backend/test_deferred_logging.py
🧬 Code graph analysis (3)
spiffworkflow-backend/test_deferred_logging.py (2)
- spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
  - APILogModel (10-23)
- spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (2)
  - log_api_interaction (74-162)
  - setup_deferred_logging (64-71)

spiffworkflow-backend/src/spiffworkflow_backend/__init__.py (2)
- spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
  - setup_deferred_logging (64-71)
- spiffworkflow-backend/conftest.py (1)
  - app (50-51)

spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
- spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py (1)
  - APILogModel (10-23)
🪛 Ruff (0.14.7)
spiffworkflow-backend/test_deferred_logging.py
25-25: print found
Remove print
(T201)
37-37: print found
Remove print
(T201)
38-38: print found
Remove print
(T201)
43-43: print found
Remove print
(T201)
45-45: print found
Remove print
(T201)
46-46: print found
Remove print
(T201)
52-52: print found
Remove print
(T201)
53-53: print found
Remove print
(T201)
55-55: print found
Remove print
(T201)
58-58: No newline at end of file
Add trailing newline
(W292)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build
- GitHub Check: create_docker_images (sartography/connector-proxy-demo, connector-proxy-demo, Connector proxy com...
- GitHub Check: create_docker_images (sartography/spiffworkflow-backend, spiffworkflow-backend, Backend component...
🔇 Additional comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/__init__.py (1)
25-25: New deferred API logging wiring and placement look correct.

Importing setup_deferred_logging and invoking it right after migrate.init_app(app, db) cleanly hooks the teardown-based logging into the app lifecycle, with DB and config already initialized and before routes/middleware are added. No functional or lifecycle issues stand out here.

Also applies to: 82-82
Actionable comments posted: 0
♻️ Duplicate comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
105-140: Plain dict/list responses are still logged as status 500.

The previous review flagged this issue. While tuple responses are now handled (lines 133-137), plain return values like return {"data": "ok"} still result in status_code remaining at 500, since lines 138-140 only set response_body without updating the status. Apply this diff to default to 200 for successful non-Response returns:

```diff
-    status_code = 500
+    status_code = 200
     response_body = None
     process_instance_id = None
     response = None
     try:
         response = func(*args, **kwargs)
     except Exception as e:
         # If an exception occurs, we try to get the status code from it if it's an API error
+        status_code = 500
         if hasattr(e, "status_code"):
             status_code = e.status_code
```
🧹 Nitpick comments (4)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1)
22-26: Remove unused module-level variables.
_log_worker_started and _log_worker_lock are declared but never used in this module. These appear to be leftovers from a different implementation approach.

```diff
 # Thread-safe queue for deferred API log entries
 _log_queue: Queue[APILogModel] = Queue()
-_log_worker_started = False
-_log_worker_lock = threading.Lock()
```

spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (3)
1-18: Clean up unused imports.

Static analysis identifies unused imports that should be removed:

- json (line 3)
- MagicMock (line 5)
- setup_deferred_logging (line 11)

```diff
 """Unit tests for deferred API logging functionality."""
-import json
 from typing import Any
-from unittest.mock import patch, MagicMock
+from unittest.mock import patch

 from flask import Flask, g
 import pytest

 from spiffworkflow_backend.models.api_log_model import APILogModel
 from spiffworkflow_backend.models.db import db
 from spiffworkflow_backend.utils.api_logging import (
     log_api_interaction,
-    setup_deferred_logging,
     _queue_api_log_entry,
     _process_log_queue,
     _log_queue
 )
```
170-186: Address unused variable warning.

The result variable on line 173 is assigned but never used. Prefix with underscore to indicate it's intentionally unused.

```diff
     try:
         with app.test_request_context("/test"):
             # Test extraction from kwargs
-            result = test_function_with_kwargs(process_instance_id=123)
+            _ = test_function_with_kwargs(process_instance_id=123)

             # Process any queued logs
```
296-299: Add trailing newline at end of file.

Static analysis flags missing newline at EOF (W292).

```diff
     call_args = mock_logger.error.call_args[0][0]
     assert "Failed to commit API log entries" in call_args
+
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (1 hunks)
- spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-25T14:52:21.046Z
Learnt from: CR
Repo: sartography/spiff-arena PR: 0
File: spiffworkflow-backend/AGENTS.md:0-0
Timestamp: 2025-11-25T14:52:21.046Z
Learning: Run specific backend test functions using `uv run pytest -k <test_function_name>` from the spiffworkflow-backend directory
Applied to files:
spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py
🪛 Flake8 (7.3.0)
spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py
[error] 3-3: 'json' imported but unused
(F401)
[error] 5-5: 'unittest.mock.MagicMock' imported but unused
(F401)
[error] 11-11: 'spiffworkflow_backend.utils.api_logging.setup_deferred_logging' imported but unused
(F401)
[error] 173-173: local variable 'result' is assigned to but never used
(F841)
🪛 Ruff (0.14.7)
spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py
173-173: Local variable result is assigned to but never used
Remove assignment to unused variable result
(F841)
299-299: No newline at end of file
Add trailing newline
(W292)
🔇 Additional comments (4)
spiffworkflow-backend/src/spiffworkflow_backend/utils/api_logging.py (3)
40-49: Log entries are lost on commit failure.

When db.session.commit() fails, the entries are rolled back and removed from the queue with no retry mechanism. If this is acceptable for diagnostic logging, no action needed. If log durability matters, consider requeuing failed entries or persisting to a fallback (e.g., file), as sketched below.

Is losing log entries on transient DB errors acceptable for this use case?
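One possible shape for the requeue option, as a hedged sketch (the retry cap and the commit callable are invented for illustration; _log_queue follows the module as described in this review):

```python
import logging
from queue import Queue
from typing import Any, Callable

logger = logging.getLogger(__name__)

_log_queue: Queue = Queue()
MAX_FLUSH_ATTEMPTS = 3  # invented cap so a poisoned batch is not retried forever


def flush_with_requeue(
    entries_to_commit: list[Any], commit: Callable[[list[Any]], None]
) -> None:
    """Try to commit a batch; on failure, requeue entries for a later flush."""
    try:
        commit(entries_to_commit)
    except Exception:
        logger.exception("Failed to commit API log entries; requeueing batch")
        for entry in entries_to_commit:
            attempts = getattr(entry, "_flush_attempts", 0) + 1
            entry._flush_attempts = attempts  # track retries on the entry itself
            if attempts < MAX_FLUSH_ATTEMPTS:
                _log_queue.put(entry)  # picked up on the next teardown
            # else: drop the entry (a file-based fallback could go here)
```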
80-103: LGTM!

The decorator setup and request capture logic handle edge cases properly, including working outside request context and failed JSON parsing.

142-165: Process instance ID extraction and log queuing look good.

The extraction logic correctly prioritizes response body over kwargs, and the deferred queuing approach addresses the transaction isolation concern from the previous review.
spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (1)
21-277: Comprehensive test coverage.

The test suite covers key scenarios well: config toggles, log creation, exception handling, API error extraction, process_instance_id from response vs kwargs, queue processing inside/outside Flask context, batch processing, and error handling. Good work.
Actionable comments posted: 0
🧹 Nitpick comments (3)
spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (3)
156-188: Consider adding negative-path cases for process_instance_id extraction.

This test nicely verifies that process_instance_id from the response body takes precedence over kwargs, and that the value is persisted on APILogModel, which matches the intended design where process_instance_id is just an integer field and not a ForeignKey. Based on learnings, it might be useful to add:

- A case where only the kwarg is present and the response has no process_instance block (to assert fallback behavior), and
- A case where neither source is present, asserting that process_instance_id is left None or handled as intended.

These would lock in the full priority and default behavior for future changes. A sketch of the two cases follows this comment.
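A hedged sketch of what those two cases might look like, as methods on the same TestDeferredAPILogging class (the helper test_function_without_process_instance is hypothetical, and the assertions encode assumed fallback behavior rather than verified semantics):

```python
def test_process_instance_id_falls_back_to_kwarg(self, app: Flask) -> None:
    """Only the kwarg is present; the response has no process_instance block."""
    with app.test_request_context("/test"):
        test_function_with_kwargs(process_instance_id=123)
        _process_log_queue()
    log = db.session.query(APILogModel).order_by(APILogModel.id.desc()).first()
    assert log is not None
    assert log.process_instance_id == 123  # kwarg used as the fallback source


def test_process_instance_id_defaults_to_none(self, app: Flask) -> None:
    """Neither the kwarg nor the response body provides an id."""
    with app.test_request_context("/test"):
        test_function_without_process_instance()  # hypothetical helper
        _process_log_queue()
    log = db.session.query(APILogModel).order_by(APILogModel.id.desc()).first()
    assert log is not None
    assert log.process_instance_id is None  # assumed default when no source exists
```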
190-265: Deduplicate _log_queue clearing/setup across queue-focused tests.

The queue-centric tests (test_queue_processing_outside_flask_context, test_batch_log_processing, test_teardown_handler_setup) each manually clear _log_queue (and often the APILogModel table) in slightly different ways. To keep things DRY and make future changes to queue setup less error-prone, consider extracting a small helper or fixture that resets both DB state and _log_queue before each such test, for example on BaseTest:

```diff
 class TestDeferredAPILogging(BaseTest):
+    def _reset_api_logging_state(self) -> None:
+        db.session.query(APILogModel).delete()
+        db.session.commit()
+        while not _log_queue.empty():
+            _log_queue.get_nowait()
+
     def test_queue_processing_outside_flask_context(self, app: Flask) -> None:
         """Test that queueing works outside Flask request context."""
         with app.app_context():
-            # Clear existing logs and queue
-            db.session.query(APILogModel).delete()
-            db.session.commit()
-
-            # Clear the queue
-            while not _log_queue.empty():
-                _log_queue.get_nowait()
+            self._reset_api_logging_state()
```

Same helper could be reused in the other queue-related tests.
266-281: Isolate test_error_handling_in_log_processing by resetting DB and _log_queue state.

Unlike the other tests, this one does not clear existing APILogModel rows or _log_queue before running. If other tests (now or in the future) leave entries in the queue or table, this test could end up exercising the error path with more data than intended or depend on global state. To keep tests independent and deterministic, consider resetting state at the top of this test:

```diff
 @patch("spiffworkflow_backend.utils.api_logging.logger")
 def test_error_handling_in_log_processing(self, mock_logger: Any, app: Flask) -> None:
     """Test error handling when log processing fails."""
     with app.app_context():
-        # Create a valid log entry but mock db.session.commit to raise an exception
+        # Ensure a clean state for this test
+        db.session.query(APILogModel).delete()
+        db.session.commit()
+        while not _log_queue.empty():
+            _log_queue.get_nowait()
+
+        # Create a valid log entry but mock db.session.commit to raise an exception
         test_entry = APILogModel(endpoint="/error_test", method="GET", status_code=200, duration_ms=100)
```

This keeps behavior self-contained while still verifying that _process_log_queue logs the commit failure via logger.error.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_authentication.py (1 hunks)
- spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: jasquat
Repo: sartography/spiff-arena PR: 2648
File: spiffworkflow-backend/src/spiffworkflow_backend/models/api_log_model.py:0-0
Timestamp: 2025-12-03T15:42:45.431Z
Learning: In the spiff-arena codebase, the APILogModel's process_instance_id field intentionally does not use a ForeignKey constraint to ProcessInstanceModel. This allows API logs to persist independently of the process instance lifecycle, even if the referenced process instances are deleted.
🧬 Code graph analysis (1)
spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_authentication.py (1)
- spiffworkflow-backend/conftest.py (1)
  - app (50-51)
🔇 Additional comments (2)
spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_authentication.py (1)
304-308: LGTM! App fixture addition is appropriate.

The addition of the app: Flask parameter ensures the Flask application context is properly initialized for this test. While the parameter isn't explicitly referenced in the test body, it's a standard pytest pattern where fixtures are injected for their setup side effects, in this case establishing the app context needed for db.session operations and the new API logging infrastructure introduced in this PR. This change also brings consistency with other test methods in the file.

spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_deferred_api_logging.py (1)
22-155: Decorator behavior tests are comprehensive and align well with the implementation.

These tests cover the key paths: logging disabled by default, enabled happy path with request/response bodies, generic exceptions mapped to 500, and ApiError with custom status and details. They also correctly simulate teardown via g.has_pending_api_logs and _process_log_queue, which should make regressions in the decorator or teardown wiring very visible.
Summary by CodeRabbit
New Features
Chores
Tests