Fix: show success message for PDF uploads in frontend #531
base: main
Conversation
PR Compliance Guide 🔍
Below is a summary of compliance checks for this PR.
Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label
Walkthrough
The backend refactors resume processing into a service with text extraction, LLM JSON parsing, and data sanitization; the router adds imports and logging. A Pyright config is added and the FastAPI/Starlette constraints are updated. The frontend replaces the drag-and-drop uploader with a minimal component and new/updated hooks, and restructures the resume upload/improve API calls. There is also a minor devDependency bump.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor U as User
    participant FE as Frontend FileUpload component
    participant H as useFileUpload Hook
    participant API as Backend /upload
    participant RS as ResumeService.process_resume
    participant LLM as LLM
    participant ST as Storage
    U->>FE: Select resume file
    FE->>H: handleFileChange(file)
    H->>API: POST /upload (FormData)
    API->>RS: process_resume(file_content, filename, user_id)
    RS->>RS: _extract_text()
    RS->>LLM: Request structured JSON
    LLM-->>RS: JSON-like text
    RS->>RS: parse_llm_json_response() and validate_and_sanitize_resume_data()
    RS->>ST: Store resume & metadata (TODO)
    ST-->>RS: resume_id
    RS-->>API: { resume_id, request_id, message }
    API-->>H: JSON response
    H-->>FE: onUploadSuccess(resume_id)
    FE-->>U: Show success/URL
    alt Error
        RS-->>API: error { detail }
        API-->>H: HTTP error
        H-->>FE: onUploadError(message)
        FE-->>U: Show error
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
PR Code Suggestions ✨
Explore these optional code suggestions:
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/backend/app/api/router/v1/resume.py (1)
97-105: Endpoint claims DB storage, but the service doesn’t store — fix the contract. convert_and_store_resume returns an ID but has a TODO for DB persistence; nothing is actually stored. The success message asserts “stored in the DB”, which is misleading.
Two approaches:
- Implement storage in ResumeService.convert_and_store_resume and return the persisted resume_id, or
- Switch to process_resume (after implementing _store_resume).
I can provide a minimal storage stub or wire-up if you confirm the target models and repository API. As per coding guidelines
Also applies to: 120-124
apps/backend/requirements.txt (1)
1-56: Add missing dependencies for resume_service
apps/backend/app/services/resume_service.py imports docx2txt and PyPDF2, but neither is declared in requirements.txt; add pinned entries for both to avoid a runtime ImportError.
🧹 Nitpick comments (13)
apps/backend/app/api/router/v1/resume.py (4)
5-5: Remove unused imports. parse_llm_json_response and validate_and_sanitize_resume_data are imported but unused.

```diff
-from app.services.resume_service import parse_llm_json_response, validate_and_sanitize_resume_data
```
58-63: Harden file-type validation — inspect magic bytes, not just content_type. content_type is client-controlled. Prefer server-side sniffing (you already depend on magika) before processing.

```diff
     if file.content_type not in allowed_content_types:
         raise HTTPException(
             status_code=status.HTTP_400_BAD_REQUEST,
             detail="Invalid file type. Only PDF and DOCX files are allowed.",
         )
+    # Optional: server-side validation using magika
+    # from magika import Magika
+    # detector = Magika()
+    # kind = detector.identify_bytes(await file.read())
+    # if kind.mime_type not in allowed_content_types:
+    #     raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid file content.")
```

As per coding guidelines
82-84: Avoid logging potentially sensitive file metadata at info level. Filenames can contain PII. Consider downgrading to debug or redacting the name.

```diff
-logger.info(f"Received file upload: {file.filename}, type: {file.content_type}, size: {len(file_bytes)} bytes")
+logger.debug("Received file upload: type=%s size=%d", file.content_type, len(file_bytes))
```
120-124: Consider a consistent response format. Other endpoints return JSONResponse with headers; this one returns a plain dict. For consistency, consider JSONResponse with a Pydantic response model.

```diff
-    return {
-        "message": f"File {file.filename} successfully processed as MD and stored in the DB",
-        "request_id": request_id,
-        "resume_id": resume_id,
-    }
+    return JSONResponse(
+        content={
+            "message": f"File {file.filename} successfully processed as MD and stored in the DB",
+            "request_id": request_id,
+            "resume_id": resume_id,
+        }
+    )
```

As per coding guidelines
apps/backend/app/services/resume_service.py (2)
46-47: Prefer logging.exception in except blocks. It captures stack traces without manual traceback plumbing.

```diff
-    self.logger.error(f"Failed to convert and store resume: {str(e)}")
+    self.logger.exception("Failed to convert and store resume")
@@
-    self.logger.error(f"Error extracting text from (unknown): {str(e)}")
+    self.logger.exception("Error extracting text from %s", filename)
@@
-    logger.error(f"JSON parse failed even after cleaning: {e2}")
-    logger.error(f"Problematic JSON section: {response_text[max(0, e2.pos-50):min(len(response_text), e2.pos+50)]}")
+    logger.exception("JSON parse failed even after cleaning")
+    logger.error("Problematic JSON section: %s", response_text[max(0, e2.pos-50):min(len(response_text), e2.pos+50)])
```

Also applies to: 71-72, 119-121
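A tiny standalone demonstration (not project code) of why logging.exception beats logging.error inside an except block: it records the active traceback for free, at the same ERROR severity:

```python
import io
import logging

# Route this demo logger to a string buffer so we can inspect what was written.
buf = io.StringIO()
logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.DEBUG)

try:
    raise ValueError("JSON parse failed")
except ValueError:
    # Same level as logger.error, but appends the current traceback.
    logger.exception("parse step failed")

print("Traceback" in buf.getvalue())  # True
```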
139-139: Preserve exception context with raise ... from. It helps distinguish parsing errors from errors raised inside the handler.

```diff
-    raise ValueError(f"Failed to parse JSON response: {str(e2)}")
+    raise ValueError("Failed to parse JSON response") from e2
```

apps/frontend/hooks/useFileUpload.ts (4)
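As a self-contained sketch (a simplified stand-in, not the project's parse_llm_json_response), `raise ... from` keeps the original parse error reachable on `__cause__`, so callers can still inspect it:

```python
import json

def parse_llm_json(text: str) -> dict:
    """Parse LLM output as JSON, chaining the original decode error."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as e2:
        # The chained exception survives on the new error's __cause__.
        raise ValueError("Failed to parse JSON response") from e2

try:
    parse_llm_json("not json")
except ValueError as err:
    print(type(err.__cause__).__name__)  # JSONDecodeError
```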
3-6: Type responses strictly; avoid any. Use the shared API type for strong typing and stricter state, and type uploadResponse as well. [As per coding guidelines]

Apply within the shown ranges:

```diff
 interface UseFileUploadProps {
-  onUploadSuccess?: (response: any) => void;
+  onUploadSuccess?: (response: ResumeUploadResponse) => void;
   onUploadError?: (error: string) => void;
 }
@@
-  const [uploadResponse, setUploadResponse] = useState<any>(null);
+  const [uploadResponse, setUploadResponse] = useState<ResumeUploadResponse | null>(null);
```

Add this import at the top of the file:

```typescript
import type { ResumeUploadResponse } from "../lib/api/resume";
```

Also applies to: 15-16
60-65: Harden non-OK response parsing (the body may not be JSON). The current code assumes JSON; many servers return text. Use content-type detection and fall back to text.

```diff
-  if (!response.ok) {
-    const errData = await response.json();
-    console.error("Upload failed:", errData);
-    throw new Error(errData.detail || "Upload failed");
-  }
+  if (!response.ok) {
+    const ct = response.headers.get("content-type") || "";
+    let detail = `Upload failed (status ${response.status})`;
+    if (ct.includes("application/json")) {
+      const errData = await response.json().catch(() => ({}));
+      detail = (errData as any).detail || detail;
+    } else {
+      const errText = await response.text().catch(() => "");
+      if (errText) detail += ` - ${errText.substring(0, 200)}`;
+    }
+    throw new Error(detail);
+  }
```
69-72: Avoid alert in a hook; delegate UI to the component. Side effects like alerts are UX-hostile and brittle under StrictMode. Use the provided callbacks and let the component render the success state. [As per coding guidelines]

```diff
-  if (onUploadSuccess) onUploadSuccess(data);
-  // Show success message to user
-  alert(`File uploaded successfully: ${selectedFile.name}`);
+  onUploadSuccess?.(data);
```
72-78: Type the catch param as unknown. Avoid any in strict TS and narrow safely. [As per coding guidelines]

```diff
-  } catch (err: any) {
-    const message =
-      err instanceof Error ? err.message : "Unexpected upload error";
+  } catch (err: unknown) {
+    const message = err instanceof Error ? err.message : "Unexpected upload error";
```

apps/frontend/lib/api/resume.ts (1)
20-24: Gracefully parse non-JSON errors. Avoid assuming JSON on error; fall back to text.

```diff
-  if (!response.ok) {
-    const errorData = await response.json();
-    console.error('Upload failed:', errorData);
-    throw new Error(errorData.detail || 'Upload failed');
-  }
+  if (!response.ok) {
+    const ct = response.headers.get('content-type') || '';
+    let detail = `Upload failed (status ${response.status})`;
+    if (ct.includes('application/json')) {
+      const errorData = await response.json().catch(() => ({} as any));
+      detail = (errorData as any).detail || JSON.stringify(errorData).slice(0, 200) || detail;
+    } else {
+      const text = await response.text().catch(() => '');
+      if (text) detail += ` - ${text.slice(0, 200)}`;
+    }
+    throw new Error(detail);
+  }
```

apps/frontend/hooks/use-file-upload.ts (2)
276-281: Error filtering by filename substring is brittle. Filtering errors with includes(fileName) may remove unrelated entries. Prefer associating errors with file IDs or exact name matches.

```diff
-  if (fileToRemove?.file && 'name' in fileToRemove.file) {
-    updatedErrors = prev.errors.filter((err: string) => !err.includes(fileToRemove.file.name as string))
-  }
+  if (fileToRemove?.file && 'name' in fileToRemove.file) {
+    const name = String(fileToRemove.file.name);
+    updatedErrors = prev.errors.filter((err: string) => !err.startsWith(name + ':'))
+  }
```

And when pushing errors elsewhere in this hook, prefix them with `${fileName}: ${message}` so they can be precisely removed.
21-32: Expose and enforce the accept/maxSize options (currently unused). The accept and maxSize options are declared but not enforced in addFilesAndUpload. Consider validating before queuing uploads.

Also applies to: 197-214
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
apps/frontend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (9)
apps/backend/app/api/router/v1/resume.py (2 hunks)
apps/backend/app/services/resume_service.py (1 hunks)
apps/backend/pyrightconfig.json (1 hunks)
apps/backend/requirements.txt (1 hunks)
apps/frontend/components/common/file-upload.tsx (1 hunks)
apps/frontend/hooks/use-file-upload.ts (4 hunks)
apps/frontend/hooks/useFileUpload.ts (1 hunks)
apps/frontend/lib/api/resume.ts (1 hunks)
apps/frontend/package.json (1 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
apps/backend/app/api/router/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
apps/backend/app/api/router/**/*.py: Use dependency injection for database sessions and services (e.g., Depends(get_db_session))
Use Pydantic models for all request and response validation in FastAPI endpoints
Validate file type and size for uploads (PDF/DOCX only) in backend file upload handlers
Use descriptive resource names for RESTful endpoints (e.g., /api/v1/resumes, /api/v1/jobs)
Implement proper HTTP status codes and error responses in API endpoints
Use consistent response formats with proper typing in API endpoints
Accept multipart/form-data for file uploads in API endpoints
Return structured JSON responses with consistent schemas in API endpoints
Implement streaming responses for long-running operations in API endpoints
Use proper pagination for list endpoints in API routes
Implement health check endpoints in backend API
Files:
apps/backend/app/api/router/v1/resume.py
apps/backend/app/api/router/v1/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Version APIs with /v1/ prefix for future compatibility
Files:
apps/backend/app/api/router/v1/resume.py
apps/frontend/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use TypeScript strict mode with proper interface definitions in frontend code
Files:
apps/frontend/components/common/file-upload.tsx
apps/frontend/lib/api/resume.ts
apps/frontend/hooks/useFileUpload.ts
apps/frontend/hooks/use-file-upload.ts
apps/frontend/components/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
apps/frontend/components/**/*.{ts,tsx}: Follow component composition patterns with Radix UI primitives in frontend components
Use Tailwind CSS utility classes with component variants in frontend components
Implement error boundaries and loading states for better UX in frontend components
Files:
apps/frontend/components/common/file-upload.tsx
apps/frontend/lib/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Create custom hooks for state management and API interactions in frontend code
Files:
apps/frontend/lib/api/resume.ts
apps/frontend/lib/api/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Create API client functions in lib/api/ for frontend-backend communication
Files:
apps/frontend/lib/api/resume.ts
apps/backend/app/{models,services}/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use SQLAlchemy ORM with async sessions for database access
Files:
apps/backend/app/services/resume_service.py
apps/backend/app/services/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
apps/backend/app/services/**/*.py: Use temporary files for document processing and always clean up after processing
Implement async processing for file operations in backend services
Cache processed results when appropriate in backend services
Files:
apps/backend/app/services/resume_service.py
apps/backend/app/{agent,services}/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use structured prompts and validate AI model responses against Pydantic models
Files:
apps/backend/app/services/resume_service.py
🧬 Code graph analysis (5)
apps/backend/app/api/router/v1/resume.py (1)
apps/backend/app/services/resume_service.py (4)
parse_llm_json_response (95-139)
parse_llm_json_response (300-302)
validate_and_sanitize_resume_data (142-240)
validate_and_sanitize_resume_data (305-307)
apps/frontend/components/common/file-upload.tsx (1)
apps/frontend/hooks/use-file-upload.ts (1)
useFileUpload(41-328)
apps/frontend/lib/api/resume.ts (2)
apps/frontend/components/jd-upload/text-area.tsx (1)
JobDescriptionUploadTextArea (13-160)
apps/frontend/components/common/resume_previewer_context.tsx (2)
ImprovedResult (56-58)
Data (41-54)
apps/frontend/hooks/useFileUpload.ts (1)
apps/frontend/hooks/use-file-upload.ts (1)
useFileUpload(41-328)
apps/frontend/hooks/use-file-upload.ts (2)
setup.sh (1)
error (58-58)
apps/frontend/hooks/useFileUpload.ts (1)
useFileUpload(8-97)
🪛 Ruff (0.14.0)
apps/backend/app/services/resume_service.py
20-20: Unused method argument: file_type
(ARG002)
20-20: Unused method argument: content_type
(ARG002)
35-35: Local variable resume_text is assigned to but never used
Remove assignment to unused variable resume_text
(F841)
43-43: Consider moving this statement to an else block
(TRY300)
46-46: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
46-46: Use explicit conversion flag
Replace with conversion flag
(RUF010)
68-68: Abstract raise to an inner function
(TRY301)
68-68: Avoid specifying long messages outside the exception class
(TRY003)
71-71: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
71-71: Use explicit conversion flag
Replace with conversion flag
(RUF010)
119-119: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
120-120: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
139-139: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
139-139: Avoid specifying long messages outside the exception class
(TRY003)
139-139: Use explicit conversion flag
Replace with conversion flag
(RUF010)
273-276: Consider moving this statement to an else block
(TRY300)
279-279: Use explicit conversion flag
Replace with conversion flag
(RUF010)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: cubic · AI code reviewer
🔇 Additional comments (2)
apps/frontend/package.json (1)
29-29: Safe dev-only bump; verify Node runtime alignment and lockfile update.
Confirm that any Node version declared in engines, CI workflows, .nvmrc, or .node-version aligns with the TypeScript types for Node 20, and ensure a lockfile exists and has been updated after bumping @types/node.
apps/backend/pyrightconfig.json (1)
1-7: LGTM. The basic Pyright config is sensible (venv, Python 3.12, basic mode). No issues.
```python
import docx2txt
import PyPDF2
```
Imported libraries not declared in requirements.
docx2txt and PyPDF2 are imported but not pinned in apps/backend/requirements.txt — will raise ImportError in production.
Either:
- Add docx2txt and PyPDF2 pins, or
- Replace with existing deps (e.g., pdfminer.six for PDFs; markitdown for DOCX/PDF) to avoid new packages.
🤖 Prompt for AI Agents
In apps/backend/app/services/resume_service.py around lines 7 to 9, the module
imports docx2txt and PyPDF2 but those packages are not declared/pinned in
apps/backend/requirements.txt which will cause ImportError in production; fix by
either (A) adding pinned versions for docx2txt and PyPDF2 to
apps/backend/requirements.txt (open the file and append package==<semver>
entries, run pip-compile or equivalent to lock versions), or (B) replace the two
imports and related usage in this file with already-declared dependencies (for
example use pdfminer.six to extract PDF text and a project-approved DOCX parser)
and remove the unused imports, then update requirements if the replacement libs
aren’t already declared.
```diff
 async def convert_and_store_resume(self, file_bytes: bytes, file_type: str, filename: str, content_type: str = "md") -> str:
     """
-    Converts resume file (PDF/DOCX) to text using MarkItDown and stores it in the database.
+    Convert and store a resume file.
     Args:
-        file_bytes: Raw bytes of the uploaded file
-        file_type: MIME type of the file ("application/pdf" or "application/vnd.openxmlformats-officedocument.wordprocessingml.document")
+        file_bytes: Raw file content
+        file_type: MIME type of the file
         filename: Original filename
-        content_type: Output format ("md" for markdown or "html")
+        content_type: Target content type (md/html)
```
Complete storage or align API; also fix unused params/vars.
- convert_and_store_resume doesn’t store (TODO), yet returns an ID.
- Unused params: file_type, content_type; Unused var: resume_text.
```diff
-    async def convert_and_store_resume(self, file_bytes: bytes, file_type: str, filename: str, content_type: str = "md") -> str:
+    async def convert_and_store_resume(self, file_bytes: bytes, filename: str, content_type: str = "md") -> str:
@@
-        resume_text = self._extract_text(file_bytes, filename)
+        resume_text = self._extract_text(file_bytes, filename)
@@
-        # TODO: Save to database
-        # This is a placeholder - implement actual database storage
-
-        return resume_id
+        # TODO: Save to database and return the persisted ID
+        # resume_id = await self._store_resume({"content_type": content_type, "text": resume_text}, user_id=None)
+        return resume_id
```

I can wire this to your models if you share the storage API.
Also applies to: 33-47
🧰 Tools
🪛 Ruff (0.14.0)
20-20: Unused method argument: file_type
(ARG002)
20-20: Unused method argument: content_type
(ARG002)
🤖 Prompt for AI Agents
In apps/backend/app/services/resume_service.py around lines 20-29 (and also
apply same changes to lines 33-47), convert_and_store_resume currently
extracts/returns data but does not persist it, returns an ID without storing,
and leaves unused parameters (file_type, content_type) and variable resume_text;
implement or call the storage API/model to persist the converted resume (e.g.,
create a Resume record or call the provided storage function) and return the
real stored record ID, or if storage is intentionally out of scope, remove the
unused params and resume_text and change the return to the converted content;
ensure file_type and content_type are either used (e.g., to decide conversion
logic and storage metadata) or removed from the signature, and update
docstring/comments to reflect the final behavior.
```python
for page in pdf_reader.pages:
    text += page.extract_text() + "\n"
return text
```
Guard against None from PyPDF2.extract_text() to avoid TypeError.
extract_text can return None for some pages; concatenating with a string raises an exception.
```diff
-    for page in pdf_reader.pages:
-        text += page.extract_text() + "\n"
+    for page in pdf_reader.pages:
+        page_text = page.extract_text() or ""
+        text += page_text + ("\n" if page_text else "")
```
+ text += page_text + ("\n" if page_text else "")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
for page in pdf_reader.pages:
    page_text = page.extract_text() or ""
    text += page_text + ("\n" if page_text else "")
return text
```
🤖 Prompt for AI Agents
In apps/backend/app/services/resume_service.py around lines 57 to 59,
page.extract_text() can return None which will raise a TypeError when
concatenated; change the loop to guard against None by assigning text_chunk =
page.extract_text() or "" (or skip when None) and then append text_chunk + "\n"
to text so concatenation always uses a string.
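The guard can be exercised without PyPDF2 at all; this standalone sketch (FakePage is a hypothetical stand-in for a PyPDF2 page object) shows how the `or ""` coercion keeps concatenation safe when extract_text() returns None:

```python
# Standalone demonstration of the None guard; FakePage mimics the behavior of a
# PDF page whose extract_text() may return None (e.g. an image-only page).
class FakePage:
    def __init__(self, text):
        self._text = text

    def extract_text(self):
        return self._text  # may be None, just like PyPDF2

def join_pages(pages) -> str:
    text = ""
    for page in pages:
        page_text = page.extract_text() or ""  # coerce None to ""
        text += page_text + ("\n" if page_text else "")
    return text

pages = [FakePage("hello"), FakePage(None), FakePage("world")]
print(repr(join_pages(pages)))  # 'hello\nworld\n'
```

Without the guard, the None page would raise `TypeError: unsupported operand type(s)` on the string concatenation.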
```python
def _extract_text(self, content: bytes, filename: str) -> str:
    """Extract text from file content."""
    # TODO: Implement your text extraction logic
    # This should handle PDF, DOCX, etc.
    pass
```
Bug: Duplicate _extract_text overrides the real implementation.
This second definition shadows the earlier working extractor, leaving _extract_text as a pass. All callers receive None, breaking processing.
Apply this diff to remove the duplicate stub:

```diff
-    def _extract_text(self, content: bytes, filename: str) -> str:
-        """Extract text from file content."""
-        # TODO: Implement your text extraction logic
-        # This should handle PDF, DOCX, etc.
-        pass
```
- pass📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| def _extract_text(self, content: bytes, filename: str) -> str: | |
| """Extract text from file content.""" | |
| # TODO: Implement your text extraction logic | |
| # This should handle PDF, DOCX, etc. | |
| pass | |
🤖 Prompt for AI Agents
In apps/backend/app/services/resume_service.py around lines 282-287 there is a
duplicate stubbed definition of _extract_text that simply passes and shadows the
earlier, working implementation; remove this duplicate stub (delete lines
defining this second _extract_text) so the original extractor remains in effect,
then run tests or lints to ensure no other duplicates exist and that all callers
now receive the real extracted text.
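The bug class here is easy to reproduce in isolation (a minimal standalone demo, not project code): in a Python class body, a second `def` with the same name silently replaces the first, so the stub wins and every caller gets None:

```python
class Service:
    def _extract_text(self, content: bytes) -> str:
        return content.decode("utf-8")  # the "real" implementation

    def _extract_text(self, content: bytes) -> str:  # shadows the one above
        """Extract text from file content."""
        pass  # stub: callers now receive None

print(Service()._extract_text(b"hello"))  # None
```

No warning or error is raised at definition time; a linter (Ruff's F811, "redefinition of unused name") is the usual way to catch this.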
```text
fastapi==0.119.0
starlette>=0.37.0,<0.48.0
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Duplicate Starlette constraints (range + exact) — pick one.
You currently specify both starlette>=0.37.0,<0.48.0 (Line 16) and starlette==0.47.2 (Line 50). This is redundant and can confuse resolvers. Keep a single source of truth.
Suggest removing the exact pin and relying on the range:

```diff
-starlette==0.47.2
```

Also applies to: 50-50
🌐 Web query:
What Starlette version range does FastAPI 0.119.0 require?
💡 Result:
FastAPI 0.119.0 requires Starlette >=0.40.0, <0.49.0. [1]
Sources:
- FastAPI release notes (Starlette supported range updated to >=0.40.0,<0.49.0). [1]
Unify the Starlette constraint
Update to FastAPI 0.119.0’s required range and remove the duplicate exact pin:
apps/backend/requirements.txt

```diff
-starlette>=0.37.0,<0.48.0
+starlette>=0.40.0,<0.49.0
-starlette==0.47.2
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/backend/requirements.txt around lines 15 to 16, the Starlette constraint
should match FastAPI 0.119.0’s required range and we must remove any duplicate
exact pin; update the Starlette requirement to the version range required by
FastAPI 0.119.0 (replace the current line with the proper >=/< range specified
by FastAPI 0.119.0) and remove any duplicate exact starlette pins elsewhere in
the file so only the unified range remains.
```typescript
interface FileUploadProps {
  onUploadSuccess?: (fileUrl: string) => void;
  onUploadError?: (error: string) => void;
}
```
Prop type expects fileUrl but backend returns { resume_id, ... }.
Align the prop to accept the structured upload response instead of a URL.
```diff
-interface FileUploadProps {
-  onUploadSuccess?: (fileUrl: string) => void;
-  onUploadError?: (error: string) => void;
-}
+interface FileUploadProps {
+  onUploadSuccess?: (result: ResumeUploadResponse) => void;
+  onUploadError?: (error: string) => void;
+}
```

Add this import near the top:

```typescript
import type { ResumeUploadResponse } from "../../lib/api/resume";
```

🤖 Prompt for AI Agents
In apps/frontend/components/common/file-upload.tsx around lines 6 to 9, the
onUploadSuccess prop is typed to receive a fileUrl string but the backend
returns a structured object (e.g., { resume_id, ... }); update the prop to
accept the backend response type: import the ResumeUploadResponse type from
../../lib/api/resume near the top and change onUploadSuccess?: (response:
ResumeUploadResponse) => void; also update any internal calls and usages to pass
the full response object (and rename the parameter from fileUrl to response or
resume) so consumers receive the structured upload data.
```typescript
const [state, actions] = useFileUpload({
  uploadUrl: 'http://127.0.0.1:8000/api/v1/resumes/upload',
  onUploadSuccess: (file, response) => {
    console.log('Upload successful:', response);
    setFeedback(`${file.file.name} uploaded successfully!`);
    if (typeof response.url === 'string') {
      onUploadSuccess?.(response.url);
    }
  },
  onUploadError: (file, error) => {
    console.error('Upload error:', error);
    setFeedback(`Error uploading ${file.file.name}: ${error}`);
    onUploadError?.(error);
  }
});
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Avoid a hardcoded localhost URL and handle the response shape consistently.
- Hardcoding http://127.0.0.1:8000 breaks non-local environments.
- The success handler checks response.url, which likely doesn’t exist; use resume_id (or the full response).
```diff
-  const [state, actions] = useFileUpload({
-    uploadUrl: 'http://127.0.0.1:8000/api/v1/resumes/upload',
-    onUploadSuccess: (file, response) => {
-      console.log('Upload successful:', response);
-      setFeedback(`${file.file.name} uploaded successfully!`);
-      if (typeof response.url === 'string') {
-        onUploadSuccess?.(response.url);
-      }
-    },
+  const apiUrl = process.env.NEXT_PUBLIC_API_URL || "http://127.0.0.1:8000";
+  const [state, actions] = useFileUpload({
+    // NOTE: Verify the correct server path (see verification script).
+    uploadUrl: `${apiUrl}/upload`,
+    onUploadSuccess: (_file, response) => {
+      setFeedback(`${_file.file.name} uploaded successfully!`);
+      if (response && 'resume_id' in response) {
+        onUploadSuccess?.(response as ResumeUploadResponse);
+      }
+    },
     onUploadError: (file, error) => {
       console.error('Upload error:', error);
       setFeedback(`Error uploading ${file.file.name}: ${error}`);
       onUploadError?.(error);
     }
   });
```
});To confirm the correct endpoint across the repo, run:
🏁 Script executed:
```shell
#!/bin/bash
# Inspect all upload endpoints referenced in the repo
rg -n -C2 -S '/upload|/api/v1/resumes/upload'
```

Length of output: 2080
🏁 Script executed:

```shell
#!/bin/bash
echo "=== useFileUpload.ts ==="
sed -n '1,200p' apps/frontend/hooks/useFileUpload.ts
echo
echo "=== lib/api/resume.ts ==="
sed -n '1,200p' apps/frontend/lib/api/resume.ts
echo
echo "=== backend resume router ==="
sed -n '1,200p' apps/backend/app/api/router/v1/resume.py
```

Length of output: 11075
Fix the file upload URL override and response handling
- In apps/frontend/hooks/useFileUpload.ts, extend UseFileUploadProps to accept an optional uploadUrl and use it (falling back to `${process.env.NEXT_PUBLIC_API_URL}/upload`) instead of the hardcoded localhost path.
- In apps/frontend/components/common/file-upload.tsx (lines 15-29), remove the unused uploadUrl prop, adjust the onUploadSuccess callback to (response: ResumeUploadResponse), and use response.resume_id (there is no url field) when invoking the parent callback.
- Consolidate endpoint definitions: use the same env-driven base URL and the correct /api/v1/resume/upload path in both the hook and the shared API util.
🤖 Prompt for AI Agents
In apps/frontend/components/common/file-upload.tsx around lines 15 to 29, the
component currently hardcodes and/or passes an unused uploadUrl and expects
response.url; update it to stop passing uploadUrl to useFileUpload, change the
onUploadSuccess handler signature to accept the hook's ResumeUploadResponse, and
call the parent onUploadSuccess with response.resume_id (not response.url). Also
ensure the hook and shared API utils consistently derive the base URL from
NEXT_PUBLIC_API_URL and use the correct endpoint /api/v1/resume/upload; extend
UseFileUploadProps in apps/frontend/hooks/useFileUpload.ts to accept an optional
uploadUrl that falls back to `${process.env.NEXT_PUBLIC_API_URL}/upload` and
make the hook use that value so the component no longer needs to supply a
localhost URL.
```typescript
export const improveResume = async ({ resume_id, job_id, stream = false }: ResumeImprovement) => {
  try {
    const response = await fetch(`${API_URL}/improve`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        resume_id,
        job_id,
      }),
    });
```
🧩 Analysis chain
Return parsed JSON and wire stream (signature change otherwise breaks callers).
Current code returns Response and ignores stream. Parse JSON and include stream in the request body.
```diff
-export const improveResume = async ({ resume_id, job_id, stream = false }: ResumeImprovement) => {
+export const improveResume = async ({ resume_id, job_id, stream = false }: ResumeImprovement) => {
   try {
     const response = await fetch(`${API_URL}/improve`, {
       method: 'POST',
       headers: {
         'Content-Type': 'application/json',
       },
-      body: JSON.stringify({
-        resume_id,
-        job_id,
-      }),
+      body: JSON.stringify({ resume_id, job_id, stream }),
     });
@@
-    return response;
+    // Return parsed JSON so callers get data, not a Response object.
+    return await response.json();
```

Verify and fix call sites expecting the old signature or a Response:
Also confirm the intended upload/improve endpoints across the repo:
Also applies to: 59-63
🏁 Script executed:
```bash
#!/bin/bash
# Find all improveResume calls and show surrounding lines
rg -n -C3 '\bimproveResume\s*\(' --type=ts --type=tsx
```

Length of output: 27
🏁 Script executed:
```bash
#!/bin/bash
# Map upload/improve endpoints in code
rg -n -C2 -S '/upload|/api/v1/resumes/upload|/improve|/api/v1/resumes/improve'
```

Length of output: 2797
🏁 Script executed:
```bash
#!/bin/bash
# Search all .ts and .tsx files for improveResume calls
rg -n -C3 '\bimproveResume\s*\(' --glob '*.{ts,tsx}'
```

Length of output: 543
🏁 Script executed:
```bash
#!/bin/bash
# Show the signature and imports around improveResume in resume.ts
rg -n -C5 'export const improveResume' --glob 'apps/frontend/lib/api/resume.ts'
```

Length of output: 693
Parse JSON, include stream in improveResume and update callers

- In apps/frontend/lib/api/resume.ts (lines 41-52), add `stream` to the `JSON.stringify` body and change `return response` to `return await response.json()`.
- In call sites (e.g. apps/frontend/components/jd-upload/text-area.tsx:66), change
  `await improveResume(resumeId, jobId)`
  to
  `await improveResume({ resume_id: resumeId, job_id: jobId, stream })`
  and handle the returned JSON.
🤖 Prompt for AI Agents
In apps/frontend/lib/api/resume.ts around lines 41 to 52, the POST body omits
the stream flag and the function returns the raw Response instead of parsed
JSON; update the JSON.stringify payload to include stream, and change the
function to return await response.json(). Also update all call sites (e.g.
apps/frontend/components/jd-upload/text-area.tsx line ~66) to call improveResume
with the object form improveResume({ resume_id: resumeId, job_id: jobId, stream
}) and adjust callers to handle the returned JSON result.
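The change suggested above can be sketched like this; `buildImprovePayload` is an illustrative helper split out so the body shape is easy to check, and only the `resume_id`, `job_id`, and `stream` field names come from the diff:

```typescript
// Sketch of the fix: always serialize stream, and return parsed JSON.
interface ResumeImprovement {
  resume_id: string;
  job_id: string;
  stream?: boolean;
}

// Build the POST body so the stream flag is always sent (defaulting to false).
function buildImprovePayload({ resume_id, job_id, stream = false }: ResumeImprovement): string {
  return JSON.stringify({ resume_id, job_id, stream });
}

// Callers then receive data rather than a raw Response object.
async function improveResume(input: ResumeImprovement, apiUrl: string): Promise<unknown> {
  const response = await fetch(`${apiUrl}/improve`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildImprovePayload(input),
  });
  return await response.json(); // parsed JSON, not Response
}
```

Passing `apiUrl` explicitly here is just to keep the sketch self-contained; the real module would use its shared `API_URL` constant.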
15 issues found across 10 files
Prompt for AI agents (all 15 issues)
Understand the root cause of the following 15 issues and fix them.
<file name="apps/frontend/lib/api/resume.ts">
<violation number="1" location="apps/frontend/lib/api/resume.ts:1">
Rule violated: **Flag Security Vulnerabilities**
Hard-coding an HTTP fallback for API_URL causes resume uploads and improvements to run over plain HTTP whenever the env var is unset, breaking the TLS requirement and enabling MITM interception. Replace the fallback with an HTTPS endpoint or require the environment URL to be configured.</violation>
<violation number="2" location="apps/frontend/lib/api/resume.ts:15">
Rule violated: **Check System Design and Architectural Patterns**
The API client now calls `/upload` and `/improve` directly, but the FastAPI router still exposes these endpoints under `/api/v1/resumes/...`, so every resume upload/improve request will 404. This violates the established API layer contract and breaks the frontend-backend integration.</violation>
<violation number="3" location="apps/frontend/lib/api/resume.ts:41">
Rule violated: **API Integration and Error Handling Patterns**
Changing improveResume to destructure an object means callers must pass an object. Existing code still passes resumeId and jobId as positional arguments, so resume_id and job_id become undefined and the POST body omits the required identifiers, causing the improve API call to fail.</violation>
<violation number="4" location="apps/frontend/lib/api/resume.ts:41">
Rule violated: **Detect Unused or Redundant Configuration Parameters**
`improveResume` destructures a new `stream` configuration flag that is never used, leaving a redundant parameter that violates our "Detect Unused or Redundant Configuration Parameters" rule. Please remove the unused `stream` argument and matching interface field to avoid confusion about streaming support.</violation>
</file>
<file name="apps/backend/app/services/resume_service.py">
<violation number="1" location="apps/backend/app/services/resume_service.py:20">
Rule violated: **Detect Unused or Redundant Configuration Parameters**
The new convert_and_store_resume signature still accepts file_type and content_type, but the implementation never reads either argument. Under Detect Unused or Redundant Configuration Parameters, please remove or use these parameters so the API surface reflects actual behavior.</violation>
<violation number="2" location="apps/backend/app/services/resume_service.py:49">
Rule violated: **Check System Design and Architectural Patterns**
This duplicate `_extract_text` definition overwrites the working extractor with a no-op, so the resume parsing layer of `ResumeService` disappears. That breaks the backend processing flow and violates our service-layer architecture pattern. Please keep the real extractor instead of redefining it as a stub.</violation>
<violation number="3" location="apps/backend/app/services/resume_service.py:120">
Rule violated: **Flag Security Vulnerabilities**
Avoid logging raw slices of the LLM response; these contain resume PII and leak sensitive user data into server logs. Remove the data-bearing log entry and rely on non-sensitive diagnostics.</violation>
<violation number="4" location="apps/backend/app/services/resume_service.py:288">
Rule violated: **Check System Design and Architectural Patterns**
`process_resume` still depends on `_call_llm` for the LLM integration layer, but this method is now a no-op `pass`. That leaves the service without its orchestration boundary and causes downstream failures, violating the established backend architecture.</violation>
<violation number="5" location="apps/backend/app/services/resume_service.py:293">
Rule violated: **Check System Design and Architectural Patterns**
`ResumeService` is supposed to persist structured resumes, but `_store_resume` is now just a stub. That removes the persistence layer from this service, violating our backend system design patterns and preventing the flow from ever completing.</violation>
</file>
<file name="apps/frontend/hooks/use-file-upload.ts">
<violation number="1" location="apps/frontend/hooks/use-file-upload.ts:23">
Rule violated: **Detect Unused or Redundant Configuration Parameters**
`maxSize` is still exposed as a configuration option, but every piece of code that consumed it (the `validateFile` helper and the validation call inside `addFilesAndUpload`) was deleted in this change. Consumers can still pass a max size expecting enforcement, yet it no longer does anything, leaving a redundant configuration parameter and removing the file-size guard. Please either restore the validation or drop the option from the API to satisfy the configuration-parameter rule.</violation>
<violation number="2" location="apps/frontend/hooks/use-file-upload.ts:24">
Rule violated: **Detect Unused or Redundant Configuration Parameters**
`accept` remains in the hook options, but after removing `getInputProps` and all other references, nothing applies it to the file input anymore. Callers can configure allowed MIME types but the hook ignores the value, leaving a redundant configuration parameter and silently relaxing file-type restrictions. Please wire the option back into the input handling or remove it from the API to comply with the configuration-parameter rule.</violation>
<violation number="3" location="apps/frontend/hooks/use-file-upload.ts:156">
`FileMetadata.url` no longer captures the backend URL after a successful upload, so non-image uploads end up with an empty string despite the response providing the persisted URL. Please populate `url` from the server response so downstream consumers still receive the usable resume link.</violation>
<violation number="4" location="apps/frontend/hooks/use-file-upload.ts:321">
Rule violated: **Check System Design and Architectural Patterns**
Returning only add/remove/clear actions here removes the drag/drop callbacks and getInputProps that consumers rely on to drive isDragging and wire the underlying input, breaking the hook’s public contract and violating our System Design guideline on preserving module abstractions.</violation>
</file>
<file name="apps/backend/app/api/router/v1/resume.py">
<violation number="1" location="apps/backend/app/api/router/v1/resume.py:5">
Remove the unused `parse_llm_json_response` and `validate_and_sanitize_resume_data` imports to avoid dead code and linter warnings.</violation>
</file>
<file name="apps/frontend/components/common/file-upload.tsx">
<violation number="1" location="apps/frontend/components/common/file-upload.tsx:16">
Rule violated: **API Integration and Error Handling Patterns**
The uploadUrl is hardcoded to localhost, so deployed clients will post to their own machine instead of the backend, breaking resume uploads. Restore the environment-based API URL to keep the integration working across environments.</violation>
</file>
Since this is your first cubic review, here's how it works:
- cubic automatically reviews your code and comments on bugs and improvements
- Teach cubic by replying to its comments. cubic learns from your replies and gets better over time
- Ask questions if you need clarification on any suggestion
React with 👍 or 👎 to teach cubic. Mention @cubic-dev-ai to give feedback, ask questions, or re-run the review.
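One way to address the hard-coded HTTP fallback flagged above is to fail fast when the env var is missing instead of defaulting to `http://localhost`. The guard below is a sketch; `NEXT_PUBLIC_API_URL` is the only name taken from the review, and `requireApiUrl` is illustrative:

```typescript
// Sketch: require an HTTPS base URL instead of silently falling back to plain HTTP.
function requireApiUrl(envValue: string | undefined): string {
  if (!envValue) {
    throw new Error("NEXT_PUBLIC_API_URL must be set; refusing to fall back to plain HTTP");
  }
  if (!envValue.startsWith("https://")) {
    throw new Error(`NEXT_PUBLIC_API_URL must use HTTPS, got: ${envValue}`);
  }
  return envValue.replace(/\/+$/, ""); // normalize trailing slashes
}
```

A module would call this once at load time (`const API_URL = requireApiUrl(process.env.NEXT_PUBLIC_API_URL)`), so a misconfigured deployment surfaces immediately rather than as MITM-exposed traffic.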
```ts
formData.append('file', file);

console.log('Sending file upload request to:', `${API_URL}/upload`);
const response = await fetch(`${API_URL}/upload`, {
```
Rule violated: Check System Design and Architectural Patterns
The API client now calls /upload and /improve directly, but the FastAPI router still exposes these endpoints under /api/v1/resumes/..., so every resume upload/improve request will 404. This violates the established API layer contract and breaks the frontend-backend integration.
Prompt for AI agents
Address the following comment on apps/frontend/lib/api/resume.ts at line 15:
<comment>The API client now calls `/upload` and `/improve` directly, but the FastAPI router still exposes these endpoints under `/api/v1/resumes/...`, so every resume upload/improve request will 404. This violates the established API layer contract and breaks the frontend-backend integration.</comment>
<file context>
@@ -1,54 +1,63 @@
+ formData.append('file', file);
+
+ console.log('Sending file upload request to:', `${API_URL}/upload`);
+ const response = await fetch(`${API_URL}/upload`, {
+ method: 'POST',
+ body: formData,
</file context>
```python
        # TODO: Implement your LLM call logic
        pass

    async def _store_resume(self, data: dict, user_id: Optional[str]) -> str:
```
Rule violated: Check System Design and Architectural Patterns
ResumeService is supposed to persist structured resumes, but _store_resume is now just a stub. That removes the persistence layer from this service, violating our backend system design patterns and preventing the flow from ever completing.
Prompt for AI agents
Address the following comment on apps/backend/app/services/resume_service.py at line 293:
<comment>`ResumeService` is supposed to persist structured resumes, but `_store_resume` is now just a stub. That removes the persistence layer from this service, violating our backend system design patterns and preventing the flow from ever completing.</comment>
<file context>
@@ -1,314 +1,307 @@
+ # TODO: Implement your LLM call logic
+ pass
+
+ async def _store_resume(self, data: dict, user_id: Optional[str]) -> str:
+ """Store resume in database."""
+ # TODO: Implement your database storage logic
</file context>
```python
        # This should handle PDF, DOCX, etc.
        pass

    async def _call_llm(self, text: str) -> str:
```
Rule violated: Check System Design and Architectural Patterns
process_resume still depends on _call_llm for the LLM integration layer, but this method is now a no-op pass. That leaves the service without its orchestration boundary and causes downstream failures, violating the established backend architecture.
Prompt for AI agents
Address the following comment on apps/backend/app/services/resume_service.py at line 288:
<comment>`process_resume` still depends on `_call_llm` for the LLM integration layer, but this method is now a no-op `pass`. That leaves the service without its orchestration boundary and causes downstream failures, violating the established backend architecture.</comment>
<file context>
@@ -1,314 +1,307 @@
+ # This should handle PDF, DOCX, etc.
+ pass
+
+ async def _call_llm(self, text: str) -> str:
+ """Call LLM to structure resume."""
+ # TODO: Implement your LLM call logic
</file context>
```python
            self.logger.error(f"Failed to convert and store resume: {str(e)}")
            raise

    def _extract_text(self, content: bytes, filename: str) -> str:
```
Rule violated: Check System Design and Architectural Patterns
This duplicate _extract_text definition overwrites the working extractor with a no-op, so the resume parsing layer of ResumeService disappears. That breaks the backend processing flow and violates our service-layer architecture pattern. Please keep the real extractor instead of redefining it as a stub.
Prompt for AI agents
Address the following comment on apps/backend/app/services/resume_service.py at line 49:
<comment>This duplicate `_extract_text` definition overwrites the working extractor with a no-op, so the resume parsing layer of `ResumeService` disappears. That breaks the backend processing flow and violates our service-layer architecture pattern. Please keep the real extractor instead of redefining it as a stub.</comment>
<file context>
@@ -1,314 +1,307 @@
+ self.logger.error(f"Failed to convert and store resume: {str(e)}")
+ raise
+
+ def _extract_text(self, content: bytes, filename: str) -> str:
+ """Extract text from file content."""
+ try:
</file context>
```diff
   state,
   {
-    addFiles: addFilesAndUpload, // Expose internal addFilesAndUpload as addFiles
+    addFiles: addFilesAndUpload,
```
Rule violated: Check System Design and Architectural Patterns
Returning only add/remove/clear actions here removes the drag/drop callbacks and getInputProps that consumers rely on to drive isDragging and wire the underlying input, breaking the hook’s public contract and violating our System Design guideline on preserving module abstractions.
Prompt for AI agents
Address the following comment on apps/frontend/hooks/use-file-upload.ts at line 321:
<comment>Returning only add/remove/clear actions here removes the drag/drop callbacks and getInputProps that consumers rely on to drive isDragging and wire the underlying input, breaking the hook’s public contract and violating our System Design guideline on preserving module abstractions.</comment>
<file context>
@@ -446,102 +303,26 @@ export const useFileUpload = (
state,
{
- addFiles: addFilesAndUpload, // Expose internal addFilesAndUpload as addFiles
+ addFiles: addFilesAndUpload,
removeFile,
clearFiles,
</file context>
```python
    return json.loads(cleaned)
except json.JSONDecodeError as e2:
    logger.error(f"JSON parse failed even after cleaning: {e2}")
    logger.error(f"Problematic JSON section: {response_text[max(0, e2.pos-50):min(len(response_text), e2.pos+50)]}")
```
Rule violated: Flag Security Vulnerabilities
Avoid logging raw slices of the LLM response; these contain resume PII and leak sensitive user data into server logs. Remove the data-bearing log entry and rely on non-sensitive diagnostics.
Prompt for AI agents
Address the following comment on apps/backend/app/services/resume_service.py at line 120:
<comment>Avoid logging raw slices of the LLM response; these contain resume PII and leak sensitive user data into server logs. Remove the data-bearing log entry and rely on non-sensitive diagnostics.</comment>
<file context>
@@ -1,314 +1,307 @@
+ return json.loads(cleaned)
+ except json.JSONDecodeError as e2:
+ logger.error(f"JSON parse failed even after cleaning: {e2}")
+ logger.error(f"Problematic JSON section: {response_text[max(0, e2.pos-50):min(len(response_text), e2.pos+50)]}")
+
+ # Third attempt: try to extract JSON from markdown code blocks
</file context>
```diff
-logger.error(f"Problematic JSON section: {response_text[max(0, e2.pos-50):min(len(response_text), e2.pos+50)]}")
+logger.error("JSON parse failed even after cleaning; sensitive response withheld.")
```
```ts
  stream?: boolean;
}

export const improveResume = async ({ resume_id, job_id, stream = false }: ResumeImprovement) => {
```
Rule violated: Detect Unused or Redundant Configuration Parameters
improveResume destructures a new stream configuration flag that is never used, leaving a redundant parameter that violates our "Detect Unused or Redundant Configuration Parameters" rule. Please remove the unused stream argument and matching interface field to avoid confusion about streaming support.
Prompt for AI agents
Address the following comment on apps/frontend/lib/api/resume.ts at line 41:
<comment>`improveResume` destructures a new `stream` configuration flag that is never used, leaving a redundant parameter that violates our "Detect Unused or Redundant Configuration Parameters" rule. Please remove the unused `stream` argument and matching interface field to avoid confusion about streaming support.</comment>
<file context>
@@ -1,54 +1,63 @@
+ stream?: boolean;
+}
+
+export const improveResume = async ({ resume_id, job_id, stream = false }: ResumeImprovement) => {
+ try {
+ const response = await fetch(`${API_URL}/improve`, {
</file context>
```diff
-async def convert_and_store_resume(
-    self, file_bytes: bytes, file_type: str, filename: str, content_type: str = "md"
-):
+async def convert_and_store_resume(self, file_bytes: bytes, file_type: str, filename: str, content_type: str = "md") -> str:
```
Rule violated: Detect Unused or Redundant Configuration Parameters
The new convert_and_store_resume signature still accepts file_type and content_type, but the implementation never reads either argument. Under Detect Unused or Redundant Configuration Parameters, please remove or use these parameters so the API surface reflects actual behavior.
Prompt for AI agents
Address the following comment on apps/backend/app/services/resume_service.py at line 20:
<comment>The new convert_and_store_resume signature still accepts file_type and content_type, but the implementation never reads either argument. Under Detect Unused or Redundant Configuration Parameters, please remove or use these parameters so the API surface reflects actual behavior.</comment>
<file context>
@@ -1,314 +1,307 @@
- async def convert_and_store_resume(
- self, file_bytes: bytes, file_type: str, filename: str, content_type: str = "md"
- ):
+ async def convert_and_store_resume(self, file_bytes: bytes, file_type: str, filename: str, content_type: str = "md") -> str:
"""
- Converts resume file (PDF/DOCX) to text using MarkItDown and stores it in the database.
</file context>
```diff
-  maxSize?: number // in bytes
-  accept?: string // Comma-separated string of accepted file types
+  maxSize?: number
+  accept?: string
```
Rule violated: Detect Unused or Redundant Configuration Parameters
accept remains in the hook options, but after removing getInputProps and all other references, nothing applies it to the file input anymore. Callers can configure allowed MIME types but the hook ignores the value, leaving a redundant configuration parameter and silently relaxing file-type restrictions. Please wire the option back into the input handling or remove it from the API to comply with the configuration-parameter rule.
Prompt for AI agents
Address the following comment on apps/frontend/hooks/use-file-upload.ts at line 24:
<comment>`accept` remains in the hook options, but after removing `getInputProps` and all other references, nothing applies it to the file input anymore. Callers can configure allowed MIME types but the hook ignores the value, leaving a redundant configuration parameter and silently relaxing file-type restrictions. Please wire the option back into the input handling or remove it from the API to comply with the configuration-parameter rule.</comment>
<file context>
@@ -1,85 +1,50 @@
- maxSize?: number // in bytes
- accept?: string // Comma-separated string of accepted file types
+ maxSize?: number
+ accept?: string
multiple?: boolean
- initialFiles?: FileMetadata[] // To initialize with already uploaded files
</file context>
```diff
   maxFiles?: number
-  maxSize?: number // in bytes
-  accept?: string // Comma-separated string of accepted file types
+  maxSize?: number
```
Rule violated: Detect Unused or Redundant Configuration Parameters
maxSize is still exposed as a configuration option, but every piece of code that consumed it (the validateFile helper and the validation call inside addFilesAndUpload) was deleted in this change. Consumers can still pass a max size expecting enforcement, yet it no longer does anything, leaving a redundant configuration parameter and removing the file-size guard. Please either restore the validation or drop the option from the API to satisfy the configuration-parameter rule.
Prompt for AI agents
Address the following comment on apps/frontend/hooks/use-file-upload.ts at line 23:
<comment>`maxSize` is still exposed as a configuration option, but every piece of code that consumed it (the `validateFile` helper and the validation call inside `addFilesAndUpload`) was deleted in this change. Consumers can still pass a max size expecting enforcement, yet it no longer does anything, leaving a redundant configuration parameter and removing the file-size guard. Please either restore the validation or drop the option from the API to satisfy the configuration-parameter rule.</comment>
<file context>
@@ -1,85 +1,50 @@
maxFiles?: number
- maxSize?: number // in bytes
- accept?: string // Comma-separated string of accepted file types
+ maxSize?: number
+ accept?: string
multiple?: boolean
</file context>
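If the `maxSize` and `accept` options flagged above are kept rather than removed, re-wiring them could look like this sketch; the helper name and error strings are illustrative:

```typescript
// Sketch: enforce the hook's maxSize and accept options again.
// Returns an error message, or null when the file is acceptable.
function validateFile(
  file: { name: string; size: number; type: string },
  opts: { maxSize?: number; accept?: string }
): string | null {
  if (opts.maxSize !== undefined && file.size > opts.maxSize) {
    return `File exceeds the ${opts.maxSize}-byte limit`;
  }
  if (opts.accept) {
    // accept is a comma-separated list of MIME types, e.g. "application/pdf,image/*".
    const allowed = opts.accept.split(",").map((t) => t.trim().toLowerCase());
    const matches = allowed.some((t) =>
      t.endsWith("/*")
        ? file.type.toLowerCase().startsWith(t.slice(0, -1)) // wildcard prefix match
        : file.type.toLowerCase() === t
    );
    if (!matches) return `File type ${file.type} is not accepted`;
  }
  return null;
}
```

The hook would call this inside `addFilesAndUpload` before posting, so callers that configure `maxSize` or `accept` get the enforcement they expect.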
Description
This PR fixes the issue where PDF uploads were not showing a success message on the frontend. Now, PDF files display the success notification just like DOCX files.
Changes Made
- Updated `file-upload.tsx` to handle the PDF MIME type correctly.
- Updated the `use-file-upload.ts` and `useFileUpload.ts` hooks to support PDF responses.
- Updated `resume.py` and `resume_service.py` for consistent file handling.

Testing
Linked Issue
Closes #475
Summary by cubic
Fixed missing success notification for PDF resume uploads in the frontend. PDF files now show a clear “uploaded successfully” message, consistent with DOCX.
Bug Fixes
Refactors
Summary by CodeRabbit
New Features
Refactor
Chores