(IMPLEMENTATION) Comprehensive Example Restoration for Ianvs (Robot) - Lifelong Learning Example #297
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
Signed-off-by: Abhishek Kumar <[email protected]>
Force-pushed from d79cf6b to e092acc (Compare)
Summary of Changes

Hello @abhishek-8081, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request finalizes the comprehensive implementation of the lifelong learning robot example within the Ianvs project, completing a major milestone for the LFX Mentorship program. The changes focus on enhancing the robustness and efficiency of the core lifelong learning paradigm: streamlining result handling during model evaluation, improving metric computation to adapt to different output scenarios, and updating various configuration files for a more standardized and explicit setup. The PR also reflects adjustments in the integration strategy for external models, suggesting a more modular approach to their use within the evaluation framework.
Code Review
This pull request restores a comprehensive lifelong learning example. The changes primarily involve refactoring the lifelong_learning paradigm to support a no-inference mode and updating various configuration files. My review has identified a few issues, mainly related to hardcoded absolute paths in YAML configuration files which affect the portability of the example. I have suggested using relative paths or generic paths as documented. Additionally, there's a logic bug in the _train method in lifelong_learning.py concerning environment variable settings that needs to be addressed.
    self.dataset.test_url,
    "test")

    return None, self.system_metric_info
The run method returns None for the test result in no-inference mode, which is inconsistent with other modes and the method's docstring. The evaluation result is available in the test_res variable from the my_eval call. It should be returned to be consistent.
Suggested change:

    - return None, self.system_metric_info
    + return test_res, self.system_metric_info
    if rounds < 1:
        os.environ["CLOUD_KB_INDEX"] = cloud_task_index
        os.environ["OUTPUT_URL"] = train_output_dir
    if rounds < 1:
The environment variables CLOUD_KB_INDEX and OUTPUT_URL are now set only if rounds < 1. This will cause issues in subsequent training rounds (rounds >= 1). Additionally, the if rounds < 1: check is duplicated. These variables should be set unconditionally before the if block.
os.environ["CLOUD_KB_INDEX"] = cloud_task_index
os.environ["OUTPUT_URL"] = train_output_dir| # job name of bechmarking; string type; | ||
| name: "benchmarkingjob" | ||
| # the url address of job workspace that will reserve the output of tests; string type; | ||
| workspace: "/home/abhishek/projects/kumar/ianvs/lifelong_learning_bench/robot-workspace-test" |
Hardcoded user-specific absolute path for the workspace; this makes the example non-portable. Please use a relative path or the generic workspace path documented for this example.
    # the url address of test environment configuration file; string type;
    # the file format supports yaml/yml;
    testenv: "/home/abhishek/projects/kumar/ianvs/examples/robot/lifelong_learning_bench/semantic-segmentation/testenv/testenv-robot.yaml"
Same issue: hardcoded user-specific absolute path for testenv; use a relative or generic path.
| - name: "rfnet_lifelong_learning" | ||
| # the url address of test algorithm configuration file; string type; | ||
| # the file format supports yaml/yml | ||
| url: "/home/abhishek/projects/kumar/ianvs/examples/robot/lifelong_learning_bench/semantic-segmentation/testalgorithms/rfnet/rfnet_algorithm-simple.yaml" |
Same issue: hardcoded user-specific absolute path for the algorithm configuration file; use a relative or generic path.
    train_index: "/home/abhishek/cloud-robotics/640x480/train-index.txt"
    # the url address of test dataset index; string type;
    test_index: "/home/abhishek/cloud-robotics/640x480/test-index.txt"
Hardcoded user-specific absolute paths are used for train_index and test_index. This makes the example not portable. Please use the generic data paths as documented (e.g., /data/datasets/...) or relative paths.
train_index: "/data/datasets/robot_dataset/train-index-mix.txt"
# the url address of test dataset index; string type;
test_index: "/data/datasets/robot_dataset/test-index.txt"| # metric name; string type; | ||
| name: "accuracy" | ||
| # the url address of python file | ||
| url: "/home/abhishek/projects/kumar/ianvs/examples/robot/lifelong_learning_bench/semantic-segmentation/testenv/accuracy.py" |
Same issue: hardcoded absolute path to accuracy.py; use a path relative to the example directory.
    # metric name; string type;
    - name: "accuracy"
      # the url address of python file
      url: "/home/abhishek/projects/kumar/ianvs/examples/robot/lifelong_learning_bench/semantic-segmentation/testenv/accuracy.py"
As above: hardcoded absolute path to accuracy.py.
    self.edge_task_index, tasks_detail, test_res = self.my_eval(self.cloud_task_index,
                                                                self.dataset.test_url,
                                                                r)
    task_avg_score = {'accuracy': 0.0}
    i = 0
    for detail in tasks_detail:
        i += 1
        scores = detail.scores
        entry = detail.entry
        LOGGER.info(f"{entry} scores: {scores}")
        task_avg_score['accuracy'] += scores['accuracy']
    task_avg_score['accuracy'] = task_avg_score['accuracy'] / i
    self.system_metric_info[SystemMetricType.TASK_AVG_ACC.value] = task_avg_score
    LOGGER.info(task_avg_score)
    # job = self.build_paradigm_job(ParadigmType.LIFELONG_LEARNING.value)
    # inference_dataset = self.dataset.load_data(self.dataset.test_url, "eval",
    #                                            feature_process=_data_feature_process)
    # kwargs = {}
    # test_res = job.my_inference(inference_dataset, **kwargs)
    # del job
In no-inference mode, the final evaluation result test_res is computed but then None is returned by the run method. This contradicts the method's docstring which states it returns a numpy.ndarray. While the logic in testcase.py is adapted to handle None, this makes the code confusing as test_res becomes an unused variable. It's better to return test_res if it's the intended result, or rename it to _ if it's meant to be ignored.
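For illustration, here is a self-contained sketch of the intended tail of the no-inference branch (`finish_no_inference` and the sample data are stand-ins, not Ianvs APIs):

```python
from types import SimpleNamespace

def finish_no_inference(tasks_detail, test_res, system_metric_info):
    # Average the per-task accuracies, as in the snippet above.
    avg = sum(d.scores["accuracy"] for d in tasks_detail) / len(tasks_detail)
    system_metric_info["task_avg_acc"] = {"accuracy": avg}
    # Return the evaluation result instead of None, matching the docstring.
    return test_res, system_metric_info

details = [SimpleNamespace(entry="office", scores={"accuracy": 0.82}),
           SimpleNamespace(entry="corridor", scores={"accuracy": 0.74})]
res, info = finish_no_inference(details, test_res=[0.82, 0.74], system_metric_info={})
print(res, info)  # [0.82, 0.74] {'task_avg_acc': {'accuracy': ~0.78}}
```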
    if paradigm_result is None:
        continue
    metric_res[metric_name] = metric_func(test_dataset.y, paradigm_result)
    if paradigm_result is None:
        metric_res["accuracy"] = metric_res["task_avg_acc"]
This logic handles the case where paradigm_result is None by assigning task_avg_acc to accuracy. This seems to be a workaround for the no-inference mode. While it works, it makes the control flow a bit complex. A better approach might be to have the paradigm always return a consistent data structure, even if it's just the accuracy score, to avoid this special handling in the TestCase.
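A sketch of what the special-cased metric computation looks like (`compute_metrics`, `metric_funcs`, and `test_dataset` are hypothetical stand-ins for the TestCase internals, not the actual code):

```python
def compute_metrics(paradigm_result, system_metric_res, metric_funcs, test_dataset):
    metric_res = dict(system_metric_res)
    if paradigm_result is None:
        # No-inference mode: fall back to the paradigm's task-average accuracy.
        metric_res["accuracy"] = metric_res["task_avg_acc"]["accuracy"]
        return metric_res
    for name, func in metric_funcs.items():
        metric_res[name] = func(test_dataset.y, paradigm_result)
    return metric_res
```

If the paradigm always returned at least the accuracy score, the `None` branch could be dropped entirely.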
Please review this.
hsj576 left a comment:
Right now, in the GitHub “Files changed” view, all files under the lifelong learning example are marked as changed. Please fix this so that only the files where you actually modified code in the lifelong learning example are shown as changed.
LFX Mentorship 2025 Term 3: Complete Lifelong Learning Example (Robot) Implementation for Ianvs
What type of PR is this?
example restoration
What this PR does / why we need it
This PR completes the full implementation of the lifelong learning robot example in the Ianvs project, as part of LFX Mentorship 2025 Term 3.
All major components are complete (example code, tests, documentation).
The only remaining task is CI/CD integration.
Which issue(s) does this PR fix?
Fixes #287, #263, #230
@MooreZheng @hsj576