Project
Team: 2 members.
Project Selection:
Option 1: Choose from the list of pre-defined project ideas provided below.
Option 2: Propose your own project idea, which requires approval from the course team.
Project Workflow
Each team is required to:
Implement their chosen project idea, building a model or system that addresses a specific medical data science problem.
Evaluate their approach using appropriate metrics (accuracy, precision, recall, etc.), and compare results against existing state-of-the-art methods (a minimal metric-computation sketch appears after this list).
Document their progress and findings in both a formal report and a presentation, in English, using the IEEE format (available here) - IMPORTANT.
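For the evaluation step above, one minimal way to compute the listed metrics is with scikit-learn; in the sketch below, the y_true / y_pred arrays are hypothetical placeholders for your ground-truth labels and model predictions, and the binary setup is only an example.

```python
# Minimal sketch of computing the metrics listed above with scikit-learn.
# y_true / y_pred are hypothetical placeholders; the binary setup is only an example.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]   # placeholder ground-truth labels
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]   # placeholder model predictions

print(f"accuracy:  {accuracy_score(y_true, y_pred):.3f}")
print(f"precision: {precision_score(y_true, y_pred):.3f}")
print(f"recall:    {recall_score(y_true, y_pred):.3f}")
print(f"f1-score:  {f1_score(y_true, y_pred):.3f}")
```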
Milestones and Deliverables
2. M2 (18.11.24) - Dataset Collection and Baseline Results (1p)
3. M3 (18.12.24) - Own Contribution (1p)
4. M4 (08.01.25) - Final paper + Presentation (1p)
Grading System
Examples of Project Ideas
1. Bad Posture Detection
2. Smoker Detection
Objective: Identify whether a person is a smoker based on lung capacity, voice analysis, or X-ray images.
Dataset: Gather data from publicly available voice or medical image datasets.
Note: Each chosen modality (audio, video, image), or a combination of them, may result in a distinct project with little overlap.
3. Retinal Lesion Detection
4. Fracture Detection in X-rays
Objective: Develop a model that identifies fractures in X-ray images, which could help radiologists make faster diagnoses (a minimal transfer-learning baseline sketch appears after this list of ideas).
Proposed Datasets: MURA, RSNA, etc.
5. Cancer Detection from Histopathology Images
6. Alzheimer’s Disease Progression Prediction
7. Interpretation of Knee MRI
8. Your own project
We encourage you to choose and define your own project.
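Several of the image-based ideas above (e.g. fracture detection, cancer detection from histopathology) can start from a pretrained CNN fine-tuned on the chosen dataset. The sketch below is one minimal way to set that up, assuming PyTorch and torchvision; the dataset folder "data/xray_train" and the two-class setup are hypothetical placeholders, not part of the assignment.

```python
# Minimal transfer-learning baseline sketch for an image classification project
# (e.g. fracture detection). Assumes PyTorch + torchvision; the dataset path
# "data/xray_train" and the two-class setup are hypothetical placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/xray_train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)     # e.g. fracture vs. no fracture

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:               # one pass over the data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The same skeleton can be reused for the other image-based ideas by swapping in the relevant dataset path and number of output classes.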
Potential contributions
Depending on the task and recent work, contributions may be:
New pipelines: Some solutions are implemented as a pipeline of models. You can tackle some of its stages and try to improve them.
Different architecture: You can modify the structure of a well-established model, BUT the modification should be based on sound reasoning, even if it does not end up giving better results. Random “mutations” of known models won't count as a contribution.
Augmentation techniques: Check whether you can augment the data in a new way; synthetically generated data, for example, may or may not help (see the sketch after this list).
A new benchmark or new evaluation metrics: If you feel the tests in the literature are not robust to some cases, you can design a new set of qualitative tests. This should translate into at least a couple of hundred new tests / examples.
Explainability: If you feel that the works you reviewed do not provide much insight into the decisions being made, you can work on that: evaluate existing explainability tools under the conditions of your task.
Cross-task adaptation: Explore whether techniques that worked in related domains can be adapted to your task.
Robustness to noise/adversaries: Investigate how the system performs under noisy, adversarial, or out-of-distribution inputs, and propose methods to improve robustness.
Human-in-the-loop integration: Design hybrid workflows where humans assist the model (or vice versa) to achieve better results than either alone.
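To make the augmentation and robustness directions concrete, the sketch below (using torchvision) defines an augmented training pipeline and a noise-corrupted evaluation pipeline; the specific transforms and the noise level are illustrative choices, not requirements.

```python
# Illustrative sketch of the augmentation / robustness ideas above (torchvision).
# The chosen transforms and the noise level are arbitrary examples, not requirements.
import torch
from torchvision import transforms

# Augmented training pipeline: random flips and small rotations.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
])

# Robustness check: evaluate the trained model on inputs corrupted with Gaussian noise.
def add_gaussian_noise(image_tensor: torch.Tensor, std: float = 0.05) -> torch.Tensor:
    """Corrupt an image tensor in [0, 1] with additive Gaussian noise of the given std."""
    return torch.clamp(image_tensor + torch.randn_like(image_tensor) * std, 0.0, 1.0)

eval_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda x: add_gaussian_noise(x, std=0.05)),
])
```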
Tips for contributions:
Check limitations and future work: Most papers discuss their limitations and propose future work items. Sometimes a limitation exists only because the authors did not focus on that aspect, which makes it a good starting point.
Error analysis on the baseline: You can analyse the errors made by your baseline and try to propose targeted solutions. In this case, the baseline should be a well-performing model from the related work, not a simple fine-tuned architecture (a minimal error-analysis sketch follows below).
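For the error-analysis tip, a common starting point is a per-class breakdown of the baseline's mistakes. A minimal sketch with scikit-learn, where y_true / y_pred are hypothetical placeholders for your baseline's test-set labels and predictions:

```python
# Minimal error-analysis sketch: per-class breakdown of baseline mistakes.
# y_true / y_pred are hypothetical placeholders for your baseline's test-set outputs.
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2])   # placeholder ground truth
y_pred = np.array([0, 1, 1, 1, 0, 2, 2, 1])   # placeholder baseline predictions

print(confusion_matrix(y_true, y_pred))        # rows = true class, cols = predicted class
print(classification_report(y_true, y_pred))   # per-class precision / recall / F1

# Indices of misclassified examples, to inspect manually for recurring error patterns.
errors = np.nonzero(y_true != y_pred)[0]
print("misclassified example indices:", errors)
```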