New pipelines: Some solutions are implemented as a pipeline of models. You can tackle individual stages of such a pipeline and try to improve them.
Different architecture: You can modify the structure of a well-established model, BUT the modification should be based on sound reasoning, even if it ultimately does not yield better results. Random “mutations” of known models will not count as a contribution.
Augmentation techniques: Check whether you can augment the data in a new way. Maybe synthetically generated data can help, maybe it will not (a minimal augmentation sketch follows this list).
A new benchmark or new evaluation metrics: If you feel the tests in the literature are not robust to some cases, you can design a new set of qualitative tests. This should translate into at least a couple of hundred new tests/examples (a sketch of how such a test suite could be organised also follows this list).
Explainability: If you feel that the works you reviewed do not provide much insight into the decisions the model makes, you can work on that: evaluate existing explainability tools under your task conditions.
Cross-task adaptation: Explore whether techniques that worked in related domains can be adapted to your task.
Robustness to noise/adversaries: Investigate how the system performs under noisy, adversarial, or out-of-distribution inputs, and propose methods to improve robustness (a minimal noise-evaluation sketch follows this list).
Human-in-the-loop integration: Design hybrid workflows where humans assist the model (or vice versa) to achieve better results than either alone.
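As an illustration of the augmentation idea above, here is a minimal sketch in Python of generating noisy synthetic variants of a text example. The helper name and perturbations are made up for illustration; whether such variants actually help is exactly the question your experiments would answer.

```python
import random

def augment_sentence(sentence, p_drop=0.1, seed=None):
    """Create a noisy variant of a sentence by dropping and swapping words.

    Deliberately simple: the research question is whether training on such
    synthetic variants helps, hurts, or makes no difference.
    """
    rng = random.Random(seed)
    words = sentence.split()
    # Randomly drop words, but never drop everything.
    kept = [w for w in words if rng.random() > p_drop] or words
    # Swap one random adjacent pair to perturb word order.
    if len(kept) > 1:
        i = rng.randrange(len(kept) - 1)
        kept[i], kept[i + 1] = kept[i + 1], kept[i]
    return " ".join(kept)

# Generate three synthetic variants of one training example.
for s in range(3):
    print(augment_sentence("the quick brown fox jumps over the lazy dog", seed=s))
```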
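For the benchmark/evaluation idea, one way to organise a few hundred qualitative tests is as plain data grouped by the phenomenon each case probes. The sketch below uses a hypothetical `model_predict` stand-in for the system under test; the per-category reporting is the point, not the toy classifier.

```python
def model_predict(text):
    """Hypothetical stand-in for the system under test."""
    return "positive" if "good" in text else "negative"

test_cases = [
    # Each case records the phenomenon it probes, the input, and the expected output.
    {"category": "negation", "input": "the movie was not good", "expected": "negative"},
    {"category": "sarcasm", "input": "oh great, another sequel", "expected": "negative"},
    {"category": "plain", "input": "a good and moving film", "expected": "positive"},
]

def run_suite(cases):
    results = {}
    for case in cases:
        ok = model_predict(case["input"]) == case["expected"]
        results.setdefault(case["category"], []).append(ok)
    # Report per-category accuracy so failure modes are visible, not averaged away.
    for category, outcomes in results.items():
        print(f"{category}: {sum(outcomes)}/{len(outcomes)} passed")

run_suite(test_cases)
```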
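For the robustness idea, a simple starting point is to corrupt inputs programmatically and measure how much accuracy drops. Again, `model_predict` is a hypothetical placeholder, and typo injection is just one assumed example of a perturbation; adversarial or out-of-distribution inputs would slot into the same comparison.

```python
import random

def add_typos(text, rate=0.05, seed=0):
    """Corrupt a string by randomly swapping adjacent characters."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def model_predict(text):
    """Hypothetical stand-in for the system under test."""
    return "positive" if "good" in text else "negative"

clean_data = [("a good and moving film", "positive"),
              ("a dull and lifeless film", "negative")]

def accuracy(dataset):
    return sum(model_predict(x) == y for x, y in dataset) / len(dataset)

# Compare performance on clean inputs versus the same inputs with injected noise.
noisy_data = [(add_typos(x, rate=0.15, seed=i), y) for i, (x, y) in enumerate(clean_data)]
print("clean accuracy:", accuracy(clean_data))
print("noisy accuracy:", accuracy(noisy_data))
```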