Research Project 🧠
Models for my research project on Breast Cancer Detection
Image Classification • Note: Using VGG16
keanteng/densenet201-breast-cancer-classification-0603
Image Classification • Note: Using DenseNet-201
keanteng/swin-v2-breast-cancer-classification-0602
Image Classification • Note: First run. Swin-V2-Base pretrained on ImageNet-1K.
keanteng/swin-v2-breast-cancer-classification-0603
Image Classification • Note: Second run. Swin-V2-Base pretrained on ImageNet-1K, with a different learning rate than 0602.
keanteng/swin-v2-large-ft-breast-cancer-classification-0603
Image Classification • Note: Base model pretrained on ImageNet-22K and fine-tuned on ImageNet-1K.
keanteng/swin-v2-large-breast-cancer-classification-0603
Image Classification • Note: Base model pretrained on ImageNet-22K only.
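For context, a minimal sketch of how a pretrained Swin-V2 checkpoint could be adapted for this classification task, assuming the standard transformers image-classification API; the checkpoint name and the two-class head are assumptions, not details taken from the entries above.

```python
# Minimal sketch: adapting a pretrained Swin-V2 backbone for breast cancer
# classification. The checkpoint name and num_labels=2 are assumptions.
from transformers import AutoImageProcessor, AutoModelForImageClassification

checkpoint = "microsoft/swinv2-base-patch4-window8-256"  # assumed ImageNet-1K base

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=2,                  # e.g. benign vs. malignant (assumed labels)
    ignore_mismatched_sizes=True,  # replace the 1000-class ImageNet head
)
```

Swapping in the ImageNet-22K-only or 22K-to-1K fine-tuned large variants as the checkpoint reproduces the comparison between the two entries above.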
keanteng/efficientnet-b7-breast-cancer-classification-0603
Image Classification • Note: Bad performance for unknown reasons, possibly an architectural issue ⚠️. Infinite validation loss when using auto_grad, and high validation loss after the fix.
keanteng/efficientnet-b7-breast-cancer-classification-0603-2
Image Classification • Note: Bad performance for unknown reasons, possibly an architectural issue ⚠️. Infinite validation loss when using auto_grad, and high validation loss after the fix.
keanteng/efficientnet-b7-breast-cancer-classification-0603-3
Image Classification • Note: Bad performance for unknown reasons, possibly an architectural issue ⚠️. Infinite validation loss when using auto_grad, and high validation loss after the fix.
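The repeated "infinite validation loss" note points to a non-finite loss during evaluation. A minimal sketch of one way to surface and contain it, assuming PyTorch mixed-precision evaluation with autocast (which is assumed to be what "auto_grad" refers to); model, loader, and criterion names are placeholders.

```python
# Minimal sketch: guarding against non-finite (inf/NaN) validation loss during
# mixed-precision evaluation. `model`, `val_loader`, and `criterion` are assumed
# to be defined elsewhere.
import torch

def validate(model, val_loader, criterion, device="cuda"):
    model.eval()
    total_loss, n_batches = 0.0, 0
    with torch.no_grad():
        for images, labels in val_loader:
            images, labels = images.to(device), labels.to(device)
            with torch.autocast(device_type="cuda", dtype=torch.float16):
                logits = model(images)
                loss = criterion(logits, labels)
            if not torch.isfinite(loss):
                # Skip (and log) exploding batches instead of poisoning the average.
                print("non-finite validation loss, skipping batch")
                continue
            total_loss += loss.item()
            n_batches += 1
    return total_loss / max(n_batches, 1)
```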
keanteng/efficientnet-b0-breast-cancer-classification-0604-1
Image Classification • Note: Slight performance increase. EfficientNet-B7's poor performance was due to an incorrect image size. Added a weight penalty, parameter adjustments, and improved image preprocessing.
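A minimal sketch of the kind of preprocessing and weight-penalty setup the note above describes, assuming torchvision transforms and AdamW. EfficientNet-B0 expects 224x224 inputs while B7 expects 600x600, which is consistent with the "incorrect image size" diagnosis; the augmentation choices and hyperparameter values here are assumptions.

```python
# Minimal sketch: input resizing matched to the EfficientNet variant, plus a
# weight penalty (weight decay). Normalization stats are standard ImageNet values.
import torch
from torchvision import transforms, models

IMG_SIZE = 224  # 600 for EfficientNet-B7

train_transform = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.efficientnet_b0(weights="IMAGENET1K_V1")
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 2)

# Weight decay acts as the "weight penalty" mentioned above; values are assumed.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
```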
keanteng/efficientnet-b0-breast-cancer-classification-0604-2
Image Classification • Note: Better, but not the best yet. LR = 1e-4 with an accumulation step of 4 seems to be the sweet spot for this model. Validation accuracy can sometimes cross 50%, which is a good sign.
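For reference, a minimal sketch of the learning-rate and gradient-accumulation setup described in the note above (LR = 1e-4, accumulation step of 4), assuming a plain PyTorch training loop; the data loader is a placeholder and the loss/model choices are assumptions.

```python
# Minimal sketch: gradient accumulation giving an effective batch 4x the
# per-step batch, at the LR the note above calls the sweet spot (1e-4).
# `train_loader` is assumed to be defined elsewhere.
import torch
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
accumulation_steps = 4

model = models.efficientnet_b0(num_classes=2).to(device)  # placeholder model
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
optimizer.zero_grad()
for step, (images, labels) in enumerate(train_loader):
    images, labels = images.to(device), labels.to(device)
    # Scale the loss so the accumulated gradients average over the 4 micro-batches.
    loss = criterion(model(images), labels) / accumulation_steps
    loss.backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```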