Development of YOLO-Based Pharmacy Inventory Automation for a Dental and Oral Hospital

  • Siti Salmiah, Rumah Sakit Gigi dan Mulut, Universitas Sumatera Utara
  • Khairul Abdi, RSGM USU
Keywords: pharmacy inventory, object detection, YOLO

Abstract


Abstract—This study develops an automated pharmacy inventory approach for a dental and oral hospital using YOLO-based object detection. A dataset of 183 JPEG images of pharmaceutical products covering 11 classes was collected and split into training and validation sets at an 80:20 ratio. Annotations were created in Label Studio and exported in YOLO format (.txt), with each line containing a class_id and normalized bounding-box coordinates. A YOLO11s model initialized with pretrained weights was trained for 60 epochs at a 640-pixel input size. Performance was evaluated using precision, recall, F1-score, [email protected], and [email protected]:0.95. The best checkpoint (epoch 55) achieved 0.9641 precision, 0.9218 recall, 0.9425 F1-score, 0.9796 [email protected], and 0.7565 [email protected]:0.95. The high [email protected] indicates strong detection capability at the standard IoU threshold, while the lower [email protected]:0.95 suggests room to improve bounding-box localization at stricter IoU thresholds. The proposed approach can accelerate stock inspection and improve the consistency of image-based inventory recording.
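The YOLO label format mentioned in the abstract stores, per object, a class index followed by a normalized box center and box size. A minimal sketch of converting one such label line back to pixel coordinates; the label string and image size here are illustrative, not values from the paper's dataset:

```python
# Convert one YOLO-format label line ("class_id cx cy w h", all four
# coordinates normalized to [0, 1]) into a pixel-space corner box.
def yolo_to_corners(line: str, img_w: int, img_h: int):
    class_id, cx, cy, w, h = line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return int(class_id), (x1, y1, x2, y2)

# Illustrative label: class 3, a centered box covering 20% x 10% of a 640x640 image
cls, box = yolo_to_corners("3 0.5 0.5 0.2 0.1", 640, 640)
print(cls, box)  # 3 (256.0, 288.0, 384.0, 352.0)
```

Because the coordinates are normalized, the same .txt label remains valid if the image is resized, which is why the format pairs naturally with a fixed training input size such as 640 pixels.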


Keywords: pharmacy inventory, object detection, YOLO
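The F1-score reported in the abstract is the harmonic mean of precision and recall. A quick sketch checking that the epoch-55 figures are internally consistent (precision and recall values taken from the abstract):

```python
# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R).
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

p, r = 0.9641, 0.9218  # epoch-55 checkpoint metrics from the abstract
print(round(f1_score(p, r), 4))  # 0.9425, matching the reported F1-score
```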


Published
2025-12-30