Poisoning Attack



[4] Xiao, H., Xiao, H., Eckert, C.: Adversarial label flips attack on support vector machines. In: ECAI, pp. 870–875 (2012)

[5] Biggio, B., Nelson, B., Laskov, P.: Support vector machines under adversarial label noise. In: ACML, pp. 97–112 (2011)

[6] Brückner, M., Kanzow, C., Scheffer, T.: Static prediction games for adversarial learning problems. The Journal of Machine Learning Research 13(1), 2617–2654 (2012)

[7] Brückner, M., Scheffer, T.: Stackelberg games for adversarial prediction problems. In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 547–555. ACM (2011)

[8] Biggio, B., Corona, I., Fumera, G., Giacinto, G., Roli, F.: Bagging classifiers for fighting poisoning attacks in adversarial classification tasks. In: Multiple Classifier Systems, pp. 350–359. Springer (2011)

[9] Laishram, R., Phoha, V.V.: Curie: A method for protecting SVM classifier from poisoning attack. arXiv (2016)

