
Calendar Event

Dr. Vitek Stritecky and Dr. Petr Spelda (Charles University, Prague, Czech Republic): Machine Learning & Epistemology (Zoom)

From the Institutes: Research Seminar, Theoretical Philosophy

Abstract:

Our talk centers on the notion of robustness in empirical and theoretical machine learning (ML). For the empirical part, we analyse a new risk minimization method for training ML models; for the theoretical part, we focus on predictors of the models' generalization error. We will show that in both cases robustness is inferred from a kind of optimality that depends on the invariance of predictors' outputs under changing conditions. Such robustness therefore rests on inductive inferences about data-generating distributions. This kind of optimality fails in the same way as the 'straight rule' of induction, because inaccessible and unstable ground truths prevent the convergence of frequency limits and block the justification of robustness by distributional presuppositions. As a consequence of using the straight rule to achieve robustness, Hume's Problem has been 'reloaded' in some parts of ML. We will outline our work in progress, which seeks to apply higher-order inductive rules in theoretical ML and to leverage their provable optimality.
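A minimal illustration of the failure mode described above, assuming a stylized nonstationary data source (a textbook-style counterexample in Python, not an example from the talk): the straight rule estimates a limiting relative frequency by the relative frequency observed so far, but on a 0/1 sequence whose ground truth flips in doubling blocks, the estimate oscillates forever and no frequency limit exists.

    def unstable_sequence(n):
        """Yield n bits: 1 zero, then 2 ones, then 4 zeros, then 8 ones, ..."""
        bit, block, emitted = 0, 1, 0
        while emitted < n:
            for _ in range(min(block, n - emitted)):
                yield bit
                emitted += 1
            bit ^= 1      # the 'ground truth' flips ...
            block *= 2    # ... in ever-longer blocks, so no limit exists

    ones = total = 0
    for b in unstable_sequence(1 << 16):
        ones += b
        total += 1
        if total & (total - 1) == 0:              # report at powers of two
            print(f"n={total:6d}  straight-rule estimate = {ones / total:.3f}")

Run on this sequence, the estimate keeps swinging between roughly 0.33 and 0.67: the straight rule's convergence to a true frequency presupposes a stable data-generating distribution, which is exactly the presupposition at issue in the talk.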

Speakers:

Petr Spelda and Vit Stritecky work on connections between modern theories of induction and robust machine learning (ML), which they regard as a broader sociotechnical process rather than an independently achievable state of technology. In their most recent work, they asked about the role of human induction in ML and about the nature of the sociotechnical contracts that underlie deployments of ML models in dynamic environments. Their earlier works include a future-oriented analysis of the environmental impacts of 'gratuitous' generalization capabilities created by certain ML practices, and a philosophy-of-science paper on possible interactions between scientific realism and generative ML models. Affiliated with the Department of Security Studies of Charles University as its founding members, they work in the parts of Security Science focused on technology.

Spelda P., Stritecky V. (2021). Human Induction in Machine Learning: A Survey of the Nexus. ACM Computing Surveys, forthcoming. https://doi.org/10.1145/3444691.

Spelda P., Stritecky V. (2020). The future of human-artificial intelligence nexus and its environmental costs. Futures 117. https://doi.org/10.1016/j.futures.2020.102531.

Spelda P., Stritecky V. (2020). What Can Artificial Intelligence Do for Scientific Realism? Axiomathes. https://doi.org/10.1007/s10516-020-09480-0.

Event Details

26.01.2021, 18:30–20:15
Institut für Theoretische Philosophie
Responsibility: