AstroAI Lunch Talks - October 28, 2024 - David Vartanyan & Congyue Deng
28 Oct 2024 - Joshua Wing
The video can be found here: https://www.youtube.com/watch?v=pvO9_izu8rg
Speaker 1: David Vartanyan (Carnegie Observatories)
Title: Avenues into, and Prospects for, ML Applications in Supernovae
Abstract: Core-collapse supernovae (CCSNe) have been known to explode. Recent theoretical success in reproducing these explosions permits the luxury of asking why. The computational expense and the stochasticity of the CCSNe problem make ML a valuable, and perhaps viable, asset in resolving the longstanding CCSNe problem. I will present a simple, intuitive metric, enabled by early forays of ML into the CCSNe context, that can predict explosion outcome ab initio with 90% fidelity. These results are corroborated by recent work using a CNN classifier. I will conclude with prospects for transitioning from simple classification to predictive utility across the broad range of multi-scale CCSNe diagnostics that couple the neutrino-driven central engine of CCSNe with their vibrant displays as remnants.
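As a rough illustration of the kind of classifier the abstract alludes to, here is a minimal sketch, assuming a hypothetical setup rather than the speaker's actual model or data: a small 1D CNN that maps a radial progenitor profile (for example, density or entropy on a radial grid) to an explosion/no-explosion probability. The class name ExplosionClassifier, the architecture, and the input shape are all illustrative assumptions.

```python
# Hypothetical sketch of a 1D CNN explosion-outcome classifier (not the
# speaker's model): input is a radial progenitor profile, output is a logit
# for P(explosion).
import torch
import torch.nn as nn

class ExplosionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # pool over radius, so any grid length works
        )
        self.head = nn.Linear(32, 1)      # single logit for explosion vs. no explosion

    def forward(self, profile: torch.Tensor) -> torch.Tensor:
        # profile: (batch, 1, n_radial_bins) radial profile of the progenitor
        return self.head(self.features(profile).squeeze(-1))

# Example: score a batch of 8 synthetic profiles sampled on 256 radial bins.
model = ExplosionClassifier()
logits = model(torch.randn(8, 1, 256))
probs = torch.sigmoid(logits)             # predicted explosion probability
```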
Speaker 2: Congyue Deng (Stanford University)
Title: “The reality of the universe is geometrical.” – E. A. Burtt, The Metaphysical Foundations of Modern Physical Science.
Abstract: Deep learning frameworks, whether supervised or unsupervised, have achieved remarkable success on a large variety of problems in astrophysics. However, despite their ability to extract high-level information from data, they often struggle to capture exact geometric relationships. Even in the simplest cases, for example, point-cloud networks trained on well-aligned objects (e.g. chairs in an upright position) can fail when tested on objects in arbitrary poses (e.g. chairs in random orientations under an SE(3) transformation). This highlights the networks’ lack of geometric understanding of pose changes and, more broadly, of group actions and geometric relations. These limitations are common across many learning frameworks, impacting their robustness and generalizability – particularly in real-world applications where explainability and trustworthiness are critical, such as processing data from scientific experiments. On the other hand, geometry is a language widely adopted for describing physical laws. Incorporating and enforcing geometric relations in neural networks paves the way for building deep learning systems that can understand and follow physical laws. In this talk, I will demonstrate how naively constructed neural networks fail to understand geometric transformations in a variety of scenarios. I will then introduce a series of works on incorporating geometric operators into the latent spaces of neural networks, enabling them to expressively represent different classes of geometric transformations, from the simplest linear transformations to more complex multi-body movements and continuous diffeomorphisms. In the end, I will briefly discuss possible future directions for applying geometry-aware deep learning to astrophysical problems.
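The SE(3) failure mode described in the abstract can be probed with a simple diagnostic: feed the same point cloud to a trained classifier before and after a random rigid-body transform and check whether the prediction survives. The sketch below is an illustrative version of such a check under assumed names (random_se3, apply_se3, se3_consistency are not code from the talk), with a deliberately pose-sensitive toy classifier standing in for a real network.

```python
# Hypothetical SE(3) robustness check (not from the talk's codebase): a
# network without geometric structure will often change its answer under a
# random rigid-body transform; an SE(3)-invariant one will not.
import numpy as np

def random_se3():
    """Sample a random rotation (QR of a Gaussian matrix) and translation."""
    q, _ = np.linalg.qr(np.random.randn(3, 3))
    if np.linalg.det(q) < 0:              # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q, np.random.randn(3)

def apply_se3(points, rotation, translation):
    """Apply x -> R x + t to an (N, 3) point cloud."""
    return points @ rotation.T + translation

def se3_consistency(classifier, points, n_trials=10):
    """Fraction of random poses on which the classifier keeps its prediction."""
    base_label = classifier(points)
    hits = 0
    for _ in range(n_trials):
        r, t = random_se3()
        hits += int(classifier(apply_se3(points, r, t)) == base_label)
    return hits / n_trials

# Usage with any `classifier(points) -> label` callable; here a trivial,
# deliberately pose-sensitive stand-in for a trained network.
toy = lambda pts: int(pts[:, 2].mean() > 0)
print(se3_consistency(toy, np.random.randn(1024, 3)))
```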
Watch the talk below!