2023 Computer Science Colloquium Series
The Debate Over “Understanding” in AI’s Large Language Models
Melanie Mitchell, Santa Fe Institute
Wednesday, September 20, 2023, 2:00 PM
Location: Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
I will survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to "understand" language -- and the physical and social situations language encodes -- in any important sense. I will describe arguments that have been made for and against such understanding, and, more generally, will discuss what methods can be used to fairly evaluate understanding and intelligence in AI systems. I will conclude with key questions for the broader sciences of intelligence that have arisen in light of these discussions.
Bio:
Melanie Mitchell is a Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her 2009 book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award, and her 2019 book Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux) was shortlisted for the 2023 Cosmos Prize for Scientific Writing.
Improving Human-Automation Collaboration in Motion Planning
Torin Adamson, The University of New Mexico
Wednesday, September 13, 2023, 2:00 PM
Location: Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
Human-automation collaboration is becoming a part of everyday life as AI agents help us drive, make decisions, and solve a variety of other tasks. However, for safe and effective collaboration, humans must trust the AI to a degree appropriate to its capabilities. Humans must also understand the intentions behind the AI’s decisions, and communication between the two must convey sufficient information about the task without being overwhelming. To explore these issues, existing studies are typically carried out in laboratory settings built for investigating problems with specific applications. While these provide excellent data quality by tightly controlling experiment parameters, they can be combined with continuous, longer-term studies that contextualize behavioral data with the subject’s daily life for a more complete understanding. Human behavior that evolves over time, driven by external factors outside the lab, cannot be fully captured by studies that involve single participation sessions. These external factors form the “human context” that illuminates predictions of how behavior changes around automation. Video game adaptations of human-automation collaboration studies can provide insight into how humans would share control with an AI in a dynamic environment. In this thesis, the video game environments investigated involve how humans and AI collaborate on two distinct tasks, namely: obstacle avoidance, and informing motion planning algorithms for molecular docking prediction. While we showed that consumer devices such as laptops, game controllers, and tablets were sufficient for these tasks in controlled experiments, they had yet to be tested outside of the lab, during each participant’s daily life. Therefore, these game interfaces were adapted for a mobile device (such as a smartphone), and the studies were repeated across multiple sessions per participant to investigate whether the results could be replicated in this new environment.
Finally, we modified the dynamic obstacle avoidance study by extending it over a period of 180 days per participant and by recording the human context data through wearables in addition to gameplay data. The goals of this large study were two-fold: first, we tested the impact of adjusting different parameters of the AI agent to characterize the performance preferences of individual participants. Second, we investigated how an AI agent could adjust itself to help the human achieve more desired outcomes. Data analysis comparing recorded human interventions against each AI’s typical behavior revealed several strategies that suggest preferences between quick task completion and safety. One player in particular exhibited a sudden shift in behavior, taking control from the most evasive AI to prioritize reaching goals faster. This shift correlated with one in their contextual data, including heart rate and physical activity, indicating that some change in their life could be related. This suggests that human-automation collaboration is a dynamic process influenced not only by hardware and software limitations, but also by human behavior.
Bio:
Torin Adamson is a PhD candidate at the University of New Mexico researching human-automation collaboration for motion planning. Their current work involves studying the behavior of human operators when paired with algorithms in interactive game environments, exploring how their behavior might change over long periods of time or in response to external factors in their daily lives. They have a background in video game engineering and real-time computer graphics, as well as in creating networked game applications. In their free time, they experiment with game engines and work with open-source software.
Helping Scientists Explore Data Through Rich Metadata and Advanced Data Management Tools
Jay Lofstead, Sandia National Laboratories
Wednesday, September 6, 2023, 2:00 PM
Location: Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
Scientific observations and simulations generate enormous data volumes that must be explored to gain new insights into physical phenomena. Two mature tool generations have been proposed, each offering different capabilities to aid scientists in their analysis tasks. A third generation is currently being explored that offers deep, custom tagging and advanced querying against tags and data. While each tool generation continues to offer value as new generations are created, the third-generation tools offer deep functionality that can be significantly helpful for scientists. This talk will explore this field as well as a proposed fourth-generation tool currently being researched that will further aid complex analysis tasks for scientists. The work requires a deeper understanding of data management tools, such as databases, indexing, and data reduction techniques, including lossy compression, and advanced derived-quantity querying capabilities. Accelerating these advanced operations without exploding data volumes is a challenging task that can offer tremendous benefits.
Bio:
Jay Lofstead is a Principal Member of Technical Staff at Sandia National Laboratories. His research interests center on large-scale data management and trusting scientific computing. In particular, he works on storage, I/O, metadata, workflows, reproducibility, software engineering, machine learning, and operating-system-level support for any of these topics. Across these areas, he is also deeply interested in the related ethics, both for these topics and for computing in general, and in how to drive inclusivity across the computation-related science domains. Dr. Lofstead received his Ph.D. in Computer Science from the Georgia Institute of Technology in 2010.
Information Processing in Living Things
Lance R. Williams, The University of New Mexico
Wednesday, May 3, 2023, 2:00 PM
Location: Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
The universe uses human eyes to look at itself by means of ill-posed computational processes hosted on self-assembling computers called 'brains' made of self-replicating machines called 'cells' that translate and copy self-descriptions stored on large molecules.
Bio:
Lance Williams has always had an avid interest in science and the natural world. As a boy, he enjoyed drawing microorganisms that he observed with his microscope, launching model rockets, collecting fossils and building electronic circuits. He learned to program computers at age twelve and was writing programs simulating spiking neural networks by age fifteen. He attended Pennsylvania State University and earned a BS in Computer Science in 1985. He attended University of Massachusetts, Amherst and earned his MS and PhD degrees in Computer Science in 1988 and 1994. His dissertation was supervised by Allen Hanson and its topic was perceptual completion of surface boundaries in computer vision. He was a postdoctoral scientist at NEC Research Institute in Princeton, NJ from 1993-1997. His work on neural models of perceptual completion with David Jacobs and Karvel Thornber remains his most cited, and led to his appointment as Assistant Prof. of Computer Science at The University of New Mexico in 1997. At UNM he has taught undergraduate and graduate level classes on applied mathematics, image and signal processing, and programming languages. He has supervised the dissertations of five PhD students to completion, the first in 2006 and the most recent in 2022. Since 2010, his research has focused on asynchronous, spatially distributed, self-replicating parallel programs.
Intelligent Assistive Technologies for People with Visual Impairments
Hae-Na Lee, Stony Brook University
Wednesday, April 19, 2023, 2:00 PM
Location: Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
People with visual impairments (PVI) interact with computers and smartphones using assistive technology software that makes digital content accessible either through audio or via magnification. Unfortunately, these assistive technologies are not tailored for usability: the ease and efficiency with which PVI can interact with computer and mobile applications. Consequently, PVI need to expend significantly more time and effort than sighted people to do even basic computer and smartphone tasks. In this talk, I will discuss how artificial intelligence techniques can be leveraged to mitigate this usability gap between sighted people and PVI in word processing applications, such as Microsoft Word and Google Docs. Specifically, I will present my dissertation work on customized assistive technologies that enhance usability by dynamically extending applications’ user interfaces with intelligent auxiliary interfaces that are tailored for either blind or low-vision users. The key idea underlying these technologies is to automatically capture some of the important application features using machine learning models, and then present these features via alternative interfaces that are conveniently and efficiently navigable using a screen reader or a screen magnifier. I will then discuss the emerging usability challenges in social media due to the presence of unconventional language such as out-of-vocabulary words. Lastly, I will conclude by presenting some of my current projects and future research plans in this field.
Bio:
Hae-Na Lee is currently a PhD candidate under the supervision of Dr. I.V. Ramakrishnan in the Department of Computer Science at Stony Brook University. Before joining Stony Brook University, she completed her master’s degree at Seoul National University in South Korea. Her current research focus is on accessible computing, where she leverages her background in computer vision and machine learning to devise intelligent usability-enhancing assistive technologies for people with visual impairments. She has published her research in top-tier venues including ACM CHI, IUI, and ASSETS.
Securing the IoT at Home
Jean Camp, Indiana University
Wednesday, March 29, 2023, 2:00 PM
Location: Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
BGP enables a network of networks, and it is also a network of trust. The clearest instantiation of that trust is the updating of router tables based on unsubstantiated announcements. The positive result of this trust is that the networks are resilient and recover quickly. Yet the very trust that enables resilience creates risks. Threats to the control plane have included political interference, misguided network configurations, and other mischief. My goal is to illustrate three different solutions informed by an understanding of the purpose and scale of BGP hijacks. I begin by exploring the application of regression and clustering using theories grounded in offline crime. The myriad factors in available data (technical, rates of change, economic, and geopolitical) were used to differentiate and categorize past hijacks. The results suggested that two very different types of hijacks occur: those implemented for national interests and those for profit. The next step was to identify nation-state, Internet-scale hijacks, showing that a simple analysis of burstiness can identify such attacks an order of magnitude more quickly than previous approaches. Second, for small enterprises that may be targeted with an attack, I describe a proof of concept enabling those with the expertise and knowledge to set their own risk profiles in order to balance availability with confidentiality. Third, for local, targeted attacks on small networks without expert network management, a highly customized cosine-similarity approach that would be ineffective at large scale proves highly efficacious. I acknowledge how SDN can change the playing field, complementing the solution in one case yet undermining it in another. In this work I began with the application of the macro-economics of security. I then describe how these results drove the design of three systems for hijack detection, grounded in an understanding of BGP hijacks' motivations and scale at different points in the network.
Bio:
Jean Camp is a Professor of Informatics and Computer Science. Her research focuses on the intersection of human and technical trust, with the goal of building for end-to-end empowerment. She was a member of the 2022 class of Fellows of the ACM. She was selected as a Fellow of the Institute of Electrical and Electronics Engineers in 2018. She was elected a Fellow of the American Association for the Advancement of Science in 2017. She was inducted into Sigma Xi, the national research honor society, in 2017. She is currently a Professor at the Luddy School at Indiana University, with appointments in Informatics and Computer Science. She joined Indiana after eight years at Harvard’s Kennedy School, where her courses were also listed in Harvard Law, Harvard Business, and the Engineering Systems Division of MIT. She spent the year after earning her doctorate from Carnegie Mellon as a Senior Member of the Technical Staff at Sandia National Laboratories. She began her career as an engineer at Catawba Nuclear Station after a double major in electrical engineering and mathematics, followed by an MSEE in optoelectronics at the University of North Carolina at Charlotte.
Toward Scalable Autonomy
Aleksandra Faust, Google Brain
Wednesday, March 22, 2023, 2:00 PM
CS Colloquium Spring 23
https://unm.zoom.us/j/99722905204
Meeting ID: 997 2290 5204
Passcode: 437243
Abstract:
An autonomous agent in the real world is any system, from autonomous cars and service robots to digital assistants, that works in the real world to perform meaningful tasks. All these applications share common challenges. The agents need to generalize well, learn new tasks, and adapt to new and changing environments. They need to be competent and safe. And their training needs to be tractable. This talk will discuss scaling up autonomous agents in the real world along the generalization, quality, and cost dimensions, anchored on robotics, autonomous driving, and digital assistant applications. Traditionally, we have focused mostly on the edges of this constraint triangle, and in the first part of the talk we will explore examples of efficient generalization, efficient high-quality agents, and high-quality generalists through the lens of social navigation, locomotion, and web agents. However, recent advances in machine learning enable us to start seeing progress toward tractable, high-performing general agents. The second part of the talk will discuss our current and emerging work on generalist agents, including the roles of model capacity and data quality. We will conclude with a look at the challenges and opportunities that the future might hold, specifically how progress in Large Language and Generative models might benefit autonomy at scale, and how the field might be changing.
Bio:
Aleksandra Faust is a Senior Staff Research Scientist, Autonomous Agents research lead, and Reinforcement Learning research team co-founder at Google Brain. Her research centers on safe and scalable autonomous systems for social good, including reinforcement learning, planning, and control for robotics, autonomous driving, and digital assistants. Previously, Aleksandra founded and led Task and Motion Planning research in Robotics at Google, led machine learning for self-driving car planning and controls at Waymo, and was a senior researcher at Sandia National Laboratories. She earned a Ph.D. in Computer Science at the University of New Mexico with distinction, and a Master's in Computer Science from the University of Illinois at Urbana-Champaign. Aleksandra won the IEEE RAS Early Career Award for Industry, the Tom L. Popejoy Award for the best doctoral dissertation at the University of New Mexico in the period of 2011-2014, and was named Distinguished Alumna by the University of New Mexico School of Engineering. Her work has been featured in the New York Times, PC Magazine, ZDNet, and VentureBeat, and was awarded Best Paper in Service Robotics at ICRA 2018, Best Paper in Reinforcement Learning for Real Life (RL4RL) at ICML 2019, and Best Paper of IEEE Computer Architecture Letters in 2020.
Safety Analysis of Cyber-Physical Systems using Functional Overapproximations
Xin Chen, University of Dayton, OH
Wednesday, March 1, 2023, 2:00 PM
Larrañaga Engineering Auditorium (CENT 1041)
Abstract:
Cyber-Physical Systems (CPSs) are computer-controlled physical processes and an important class of autonomous systems. CPSs often operate in safety-critical environments such as autonomous driving, medical monitoring, and aircraft control. However, the formal analysis of CPSs is notoriously difficult due to the complexity of their mixed continuous and discrete dynamics. In this talk, we present a system of techniques to obtain functional overapproximations (or overapproximate functions) for CPS executions under combinations of various dynamics, including ODEs, switching rules, and neural networks. The integration of these methods makes it possible to effectively overapproximate the reachable sets under highly nonlinear hybrid dynamics and prove the safety of a system. Additionally, we show that the functional overapproximation idea can be used to solve security problems on CPSs, such as attack detection and system recovery. The techniques presented in this talk have been implemented in the tools Flow* and POLAR.
Bio:
Dr. Xin Chen is an assistant professor in the Department of Computer Science at the University of Dayton. He received his Ph.D. in Computer Science from RWTH Aachen University in 2015 and was a postdoctoral research associate at the University of Colorado Boulder. Dr. Chen is primarily interested in the application of formal methods to solve safety and security problems on autonomous systems. He developed a series of Taylor model-based techniques to verify the safety of Cyber-Physical Systems with learning-enabled components, and several online monitoring and recovery approaches to improve the resilience of systems with potential failures or cyber-attacks. In addition, Dr. Chen is the primary developer of the CPS verification tool Flow* and the main developer of the neural network verification tool POLAR. His research was funded by the U.S. Air Force Research Laboratory.
Unleashing the Power of Data Visualization: The Key to Unlocking Valuable Insights!
Ronak Etemadpour, Radiology, UNM Health Sciences Center
Wednesday, February 22, 2023, 2:00 PM
Larrañaga Engineering Auditorium (CENT 1041)
Abstract:
Visualization is a powerful tool for amplifying one's perception of data and facilitating deeper and faster insights that can improve decision-making in a wide range of applications, including AI. When working with multidimensional datasets, visualization methods that map the data into lower dimensions are particularly useful for improving human decision-making. My research explores three different themes for using visualization to enhance decision-making: 1) I utilize external tools such as eye tracking and electroencephalogram (EEG) to assess eye-gaze distribution and brain activity while individuals make decisions and perform tasks on generated scatterplots using different visual measures. This approach allows us to better understand how people perceive and process visual information, which can inform the development of more effective visualization tools for AI and radiology; 2) I create different visual interfaces to enable people to make better decisions. For example, I developed a multi-coordinated-view visualization for environmental justice in NYC, based on sensor data observing the relationships between residents' daily activities and the indoor air quality of their homes; additionally, I found hidden relationships between different features of publicly available data (NYC 311) and reported gas leaks in NYC; 3) I combine existing visualization methods with different machine learning and statistical methods for different data exploration purposes. For instance, I analyzed California's water resources using canonical correlation and DynetVis visual analytic tools to provide a more holistic view of complex data sets. My current research interests revolve around improving decision-making using visualization techniques in various domains, including AI and health, because of the importance of informed decision-making in developing and deploying fair and ethical AI systems.
Visualization tools can enable decision-makers to understand the data and algorithms driving AI systems, hence promoting fairness, transparency, and accountability in AI.
Bio:
Dr. Etemadpour has a PhD in Computer Science from Jacobs University Bremen, Germany, and her research interests are in data visualization, human-computer interaction, and data analytics. Recently she joined UNM as a Research Associate Professor in the Radiology Department. Due to family reasons, she had to move to UNM before accepting a promotion to associate professor at the City College of New York. She has provided guidance to several undergraduate and graduate students, including a Ph.D. student who recently graduated and was hired as an assistant professor at a university in New York City. She has gained extensive experience teaching computer science courses, conducting research studies, and running funded projects throughout her career as an Assistant Professor at The City College of New York and Oklahoma State University, as well as a postdoctoral researcher at The University of Arizona. She helps people understand high-dimensional data better by visualizing it, and assists them in making informed decisions by better perceiving visually presented information. She explores different machine learning techniques, conducts user studies, and designs visualization tools for multidimensional data challenges, utilizing tools commonly used in psychology and neuroscience, such as eye tracking and electroencephalogram (EEG) tools.
Scaling HPC Applications via Predictable And Reliable Data Reduction
Sian Jin, Indiana University
Wednesday, February 15, 2023, 2:00 PM
Larrañaga Engineering Auditorium (CENT 1041)
Abstract:
For scientists and engineers, large-scale computer systems are one of the most powerful tools to solve complex high-performance computing (HPC) problems, such as large-scale machine learning, cosmological simulation, climate change, water management, and vaccine and drug design. With ever-increasing computing power, such as the new generation of exascale (one exaflop, or a billion billion calculations per second) supercomputers, the gap between computing power and limited storage capacity and I/O bandwidth has become a major challenge for scientists and engineers. This talk will introduce predictable and reliable data reduction techniques for scaling HPC applications, including machine learning and scientific applications. The talk will cover how we design and apply lossy compression in advanced HPC and ML systems (e.g., GPU-based heterogeneous systems) to improve the performance of large-scale data processing applications (e.g., HPC simulations and ML model training).
Bio:
Sian Jin is a Ph.D. candidate in the Department of Intelligent Systems Engineering at Indiana University, under the supervision of Prof. Dingwen Tao. He received his bachelor's degree in physics from Beijing Normal University in 2018. His research interests lie in high-performance computing (HPC) data reduction and lossy compression for improving the performance of scientific data analytics and management, as well as large-scale machine learning and deep learning. Six first-author papers from his Ph.D. studies have been published in prestigious conferences, including SC, VLDB, ICDE, HPDC, and IPDPS. Email: sianjin@iu.edu.
Understanding and Improving Secure Development from a Human-Centered Perspective
Kelsey Fulton, University of Maryland
Wednesday, February 1, 2023, 2:00 PM
Larrañaga Engineering Auditorium (CENT 1041)
Abstract:
Secure software development remains a difficult and expensive task. In order to make progress, it is important to understand the human and organizational factors that help or harm secure development processes. My work aims to understand these factors through the use of qualitative and quantitative methods, including interviews, large-scale surveys, and code review for vulnerabilities. In this talk, I will highlight how and why developers introduce vulnerabilities, as well as why current secure tooling, interventions, and organizational processes fail developers and security professionals, and how we can improve them. First, I will discuss why and how developers introduced, found, and fixed different types of vulnerabilities, empirically revealing an overwhelming need for investment in tooling or processes that can uncover and correct conceptual misunderstandings of security concepts. Then, I will present two studies exploring current issues with secure tooling and security communities through the use of interviews and a survey. Going forward, I plan to study the security assumptions developers make in order to improve security tooling, processes, and resources.
Bio:
Kelsey Fulton is a sixth-year PhD candidate at the University of Maryland. Their research applies a human-centered approach to secure software development, with an emphasis on the mental models and processes of software developers and the usability and improvement of secure development tools. Their work has been published in top security conferences and recognized with a Best Paper Award at the USENIX Security Symposium. They received their master's degree in computer science from the University of Maryland in 2019 and their bachelor's degree in computer science and mathematics from Millersville University in 2017.
Leveraging Multimodal Sensing for Enhancing the Security and Privacy of Mobile Systems
Habiba Farrukh, Purdue University
Wednesday, January 25, 2023, 2:00 PM
Larrañaga Engineering Auditorium (CENT 1041)
Abstract:
Mobile systems, such as smartphones, wearables (e.g., smart watches, AR/VR headsets), and IoT devices, have evolved significantly from being just a method of communication to sophisticated sensing devices that monitor and control several aspects of our lives. While these devices' advanced computing and sensing capabilities enable several useful applications, they also make them attractive targets for attackers, leading to several security threats and loss of privacy. In this talk, I explore how we can leverage multimodal sensing to design secure and usable user and device authentication systems for mobile and IoT devices through a combination of computer vision, machine learning, and cryptographic methods. In particular, I will present two systems for mobile security and privacy: a liveness detection system for protecting face authentication systems on mobile devices, and a group pairing system for IoT devices with different sensing modalities. Through this research, we develop tools and algorithms that allow developers to implement effective and user-friendly systems for user and device authentication.
Bio:
Habiba Farrukh is a Ph.D. candidate in the Department of Computer Science at Purdue University, where she is advised by Professor Z. Berkay Celik. Habiba has conducted research on a variety of topics, including mobile and IoT security and privacy and human-centered computing. Her dissertation focuses on leveraging multimodal sensing on mobile and IoT devices to provide rigorous security and privacy guarantees for these systems. She received the Bilsland Dissertation Fellowship for her dissertation in 2021. She led a team of five students to improve the usability and conformity of Android authorization APIs and the WearOS permission model for the Google ASPIRE (Android Security and PrIvacy REsearch) projects in 2021 and 2022. She is expected to earn her Ph.D. in the Spring of 2023. Habiba also interned with the Machine Learning Science team at Amazon Robotics. More information is available at https://habiba-farrukh.github.io/.
Algorithm Design for Solving Hard Problems
Christoph Weidenbach, Max Planck Institute Informatics, Saarbruecken, Germany
Wednesday, December 7, 2022, 2:00 PM
Location: VIRTUAL
Zoom Link: https://unm.zoom.us/j/99722905204
Passcode: 437243
Abstract:
Computer science is about problem solving with the help of a computer. A problem is hard if, in general, its solution requires exponentially many resources, in particular time. Hard problems are ubiquitous in the real world.
In this talk I try to explain why algorithm design for hard problems is different from algorithm design for simple problems, and why we have seen breakthroughs in solving hard problems in the past 20 years.
Bio:
Christoph Weidenbach is an independent research group leader at the Max Planck Institute for Informatics in Germany. His research interests are in automated reasoning and its application to real-world problems. The Max Planck Society is one of the world's leading research societies, and its informatics institutes are among the leading informatics institutes in the world.
Geometric mechanics formulations and structure-preserving discretizations for physics
Christopher Eldred, Sandia National Labs
Wednesday, November 30, 2022, 2:00 PM
Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
Geometric mechanics formulations (such as variational/Lagrangian, Hamiltonian, metriplectic, GENERIC, etc.) underlie essentially all of the major equation sets used in the modeling of physics, for both reversible (entropy-conserving) and irreversible (entropy-generating) dynamics. This includes both solid and fluid mechanics, along with other areas of classical physics. Utilizing these formulations, structure-preserving (SP) spatial, temporal, and spatiotemporal discretizations such as compatible Galerkin methods, symplectic integrators, and discrete exterior calculus (DEC) can be developed by emulating the key features of the relevant continuous geometric structure. An SP discretization approach leads to schemes with many useful properties, such as freedom from spurious computational modes, consistent energetics, and controlled dissipation of enstrophy or thermodynamic entropy. This talk will discuss the use of geometric mechanics formulations and structure-preserving discretizations in physics, illustrated with some examples from atmospheric modeling.
Bio:
Dr. Christopher Eldred received his PhD in Atmospheric Science from Colorado State University, working with David Randall (2015). He did two postdocs in France, the first at LMD in Paris with Thomas Dubos, and the second in the AIRSEA INRIA team in Grenoble with Laurent Debreu. He has been a staff scientist in computational mathematics at SNL since 2019. He works on geometric mechanics formulations and associated structure-preserving discretizations for physics (continuum mechanics and kinetic models).
Mining the Mechanisms of Rapid Evolution
Davorka Gulisija, University of New Mexico
Wednesday, November 16, 2022, 2:00 PM
Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
Rapid (contemporary) evolution underlies some of the greatest challenges that currently face humanity, including viral adaptation, pesticide and antibiotic resistance, biological invasions, host-associated microbiome shifts, and response to climate change. My group develops mathematical models, computational and statistical approaches, and genomic data analyses toward comprehensive theoretical and statistical frameworks for rapid evolution. The talk will present an overview of the projects currently conducted in my group.
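One of the simplest building blocks for such models is Wright-Fisher reproduction with selection; a minimal sketch (the model choice and parameter values are our illustration, not necessarily the group's):

```python
import random

def wright_fisher(n_pop, p0, s, generations, seed=0):
    """Wright-Fisher model: each generation, n_pop offspring alleles are
    resampled from the current population; selection coefficient s tilts
    the sampling toward the focal allele. Returns its final frequency."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        # selection reweights the allele frequency before random sampling
        w = p * (1 + s) / (p * (1 + s) + (1 - p))
        p = sum(rng.random() < w for _ in range(n_pop)) / n_pop
        if p in (0.0, 1.0):      # lost or fixed: nothing more can change
            break
    return p

# Strong selection drives an initially rare beneficial allele upward:
p_final = wright_fisher(n_pop=500, p0=0.1, s=0.2, generations=200)
```

With s = 0 the same loop exhibits pure genetic drift, where the allele wanders to loss or fixation by chance alone; comparing the two regimes is the starting point for detecting rapid adaptation in data.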
Bio:
Prof. Davorka Gulisija completed her Ph.D. in Computational Genetics in 2013 at the University of Wisconsin – Madison and her postdoc in Mathematical Biology in 2019 at the University of Pennsylvania. She has been an assistant professor in the Department of Biology at UNM since 2020.
Molecular dynamics simulation and physics-based modeling in the age of machine learning
Susan R. Atlas, University of New Mexico
Wednesday, November 9, 2022, 2:00 PM
Location: Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
Molecular dynamics (MD) simulation is an essential tool for exploring the energy landscapes and interactions of complex biophysical and materials systems. Applications range from probing ligand-protein interactions and elucidating the design principles of molecular machines, to optimizing the composition of novel high-entropy alloys. In recent years, machine learning models of MD force fields have grown increasingly sophisticated, expanding high-fidelity simulations to new classes of systems. In this talk, we describe a complementary, physics-based approach to force field design: the ensemble DFT charge-transfer embedded-atom method (ECT-EAM), and show how this approach can overcome the intrinsic chemical scaling limits faced by ML approaches. This is accomplished by incorporating the quantum mechanics of chemical bonding through an ensemble density functional theory reformulation [1-3] of the embedded-atom method for elemental materials. ECT-EAM’s expressive power derives from its description of charge transfer and charge polarization in terms of weighted ensembles of ionic and atomic excited state basis densities. We discuss how this principled framework provides a compact yet flexible representation of atomistic interactions, encompassing both local and system-wide effects, and covalent and ionic bonding. Finally, we describe scaling to very large systems through chemical potential equalization, to adjust the ensemble weights at each dynamical time step of a simulation.
Acknowledgements: NSF; DoD/DTRA CB Basic Research Program (Grant No. HDTRA1-09-1-0018).
[1] SR Atlas. J. Phys. Chem. A 125, 3760−3775 (2021).
[2] G Amo-Kwao and SR Atlas, Radial basis function electron densities with asymptotic constraints (preprint, University of NM, 2022).
[3] K Muralidharan, SM Valone, and SR Atlas. arXiv:cond-mat/0705.0857v1 (2007).
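The time-stepping loop at the heart of any MD code, regardless of force field, can be sketched in a few lines; below is a velocity Verlet integrator for a 1-D harmonic "bond" (our illustrative stand-in, not the ECT-EAM force field):

```python
import math

def velocity_verlet(x, v, force, mass, dt, steps):
    """Velocity Verlet: the standard MD time-stepping scheme.
    force(x) returns the force at position x."""
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / mass) * dt * dt
        f_new = force(x)
        v += 0.5 * (f + f_new) / mass * dt
        f = f_new
    return x, v

# Harmonic bond with spring constant k = 1: force = -k * x.
# Integrate one full oscillation period (2*pi time units):
x, v = velocity_verlet(x=1.0, v=0.0, force=lambda x: -x, mass=1.0,
                       dt=0.01, steps=int(2 * math.pi / 0.01))
```

After one period the trajectory returns to its starting point (x, v) = (1, 0) to high accuracy, reflecting the method's good long-time behavior; real MD replaces the toy force with a many-body force field such as the one described in the talk.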
Bio:
Prof. Susan R. Atlas is a theoretical and computational chemical physicist who is currently Associate Professor of Chemistry and Biology and Research Professor of Physics and Astronomy at UNM. She is a Steering Committee Member of the Center for Quantum Information and Control, and Associate Director of the UNM Nanoscience and Microsystems Program. Prof. Atlas has served as a Program Director at NSF in the Directorate of Mathematical and Physical Sciences, Division of Chemistry, and as Director of the UNM Center for Advanced Research Computing. Prof. Atlas received her B.A. in Mathematics and Physics summa cum laude from Queens College of the City University of New York, and her A.M. in Physics and Ph.D. in Chemical Physics, supported by an NSF Pre-doctoral Fellowship, from Harvard University. She was a postdoctoral fellow in the Center for Nonlinear Studies and Chemical and Laser Sciences Division at Los Alamos National Laboratory, and held a Scientist position at Thinking Machines Corporation, a pioneering parallel supercomputing company. Her research interests lie in the areas of strongly correlated electronic, atomistic, and molecular genomic systems, density functional theory, molecular biophysics, scientific supercomputing, and statistical pattern recognition. She is co-inventor on a patent identifying a novel subtype of high-risk pediatric leukemia from gene expression data using machine learning techniques.
Searching for a Needle in Millions of Haystacks: A Story of HPC Monitoring and Analysis
Ben Schwaller, Sandia National Labs
Wednesday, November 2, 2022, 2:00 PM
Location: Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
High-performance Computing (HPC) systems, more commonly known as supercomputers, present a uniquely difficult use case for anomaly detection and classification. We will briefly discuss what a supercomputer is, how it is used, and the current methodologies of system and application data collection. We will discuss the challenges posed by these datasets, which can be multiple terabytes in size, and what information we are trying to learn from them. Finally, we will discuss what novel analysis techniques have been applied to extract crucial information about HPC systems.
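As a toy version of the kind of per-metric screening involved (our sketch; production HPC monitoring pipelines are far more sophisticated): flag samples that deviate from a rolling baseline by several standard deviations.

```python
from statistics import mean, stdev

def flag_anomalies(series, window=10, threshold=4.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(series)):
        base = series[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Steady node temperature readings with one thermal spike at index 15:
temps = [50.0, 50.2, 49.9, 50.1, 50.0, 49.8, 50.1, 50.2, 49.9, 50.0,
         50.1, 49.9, 50.0, 50.2, 49.8, 95.0, 50.1, 50.0, 49.9, 50.1]
```

The hard part at supercomputer scale is not the statistic but running something like this across millions of metric streams while keeping false-positive rates low enough to be actionable.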
Bio:
Ben Schwaller is a Senior Computer Science R&D staff member at Sandia National Laboratories in the HPC Development division. He specializes in data analysis and visualization of High-performance Computing (HPC) monitoring datasets to provide actionable intelligence to users and administrators. He also manages several university research collaborations, heads an HPC analysis collaboration with Los Alamos and Lawrence Livermore, and performs network analysis for cybersecurity efforts at SNL. Ben earned his Bachelor of Science in Electrical Engineering from the University of Florida and his Master of Science in Electrical and Computer Engineering from the University of Pittsburgh. Through graduate school, Ben worked in the Space, High-performance, and Resilient Computing (SHREC) research group carrying out performance optimizations and architecture comparison studies for space computing.
Can Humans Learn From Machine Learning In Drug Discovery?
Tudor I. Oprea, Roivant Sciences
Wednesday, October 19, 2022, 2:00 PM
Zoom Meeting: https://unm.zoom.us/j/99722905204
Meeting ID: 997 2290 5204
Passcode: 437243
Abstract:
The fundamental building blocks for data science and informatics (DSI) in drug discovery (and in general) are data, information, and knowledge. To these, wisdom can be added with the appropriate domain expertise. As an integral DSI activity, machine learning (ML) can incorporate all these building blocks by attributing higher weights to information and knowledge. In drug discovery, humans have successfully deployed ML across the three main pillars, as follows: Diseases: EMR processing, nosology, ontology, and EMR-based ML for Dx & mechanisms; Targets: knowledge graphs, computational biology, target selection & validation; Drugs: virtual screening, vaccine/antibody/drug design. Acquiring high-quality data is critical to the success of ML model deployment. This talk will discuss our efforts to annotate information for drug targets [1], diseases [2, 3], proteins [4], and drugs [5]. This information is accessible through the Pharos portal, https://pharos.nih.gov/, and DrugCentral, https://drugcentral.org/, respectively. By combining Pharos-derived information, we developed a gene+disease/protein/pathway/phenotype knowledge graph (KG), which helped us identify five novel genes associated with Alzheimer's disease [6]. While this KG-based ML was validated experimentally, it is no longer reproducible; a few lessons from this ML model development will be discussed. At the small molecule level, we developed a multi-model ML platform to predict anti-COVID-19 activities [7], which we discuss in the context of discovering and repurposing drugs intelligently [8] during the COVID-19 pandemic. Our efforts in this area [9] are discussed in the context of supporting in vivo evidence [10]. The issue of "data shortage" for ML model building in drug discovery is discussed in the context of access to high-quality, relevant data. One challenge is "internal compatibility" regarding assays in the public domain.
ML model building requires careful consideration of "meta-data" and assay design. Context informed our decisions when modeling water solubility, permeability, and hERG binding. Some lessons from this ML model development will be discussed. DSI in drug discovery faces several societal pressure points. In addition to reducing the time from idea to cure, these include i) lowering the number of animal experiments [11]; and ii) reducing the use of hazardous chemicals, with a focus on sustainability [12] and "green chemistry." Most of these goals can be addressed with the systematic use of ML models. Using past learnings, pharmaceutical companies are actively leveraging different flavors of ML to accelerate drug discovery while reducing the number of in vivo experiments and environmental impact.
References
1) Santos R, et al. Nat Rev Drug Discov. 2017, 16:19-34. doi: 10.1038/nrd.2016.230
2) Haendel M, et al. Nat Rev Drug Discov. 2020, 19:77-78. doi: 10.1038/d41573-019-00180-y
3) Grissa D, et al. Database (Oxford) 2022, baac019. doi: 10.1093/database/baac019
4) Oprea TI, et al. Nat Rev Drug Discov. 2018, 17:317-332. doi: 10.1038/nrd.2018.14
5) Sheils TK, et al. Nucleic Acids Res. 2021, 49:D1334-D1346. doi: 10.1093/nar/gkaa993
6) Binder J, et al. Commun Biol. 2022, 5:125. doi: 10.1038/s42003-022-03068-7
7) KC GB, et al. Nat Mach Intell. 2021, 3:527–535. doi: 10.1038/s42256-021-00335-w
8) Levin JM, et al. Nat Biotechnol. 2020, 38:1127-1131. doi: 10.1038/s41587-020-0686-x
9) Bocci G, et al. ACS Pharmacol Transl Sci. 2020, 3:1278-1292. doi: 10.1021/acsptsci.0c00131
10) L. Si et al. Nat Biomed Eng. 2021, 5:815-829. doi: 10.1038/s41551-021-00718-9
11) Madden JC, et al. Altern Lab Anim. 2020, 48:146-172. doi: 10.1177/0261192920965977
12) Wołos A, et al. Nature 2022, 604:668-676. doi: 10.1038/s41586-022-04503-9
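To make one cheminformatics primitive behind such small-molecule ML concrete (a generic textbook measure, not the talk's platform): Tanimoto similarity between molecular fingerprints is the workhorse of virtual-screening nearest-neighbor searches. The bit positions below are made up for illustration.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two molecular fingerprints,
    each represented as the set of its 'on' bit positions."""
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

# Toy fingerprints: substructure bits each hypothetical molecule sets
aspirin_like = {1, 4, 7, 9, 12}
salicylate_like = {1, 4, 7, 12}
caffeine_like = {2, 5, 11}

sim_close = tanimoto(aspirin_like, salicylate_like)   # high overlap
sim_far = tanimoto(aspirin_like, caffeine_like)       # no overlap
```

Similarity searches like this presuppose the "high-quality, relevant data" the abstract emphasizes: a fingerprint comparison is only as meaningful as the assay annotations behind the molecules being compared.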
Bio:
Tudor I. Oprea is a digital drug hunter with three decades of experience in machine learning and knowledge management applied to target and drug discovery. He holds an MD/PhD from the University of Medicine and Pharmacy in Timişoara, Romania. At AstraZeneca, he developed the “lead-like approach” and “ChemGPS”. At the University of New Mexico, he helped evaluate 505 bio-assays spanning hundreds of thousands of chemicals. This led to the identification of 7 NIH-designated chemical probes, including the first agonist and antagonist for GPER, and later inhibitors for the GLUT2, GLUT3 and GLUT5 transporters. Tudor has been developing machine learning models since 1989, first in cheminformatics and QSAR, and later in disease and target biology. His team developed DrugCentral and Pharos, part of an NIH Common Fund project. He has co-authored over 300 publications and 10 US patents, and edited 2 books on informatics in drug discovery (see Google Scholar profile). He is currently VP, Translational Informatics at Roivant Sciences, and Professor Emeritus of Medicine at UNM School of Medicine. His team is currently building an end-to-end machine learning suite for in silico drug discovery.
The Lean Proof Assistant: Past, Present, and Future
Leonardo de Moura, Microsoft Research
Wednesday, October 12, 2022, 2:00 PM
Zoom Meeting: https://unm.zoom.us/j/99722905204
Meeting ID: 997 2290 5204
Passcode: 437243
Abstract:
Lean is a proof assistant and functional programming language being developed at Microsoft Research. Lean is used and contributed to by an active community of mathematicians around the world. Lean is the proof assistant of choice for the mathematics community: the Lean mathematical library (Mathlib) has contributions from over two hundred mathematicians, comprises over one million lines of code, and was lauded by Quanta Magazine as one of 2020’s biggest breakthroughs in mathematics. Lean is unique in having provided real, community-recognized assistance to research-level mathematics, including helping Fields Medalist Peter Scholze confirm a new theorem. The collaboration with Scholze has been reported in the journal Nature. More information about Lean can be found at http://leanprover.github.io. The interactive book “Theorem Proving in Lean” is the standard reference for Lean.
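To give a flavor of what a Lean development looks like (a small Lean 4 example of our own, not taken from Mathlib):

```lean
-- Commutativity of addition on the natural numbers,
-- proved by induction on the second argument.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp
  | succ k ih => rw [Nat.add_succ, ih, Nat.succ_add]
```

Each tactic step is checked by Lean's kernel, so an accepted proof carries a machine-verified guarantee; Mathlib builds results like Scholze's from thousands of such lemmas.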
Bio:
Leonardo de Moura is a Senior Principal Researcher at Microsoft in the RiSE group. He joined Microsoft in 2006, and before that, he was a Computer Scientist at SRI International. He works on theorem proving and automated reasoning. In 2010, he received the Haifa Verification Award for his work on automated reasoning. In 2014, one of his articles, “Z3: an efficient SMT solver,” was nominated as the most influential tool paper in the first 20 years of TACAS. In 2015, Z3 received the Programming Languages Software Award from the ACM. He received the Herbrand Award in 2019 in recognition of his contributions to SMT solving, including its theory, implementation, and application to a wide range of academic and industrial needs. He received the CAV Award in 2021 for his contributions to SMT solving. His current project, the Lean theorem prover, has been featured in many popular science magazines, including Nature, Wired, Quanta, and Big Think, to cite a few. For more information about his work: http://leodemoura.github.io
Everything Old is New Again: The Past, Present, and Future of Analog Computing
Ben Feinberg, Sandia National Labs
Wednesday, October 5, 2022, 2:00 PM
Location: Larrañaga Engineering Auditorium (Centennial 1041)
Abstract:
Analog computing is currently experiencing a resurgence of interest as the scaling trends which have driven digital computing over the past half century have slowed. At the same time, developments in non-volatile memories have enabled new types of dense and highly-programmable analog kernels with the potential for order-of-magnitude improvements over conventional digital systems. Despite this potential, the historical challenges for analog computing around precision and reliability remain. This talk will discuss recent developments in analog computing for neural networks, signal processing, and scientific computing and how device-to-application co-design can help overcome the precision and reliability challenges. This talk will also discuss the challenges and opportunities as we go from the analog kernels of today to heterogeneous analog systems of the future. These heterogeneous systems can enable more applications to take advantage of the efficiencies of analog computing, but also require new techniques to enable programmers to harness this potential.
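A toy model of the precision challenge (our sketch, not Sandia's methodology): an analog memory cell can hold only a few distinguishable conductance levels, and quantizing weights to those levels perturbs the dot products a crossbar computes.

```python
def quantize(w, levels):
    """Round each weight to the nearest of `levels` uniform steps in [-1, 1],
    mimicking the finite conductance states of an analog memory cell."""
    step = 2.0 / (levels - 1)
    return [round((x + 1.0) / step) * step - 1.0 for x in w]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

weights = [0.31, -0.77, 0.05, 0.92, -0.44]
inputs = [1.0, 0.5, -1.0, 0.25, 0.8]

exact = dot(weights, inputs)
coarse = dot(quantize(weights, levels=5), inputs)     # ~2.3 bits/cell
fine = dot(quantize(weights, levels=257), inputs)     # ~8 bits/cell
```

Moving from 5 to 257 levels shrinks the dot-product error by roughly an order of magnitude here, which is the flavor of trade-off device-to-application co-design navigates: how much cell precision a given workload actually needs.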
Bio:
Ben Feinberg is a Senior Member of Technical Staff in the Scalable Computer Architecture Group at Sandia National Laboratories. He works on a variety of projects related to energy-efficient and reliable computing for autonomous systems including projects for the DOE Vehicle Technology Office and DARPA. Prior to joining Sandia in 2019, Ben completed his PhD in Electrical Engineering at the University of Rochester.
The State of Computer Science Education in New Mexico
Paige Prescott
Wednesday, May 4, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
Computer Science education in the K-12 arena is getting a lot of attention nationally and locally as the demand for computing skills continues to grow. Developing an ecosystem to expand computer science education in New Mexico requires a multi-pronged approach that engages stakeholders and their unique interests in CS. In this colloquium, I will discuss the New Mexico context of computer science education including the history and growth over time. Driven by a vision that all K-12 students should be taught this important content, I will share data that helps to inform my work to expand access to computer science education. Included in this talk will be information about how CS looks at different grade levels and how teachers and schools are approaching this work.
Bio:
Paige Prescott has been involved in Computer Science education for more than fifteen years. As Executive Director of the Computer Science Alliance, she is interested in strengthening the community of people involved in computer science education and in advocating at the state, district, and local levels for more computer science offerings in New Mexico, especially in underserved rural and tribal communities. She is pursuing a PhD in Organization, Information and Learning Sciences at UNM, where she is focusing on computer science education and the teacher experience. Paige has trained over 1000 teachers in NM and around the US to bring computer science to their students and has written curriculum for the integration of computer science into science classes. She was recognized by the New Mexico Tech Council as a Women in Technology Honoree for her work in CS education.
Making Digital Currencies Payments Mainstream
Mahdi Zamani
Wednesday, April 27, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
Cryptocurrencies exemplify a digital transformation of commerce and a potential for disruptive advances in payment technologies. As of late 2021, the total value of all cryptocurrency assets exceeded $3 trillion. Compare this to the roughly $6.6T total worth of cash and $12T total worth of gold globally around the same time. Cryptocurrencies have already become a mainstream form of asset. Every year, hundreds of billions of dollars are invested in blockchain startups. The trend has also led central banks around the world to explore the possibility of issuing new forms of central bank money, known as central bank digital currencies (CBDC), potentially disrupting entire economies in the near future.
On the other hand, compare the $20+ trillion that Visa and Mastercard together moved last year in consumer spending to only $55 billion in projected spending on cryptocurrencies for 2022. The huge gap suggests digital currencies still have a long path to becoming a mainstream form of payment, at least in the consumer spending sector. While gold never made it there either, likely due to its physical limitations, digital currencies have the potential to be even less frictional than today’s mainstream payment rails. Nevertheless, bringing that potential into action (e.g., improved speed) often raises new challenges from other angles (e.g., security and privacy) that prohibit the adoption of digital currencies at scale.
In this talk, I describe some of these challenges and introduce some of Visa’s ongoing research efforts aimed at tackling these challenges. Our goal is to position Visa as a technical leader in the digital currencies landscape to promote its benefits and raise awareness about its risks and challenges. The new flows rely on advanced cryptographic techniques and tools that require extensive in-vitro experimentation before becoming ready for mass adoption.
Bio:
Mahdi is currently leading the Digital Currency Research team at Visa Research in Palo Alto, CA. The team is focused on building the next generation of mainstream payment systems that support various forms of digital currencies, ranging from cryptocurrencies and stablecoins to central bank digital currencies (CBDC). Before that, Mahdi had been a Staff Research Scientist at Visa Research since 2017, leading applied research on projects such as RapidChain, FlyClient, and Zether as well as product initiatives such as Visa’s Universal Payment Channels (UPC) and Offline Payment System (OPS). Mahdi’s research at Visa has been published in top-tier security and financial venues and has received 850+ academic citations, 15 patents, and extensive media coverage. Before joining Visa, he spent a year at Yale University as a postdoctoral associate in 2016 and received his Ph.D. in Computer Science from the University of New Mexico in 2015.
Trustworthy AI for Wildlife Conservation: AI and Humans Combating Extinction Together
Tanya Berger-Wolf
Wednesday, April 20, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
Increasingly, AI is the foundation of decisions big and small, affecting the lives of individuals and the wellbeing of our planet, the source of income for corporations and the foundation of resource distribution for populations. Data-driven, AI-enabled decisions are also the hope of solving our planet's biggest challenges, from climate change and poverty to pandemics and global crime. But if these solutions are to be trusted by those for whom they are intended, those whom they affect the most, then the entire process of decision-making must be fair, just, inclusive, and participatory. The intended beneficiaries of the solutions must be more than mere data points or data providers; they must be active partners every step of the way, from data to solution. I will show how this can work in the context of conservation. I will present an example of how a data-driven, AI-enabled decision process becomes trustworthy by opening a wide diversity of opportunities for participation, supporting community-building, addressing the inherent data and computational biases, and providing transparent measures of performance. The community becomes the decision-maker, and AI scales the community, as well as the puzzle of data and solutions, to the planetary scale, turning massive collections of images into a high-resolution information database, enabling scientific inquiry, conservation, and policy decisions.
I will show how it all can come together to a deployed system, Wildbook, a project of tech for conservation non-profit Wild Me, with species including whales (flukebook.org), sharks (sharkbook.ai), giraffes (giraffespotter.org), and many more. Read more: https://www.forbes.com/sites/bernardmarr/2021/01/29/the-amazing-ways-wild-me-uses-artificial-intelligence-and-citizen-scientists-to-help-with-conservation/
Bio:
Dr. Tanya Berger-Wolf is a Professor of Computer Science and Engineering, Electrical and Computer Engineering, and Evolution, Ecology, and Organismal Biology at the Ohio State University, where she is also the Director of the Translational Data Analytics Institute. Recently she was awarded a $15M US National Science Foundation grant to establish a new Harnessing the Data Revolution Institute, founding a new field of study: imageomics. As a computational ecologist, her research is at the unique intersection of computer science, wildlife biology, and social sciences. She creates computational solutions to address questions such as how environmental factors affect the behavior of social animals (humans included). Berger-Wolf is also a director and co-founder of the conservation software non-profit Wild Me, home of the Wildbook project, which brings together computer vision, crowdsourcing, and conservation. Wildbook was recently chosen by UNESCO as one of the top 100 AI projects worldwide supporting the UN Sustainable Development Goals. It has been featured in media including Forbes, The New York Times, CNN, National Geographic, and most recently The Economist.
Berger-Wolf has given hundreds of talks about her work, including at TED/TEDx, UN/UNESCO AI for the Planet, and SXSW EDU. Prior to coming to OSU in January 2020, Berger-Wolf was at the University of Illinois at Chicago. Berger-Wolf holds a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign. She has received numerous awards for her research and mentoring, including University of Illinois Scholar, UIC Distinguished Researcher of the Year, US National Science Foundation CAREER, Association for Women in Science Chicago Innovator, and the UIC Mentor of the Year.
Fairness and accuracy (and transparency and…) in algorithms for criminal justice and housing
Cris Moore
Wednesday, April 13, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
The study of algorithmic bias has become a burgeoning subfield of AI, machine learning, and theoretical computer science. Many people are now working to design algorithms that guarantee various kinds of statistical fairness. I want to take a 90-degree turn from this, and share some experiences of working on algorithms “on the ground,” including a study of a pretrial risk assessment algorithm in Bernalillo County, where Albuquerque is located. By collaborating with legal scholars, court administrators, housing lawyers, and others, I’ve learned how they think about fairness in decision making: they are concerned not just with statistics, but with procedural issues—who has the burden of proof, and how data and algorithms can be explained, audited, and contested. I’ll also share some thoughts on what I think our responsibilities are as computer scientists to engage with these domains more deeply.
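For concreteness, one of the simplest statistical fairness criteria studied in that subfield can be computed in a few lines (the group labels and counts below are hypothetical, not from the Bernalillo County study):

```python
def selection_rates(decisions):
    """decisions: list of (group, released) pairs.
    Returns the per-group release rate; the gap between groups is the
    'statistical parity difference'."""
    totals, positives = {}, {}
    for group, released in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(released)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical pretrial release decisions for two groups of 100 people:
toy = [("A", True)] * 60 + [("A", False)] * 40 \
    + [("B", True)] * 45 + [("B", False)] * 55
rates = selection_rates(toy)
parity_gap = rates["A"] - rates["B"]
```

The talk's point is precisely that a statistic like this, however clean, says nothing about burden of proof or whether the decision can be explained, audited, and contested.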
Bio:
Cristopher Moore received his B.A. in Physics, Mathematics, and Integrated Science from Northwestern University, and his Ph.D. in Physics from Cornell. From 2000 to 2012 he was a professor at The University of New Mexico, with joint appointments in Computer Science and Physics. Since 2012, Moore has been a resident professor at the Santa Fe Institute. He has also held visiting positions at the Niels Bohr Institute, École Normale Supérieure, École Polytechnique, Université Paris 7, Northeastern University, the University of Michigan, and Microsoft Research. He is an elected Fellow of the American Physical Society, the American Mathematical Society, and the American Association for the Advancement of Science. With Stephan Mertens, he is the author of The Nature of Computation from Oxford University Press.
Data Labor and Data Literacy: Lessons from the DataWorks Program
Betsy DiSalvo
Wednesday, April 6, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
Data has become embedded in all aspects of our lives, gathered and communicated to us in ways that could not have been imagined 20 years ago. While there was a time when data was considered impersonal, inert, and intangible, today we recognize that data is a reflection of the context in which it is collected, analyzed, and communicated. The DataWorks program is an entry-level job program that serves both as educational outreach for the College of Computing and as a research platform. The outreach goals are to employ and train young people from disenfranchised neighborhoods as Data Wranglers, cleaning, formatting, and preparing data for analysis, and so prepare them for more advanced careers in data. As a research program, it gives us access to study the labor of data and how data literacy is developed. We have found surprising ways that race, trauma, fatigue, exploitation, and empowerment appear in the labor of data; examples of vernacular turns that less traditional audiences may use in developing data literacy; and the impacts of these factors on the generation and preparation of data.
Bio:
Dr. Betsy DiSalvo is an Associate Professor in the School of Interactive Computing at Georgia Institute of Technology. DiSalvo's work is focused on computer science (CS) education and informal learning. She is PI for several NSF-funded CS education projects, including exploring maker-oriented learning approaches to increase transfer and reflection in CS courses and the DataWorks project, an authentic working environment for minoritized youth that provides CS education through entry-level jobs. DiSalvo collaborates with game developers and others to develop educational games, such as the Beats Empire game, which assesses CS learning outcomes, and the Hemonauts game, which helps chronically ill children learn science concepts related to their bodies. In the past decade, DiSalvo has led research efforts to understand minoritized parents' use of information technology in their children's education, working with African American and Latin American parents in Atlanta. DiSalvo's work has included the development of the Glitch Game Testers program, a CS education effort with African American males, and projects for the Carnegie Science Museum, the Children's Museum of Atlanta, Eyedrum Art Center and the Walker Art Center.
Termination of Stateless Amnesiac Flooding
Amitabh Trehan
Wednesday, March 30, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
Imagine a network where every user is very aggressive about forwarding the messages they receive (like hyperactive WhatsApp users). The users are polite enough in that they do not send the message back to the users they have just received the message from in the previous round. However, as busy social media users, they quickly forget this interaction and forward the same message if they get it again in the future - potentially circulating the same message till the end of social media civilisation! Will they ever stop?
Consider any arbitrary finite graph modelling a synchronous network and a user/users who begin the process by flooding (we call this Amnesiac Flooding) a message $M$, i.e., sending $M$ to all neighbours. Nodes continue the process further by forwarding to all neighbours except those from whom they just received $M$. The traditional flooding algorithm uses a flag to enforce termination by explicitly rejecting a message which comes again, but amnesiac flooding does not use any such memory, making it stateless. Thus, synchronous amnesiac flooding is a stateless algorithm which achieves broadcast, but does it terminate? In our work, we show the surprising result that amnesiac flooding begun from any number of sources does indeed terminate, thus realising a stateless practical broadcast algorithm. We also discover a sharp differentiation between termination times on bipartite and non-bipartite graphs, promising other potentially useful stateless algorithms. However, not everything works out as nicely in asynchronous networks!
Based on:
- Walter Hussak and Amitabh Trehan : On The Termination of Flooding, STACS 2020
- Walter Hussak and Amitabh Trehan: On Termination of a Flooding Process (Brief Announcement). PODC 2019
- Walter Hussak and Amitabh Trehan: Terminating Cases of Flooding
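The round dynamics described above are easy to simulate; a minimal sketch (the example graphs and the round-counting convention are our own):

```python
from collections import defaultdict

def amnesiac_flood(adj, sources):
    """Synchronous amnesiac flooding: each round, every node that received
    M forwards it to all neighbours except those it received M from in the
    previous round. Returns the number of rounds in which messages fly."""
    in_flight = {(s, v) for s in sources for v in adj[s]}
    rounds = 0
    while in_flight:
        rounds += 1
        received = defaultdict(set)          # node -> senders this round
        for u, v in in_flight:
            received[v].add(u)
        in_flight = {(v, w) for v, senders in received.items()
                     for w in adj[v] if w not in senders}
    return rounds

# 4-cycle (bipartite) vs 5-cycle (non-bipartite), flooded from node 0:
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
r4 = amnesiac_flood(c4, {0})
r5 = amnesiac_flood(c5, {0})
```

Both runs terminate, and the odd (non-bipartite) cycle takes noticeably longer, matching the bipartite/non-bipartite gap the papers establish.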
Bio:
Prof. Amitabh Trehan is an Associate Professor in the Computer Science department at Durham University, UK, where he heads the NESTiD (Network Engineering Science and Theory in Durham) research group. Amitabh did his PhD at the University of New Mexico with Prof. Jared Saia on the topic of self-healing networks. He followed that up with postdocs at the University of Victoria, Canada, and at the Technion and the Hebrew University in Israel, before moving to the UK. Amitabh’s research interests centre largely on algorithms, in particular on developing distributed algorithms which are mathematically proven to be resilient and efficient.
Expressive computation: integrating programming and physical making
Jennifer Jacobs
Wednesday, March 23, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
Creators in many different fields use their hands. Artists and craftspeople manipulate physical materials, manufacturers manually control machine tools, and designers sketch ideas. Computers are increasingly displacing many manual practices in favor of procedural description and automated production. Despite this trend, computational and manual forms of creation are not mutually exclusive. In this talk, I argue that by developing methods to integrate computational and physical making, we can dramatically expand the expressive potential of computers and broaden participation in computational production. To support this argument, I will present research across three categories: 1) integrating physical and manual creation with computer programming through domain-specific programming environments; 2) broadening professional computational making through computational fabrication technologies; and 3) broadening entry points into computer science learning by blending programming with art, craft, and design. Collectively, my research demonstrates how developing computational workflows, representations, and interfaces for manual and physical making can enable manual creators to leverage existing knowledge and skills. Furthermore, I’ll discuss how collaborating with practitioners from art, craft, and manufacturing science can diversify approaches to knowledge production in systems engineering and open new research opportunities in computer science.
Bio:
Jennifer Jacobs is Assistant Professor at the University of California Santa Barbara in Media Arts and Technology and Computer Science (by courtesy). At UCSB, she directs the Expressive Computation Lab, which investigates ways to support expressive computer-aided design, art, craft, and manufacturing by developing new computational tools, abstractions, and systems that integrate emerging forms of computational creation and digital fabrication with traditional materials, manual control, and non-linear design practices. Prior to joining UCSB, Jennifer received her Ph.D. from the Massachusetts Institute of Technology and was a Postdoctoral Fellow at the Brown Institute of Media Innovation within the Department of Computer Science at Stanford University. She also received an M.F.A. and a B.F.A. from Hunter College and the University of Oregon, respectively. Her research has been presented at leading human-computer interaction research venues and journals including UIST, DIS, SIGGRAPH, and, most prominently, at the flagship ACM Conference on Human Factors in Computing Systems (CHI), where she received two best paper awards and one best paper honorable mention award in the past four years.
Designing and Implementing Integrated Computing Curricula in the Upper Elementary Grades: Lessons from Utah and Montana
Kristin Searle
Wednesday, March 9, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
As computer science education becomes more and more prevalent in K-12 classrooms, access and equity remain serious issues. We must design and implement computing curricula that make sense to a wide range of learners from a variety of backgrounds. Through professional development, we must also give teachers the tools to engage their students and the computing content in meaningful ways. One technology that has been shown to be especially promising for broadening participation in computing is electronic textiles (e-textiles). E-textiles bring together familiar aspects of fabric crafts with electronic components that are sewable and programmable, allowing for electronics to be embedded in items like clothing, bags, and decorative pillows. Drawing on the qualitative and design-based aspects of the work, I describe the process of developing and implementing culturally responsive, integrated computing curricula using e-textiles materials in two different contexts, the Elementary STEM Teaching Integrating Computing and Textiles Holistically (ESTITCH) project in Utah and the Indian Education and Computing for All project in Montana. Findings highlight both the affordances of integrated computing curricula and the challenges of such work.
Bio:
Dr. Kristin A. Searle is an assistant professor of Instructional Technology and Learning Sciences at Utah State University. She received her Ph.D. in education and anthropology from the University of Pennsylvania. Her work focuses on how participating in making activities can broaden students’ sense of what computing is and who can do it, with a focus on the development of culturally responsive computing curricula and pedagogies. Her work has appeared in journals such as Harvard Educational Review and the Journal of Science Education and Technology.
Distributed Computation in the Sleeping Model
Gopal Pandurangan, University of Houston
Wednesday, March 2, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
The sleeping model (introduced in PODC 2020) is a model for design and analysis of resource-efficient distributed algorithms. In the sleeping model, a node can be in one of two modes in any round --- sleeping or awake (unlike the traditional distributed model where nodes are always awake). Only the rounds in which a node is awake are counted, while sleeping rounds are ignored. A node spends resources (such as energy) only in the awake rounds and hence the main goal is to minimize the awake complexity of a distributed algorithm, the average or worst-case number of rounds any node is awake.
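As an illustrative sketch (not from the talk), the awake-complexity measure defined above can be made concrete in a few lines: given each node's schedule of awake rounds, the node-averaged and worst-case awake complexities are just the mean and maximum schedule sizes.

```python
# Illustrative sketch: computing awake complexity from per-node wake schedules.
# Only rounds in which a node is awake count; sleeping rounds are free.

def awake_complexities(schedules):
    """schedules: dict mapping node -> set of rounds in which it is awake.
    Returns (node-averaged awake complexity, worst-case awake complexity)."""
    counts = [len(rounds) for rounds in schedules.values()]
    return sum(counts) / len(counts), max(counts)

# Example: three nodes over a 10-round execution.
schedules = {
    "a": {0, 1},        # awake 2 rounds
    "b": {0, 5, 9},     # awake 3 rounds
    "c": {2},           # awake 1 round
}
avg, worst = awake_complexities(schedules)
# avg == 2.0 and worst == 3, whereas the traditional round
# complexity of this execution would be 10.
```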
In this talk we present results on distributed algorithms for some fundamental distributed computing problems in the sleeping model. These results show that problems such as Maximal Independent Set (MIS), Leader Election, Spanning Tree, and Minimum Spanning Tree (MST), among several others, can be solved with small awake complexity, which can be significantly better than the traditional round complexity (which counts both sleeping and awake rounds).
In particular, a main result that we will present is that the fundamental MIS problem can be solved in expected O(1) rounds under the node-averaged awake complexity measure in the sleeping model. Specifically, we present a randomized distributed algorithm for MIS that has expected O(1)-round node-averaged awake complexity and, with high probability, O(log n)-round worst-case awake complexity and O(log^{3.41} n)-round worst-case round complexity. We will also show that MST can be solved in O(log n) (worst-case) awake complexity, which is optimal. We will also discuss tradeoff bounds between the awake and round complexity of MST.
This is joint work with Soumyottam Chatterjee, Robert Gmyr, Khalid Hourani, William K. Moses Jr., John Augustine, and Peter Robinson.
Bio:
Gopal Pandurangan is a professor in the department of computer science at the University of Houston and a VAJRA visiting professor at the Indian Institute of Technology at Madras. He received his Ph.D. in computer science from Brown University in 2002. His research interests are in theory and algorithms for distributed computing, networks, and large-scale data. He has published over 125 refereed papers in these areas. His work has appeared in JACM, SICOMP, ACM TALG, STOC, FOCS, SODA, PODC, SPAA, DISC, and INFOCOM. His research has been supported by research grants from the US National Science Foundation, US-Israeli Binational Science Foundation, and the Singapore Ministry of Education.
An Unquenchable Appetite: Games, Play, and Climate Change
Jessica Hammer
Wednesday, February 23, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
Anthropogenic climate change is the defining challenge of our times. It is global, systemic, and deadly - and it is also subject to disinformation, elite capture, tragedies of the commons, and political resistance to making necessary adaptations. As a game researcher and designer, I feel my responsibility to help address these issues very keenly. In this talk, I will share some of the answers I am developing, both in the form of research agendas and in the form of games produced by my lab. This talk is very much a work in progress, and I hope that it sparks a lively discussion.
Bio:
Jessica Hammer is the Thomas and Lydia Moran Associate Professor of Learning Science, jointly appointed in the Human Computer Interaction Institute and the Entertainment Technology Center at Carnegie Mellon University. Her research focuses on transformational games, which change how players think, feel, or behave. She has been named a World Economic Forum Young Scientist, received an Okawa Award, and participated in Project Horseshoe. Her work has been supported by the NSF, the Heinz Foundation, Google, Amazon, Bosch, and Philips Health, among others. She won Carnegie Mellon’s Teaching Innovation Award in 2018. She is also an award-winning game designer.
Using Linearizable Objects in Randomized Concurrent Programs
Jennifer L. Welch, Texas A&M University
Wednesday, February 16, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
Atomic shared objects, whose operations take place instantaneously, are a powerful technique for designing complex concurrent programs. Since they are not always available, they are typically substituted with software implementations. A prominent condition relating these implementations to their atomic specifications is linearizability, which preserves safety properties of programs using them. However, linearizability does not preserve hyper-properties, which include probabilistic guarantees of randomized programs. A more restrictive property, strong linearizability, does preserve hyper-properties but is impossible to achieve in many situations. In particular, we show that there are no strongly linearizable implementations of multi-writer registers or snapshot objects in message-passing systems. On the other hand, we show that a wide class of linearizable implementations, including well-known ones for registers and snapshots, can be modified to approximate the probabilistic guarantees of randomized programs when using atomic objects.
Bio:
Jennifer L. Welch received her S.M. and Ph.D. from the Massachusetts Institute of Technology and her B.A. from the University of Texas at Austin. She is currently holder of the Chevron II Professorship and Regents Professorship in the Department of Computer Science and Engineering at Texas A&M University, and is an ACM Distinguished Member. Her research interests are in the theory of distributed computing, especially dynamic networks and distributed data structures.
The Science and Art of Conveying Information through Touch
Hong Tan
Wednesday, February 9, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
In this presentation, I will describe two haptics research projects being conducted in the Haptic Interface Research Lab at Purdue University. The first project, TAPS, is concerned with the development of a haptic display for speech communication. We have long known, from research on the natural methods of tactual speech communication used by individuals with severe hearing and/or visual impairments, that speech communication through the skin is possible. Despite a long history of research, however, the development of sensory-substitution devices to support the communication of speech has proven to be a difficult task. We have now demonstrated that up to 500 English words can be learned at a rate of 1 word per minute on a TActile Phonemic Sleeve (TAPS) worn on the forearm. The device consists of 24 tactors under independent control for stimulation at the forearm. Key insights in designing distinctive haptic symbols, mapping all 39 English phonemes to the symbols, and training learners to recognize phonemes and words will be described. The second project, palmScape, is a tactile display for the palm that delivers calm and pleasant vibrotactile patterns evocative of familiar natural phenomena. I will first describe the palmScape display and then show results from an affective rating study that confirm that the sensations evoked by palmScape are visceral and delightful. While the first project focuses on the science of the information transmission capacity of the sense of touch, the second explores the art of hand-crafting calm and delightful tactile experiences. The significance of our work goes beyond tactile speech communication and sensory substitution. Imagine a world where touch serves as an additional or alternative channel of communication for people with all levels of sensory capabilities, and tactile emojis bring a smile to everyone’s face!
Bio:
Hong Z. Tan is a Professor of Electrical and Computer Engineering in the College of Engineering at Purdue University, with courtesy appointments in the School of Mechanical Engineering and the Department of Psychological Sciences. She directs the Haptic Interface Research Lab that investigates the science and technology of displaying information through the sense of touch, taking a perception-based approach to solving engineering problems. Tan received her Bachelor's degree in Biomedical Engineering from Shanghai Jiao Tong University and earned her Master's and Doctorate degrees, both in Electrical Engineering and Computer Science, from the Massachusetts Institute of Technology (MIT). She was a Research Scientist at the MIT Media Lab before joining the faculty at Purdue University in 1998. Tan has held a McDonnell Visiting Fellowship at Oxford University, a Visiting Associate Professorship in the Department of Computer Science at Stanford University, a Guest Researcher position in the Institute of Life Science and Technology at Shanghai Jiao Tong University, a Senior Researcher and Research Manager position at Microsoft Research Asia, and a Professorship at Beijing Normal University Faculty of Psychology. She was a recipient of the prestigious US National Science Foundation CAREER award and of the Chinese National Natural Science Fund's Distinguished (Overseas) Young Scholar award. In addition to serving on numerous program committees, Tan was a co-organizer of the Haptics Symposium from 2003 to 2005, served as the founding chair of the IEEE Technical Committee on Haptics from 2006 to 2008, and co-chaired the World Haptics Conference in 2015. She served two terms as an Associate Editor of the IEEE Transactions on Haptics, and was the Editor-in-Chief of the World Haptics Conference Editorial Board from 2012 to 2015. Tan was elevated to IEEE Fellow in 2017 for her contributions to wearable haptics.
She is currently on leave from Purdue University, working as a Lead Haptics Scientist at Google LLC.
Application of Secure Multiparty Protocols in Real World Products
Mahnush Movahedi
Wednesday, February 2, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
Today, distributed systems have become so large that they require highly scalable algorithms; algorithms that have asymptotically small communication, computation, and latency costs. In this talk we look at two interesting applications of secure distributed algorithms in the real world.
First, we discuss designing and deploying a privacy-preserving Randomized Controlled Trials (RCT) protocol. When feasible, RCTs give the strongest and most trustworthy empirical measures of causal effects. However, the most important settings often involve the most sensitive data, and therefore raise privacy concerns. In this talk, we outline a way to deploy an end-to-end privacy-preserving protocol for learning causal effects from RCTs. Moreover, we show how such a protocol can be scaled to 500 million rows of data and more than a billion gates. We accomplish this with a three-stage solution, interconnecting and blending three privacy technologies--private set intersection, multiparty computation, and differential privacy--to address core points of privacy leakage: at the join, at the point of computation, and at the release, respectively. We additionally demonstrate how we have used this to create a working ads effectiveness measurement product in the real world that is capable of measuring hundreds of millions of individuals per experiment.
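As background for the multiparty-computation stage, a toy sketch of additive secret sharing (illustrative only, not the protocol deployed in the product described above) shows the core idea: each party's input is split into random shares, computation proceeds on shares, and only the final result is reconstructed.

```python
import random

MODULUS = 2**61 - 1  # a prime; arithmetic on shares is done mod this

def share(secret, n_parties, rng=random):
    """Split a secret into n additive shares: any n-1 shares look
    uniformly random, but all n shares sum to the secret mod MODULUS."""
    shares = [rng.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    return sum(shares) % MODULUS

# Two inputs can be summed without either being revealed: share each,
# add the shares component-wise, and reconstruct only the total.
a_shares = share(42, 3)
b_shares = share(100, 3)
sum_shares = [(x + y) % MODULUS for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```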
Next, we investigate the Byzantine agreement (BA) problem and study the early protocols for solving this problem and achieving consensus. We will study lower bounds on BA protocols in different network and adversarial models to build background knowledge of what is and is not possible to achieve. Then, we study the importance of randomness in designing a BA protocol. We learn how to create global randomness from proof of work and solve the consensus problem in the synchronous model using a blockchain. Next, we explore a new technique that combines classical BA protocols with blockchain to address one of the most challenging problems in the consensus domain: scalability. We learn what a Verifiable Random Function (VRF) is and how we can use VRFs to create a faster blockchain protocol.
Bio:
Mahnush Movahedi is a research scientist at Facebook helping the team to design scalable and efficient MPC platforms. Before that she was at Dfinity working on blockchain, scalable and fault-tolerant distributed algorithms for consensus and secure computation. She is also interested in game theory, secret sharing and developing interactive communication protocols that can tolerate noisy communication channels. Before joining Dfinity, Mahnush was a Postdoctoral Associate in Computer Science at Yale. She earned a PhD degree and an MS degree in Computer Science from the University of New Mexico, USA under the supervision of Professor Jared Saia.
Reconsidering Technology through the Lens of Weaving
Laura Devendorf
Wednesday, January 26, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
This talk will present a speculation rooted in my experience weaving electronics and developing software for weaving electronics. I will introduce the basics of woven structure in terms of its mechanical properties as well as methods by which it is designed and manipulated. I will also present some of the exciting opportunities for design and interaction when we consider weaving as a method of electronics production: such as the ability for textile structures to unravel, to be mended, and to be continually modified. Each of these underlying discussions will frame a provocation about alternative ways we might build, use, and unbuild our electronic products.
Bio:
Laura Devendorf, assistant professor of information science with the ATLAS Institute, is an artist and technologist working predominantly in human-computer interaction and design research. She designs and develops systems that embody alternative visions for human-machine relations within creative practice. Her recent work focuses on smart textiles—a project that interweaves the production of computational design tools with cultural reflections on gendered forms of labor and visions for how wearable technology could shape how we perceive lived environments. Laura directs the Unstable Design Lab. She earned bachelor's degrees in studio art and computer science from the University of California Santa Barbara before earning her PhD at the UC Berkeley School of Information. She has worked in the fields of sustainable fashion, design, and engineering. Her research has been funded by the National Science Foundation, has been featured on National Public Radio, and has received multiple best paper awards at top conferences in the field of human-computer interaction.
Algorithmic Programmable Matter
Andrea Richa
Wednesday, January 19, 2022, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/96675948342
Passcode: 130697
Abstract:
Many programmable matter systems have been developed, including modular and swarm robotics, synthetic biology, DNA tiling, and smart materials. We describe programmable matter as an abstract collection of simple computational elements (particles) with limited memory that each execute distributed, local algorithms to self-organize and solve system-wide problems such as movement, configuration, and coordination. Self-organizing particle systems (SOPS) have many interesting applications like coating objects for monitoring and repair purposes, and forming nano-scale devices for surgery and molecular-scale electronic structures. We describe some of our work on establishing the algorithmic foundations of programmable matter, investigating how macro-scale system behaviors can naturally emerge from local micro-behaviors by individual particles. In particular, we utilize tools from statistical physics and Markov chain analysis to translate Markov chains defined at a system level into distributed, local algorithms for SOPS that drive the desired emergent collective behavior. We further establish the notion of algorithmic matter, where we leverage standard binary computation, as well as physical characteristics of the robots and interactions with the environment in order to implement our micro-level algorithms in actual testbeds composed of robots that are not capable of any standard computation. We conclude by briefly addressing full concurrency and asynchrony in SOPS.
Bio:
Andrea W. Richa is a Professor of Computer Science and Engineering at Arizona State University. Her main areas of expertise are in distributed and network algorithms and computing in general. More recently, she has focused on developing the algorithmic foundations on what has been coined as programmable matter, through her work on self-organizing particle systems (SOPS). Her work has been widely cited, and includes, besides SOPS, work on bio-inspired distributed algorithms, distributed load balancing, packet routing, wireless network modeling and topology control, wireless jamming, data mule networks, underwater optical networking, and distributed hash tables (DHTs). She received the 2021 ASU Faculty Women Association Outstanding Mentor Award and the 2017 Best Senior Researcher award from the School of Computing and Augmented Intelligence (SCAI). She was the recipient of an NSF CAREER Award in 1999, an associate editor of IEEE Transactions on Mobile Computing, and the keynote speaker and program and general chair of several prestigious conferences. She has also delivered several invited talks both nationally and internationally.
Advanced Data Structures for Monitoring Cyber Streams
Cynthia Phillips
Wednesday, December 1, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
This talk will describe data structures/data-management algorithms for monitoring high-speed cyber streams. We wish to identify specific patterns that arrive slowly over time, hidden among high-speed streams of normal traffic. To find such patterns, we must store as much data as possible, to avoid losing partial patterns before the final piece arrives. We must recognize a completed pattern and report it as soon as possible while keeping up with the fast stream of arrivals. To store more stream history, we must carefully manage the movement of data between main (fast) storage and secondary (slower) storage.
We describe the problem, which is a variant of heavy hitters. We present the data structures and algorithms, theoretical analysis results and intuition without detailed proof, and experimental results.
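For readers unfamiliar with heavy hitters, the classic Misra-Gries summary (shown here as background, not as the data structure used in the work described above) illustrates the flavor of the problem: with only k-1 counters of memory, any item occupying more than a 1/k fraction of the stream is guaranteed to survive in the summary.

```python
def misra_gries(stream, k):
    """Classic heavy-hitters summary: keep at most k-1 counters.
    Any item occurring more than len(stream)/k times survives."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # No room: decrement every counter, dropping zeros.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a"]
summary = misra_gries(stream, k=3)
# "a" occurs 5 > 9/3 times, so it is guaranteed to be in the summary.
```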
This is joint work with many colleagues at Sandia National Laboratories and university collaborators.
Bio:
Cynthia Phillips is a senior scientist at Sandia National Laboratories. She received a B.A. in applied mathematics from Harvard University and a PhD in computer science from MIT. She has historically worked in combinatorial optimization, algorithm design and analysis, and parallel computation with elements of operations research. Her work has spanned theory, general solver development, and applications. Most recently, she has worked in "big data" areas, including streaming and complex (social) network analysis; co-design of algorithms and architectures for extreme-scale future machines; algorithms for emerging architectures such as neuromorphic computers; cybersecurity; and infrastructure security. She is a fellow of the Society for Industrial and Applied Mathematics (SIAM) and a Distinguished Scientist of the Association for Computing Machinery (ACM).
Machine Learning, Networking, and Computer Science Education
R. Benjamin Shapiro
Wednesday, November 17, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
Today’s world is full of technologies that leverage machine learning (ML) and networking. Young people see and use many of them every day, including voice assistants like Alexa and Siri, messaging applications, multiplayer games, video filters, and even autonomous vehicles. But the ways we teach computing to young people largely ignore these technologies and, as such, do not enable their creative agency and critical capacity with respect to them. In this talk I will describe work my research group has done, together with educator and researcher partners, to create new tools for youth to apply and learn about ML and networking within creative projects, and to study learning with these systems. Our work illustrates that nominally advanced topics in computing (like ML) can, in fact, be part of introductory computing, and that integrating them into beginning computing courses and programs can enable young people to deeply leverage their prior knowledge about dance, athletics, and music.
Bio:
R. Benjamin Shapiro is an Assistant Professor in the Department of Computer Science at the University of Colorado Boulder. He is also faculty, by courtesy, in Learning Sciences and Human Development (School of Education) and Information Science (College of Media, Communication, and Information). He leads the Education team in Apple’s AI/ML organization, which both conducts research and development of new technologies and partnerships for ML education and supports learning about AI and ML by Apple employees worldwide. He holds a B.A. in Independent Studies from the University of California San Diego, and a PhD in Learning Sciences from Northwestern University. He was a postdoc in the Wisconsin Institutes for Discovery, at the University of Wisconsin Madison.
Reconstructing Random Geometric Graphs
Varsha Dani
Wednesday, November 10, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
A unit-disk graph is obtained by taking a finite collection of points in the plane as vertex set, and putting an edge between any two vertices whose Euclidean distance is at most one. The reconstruction problem for such a graph asks, given the adjacency matrix of the graph as input, to approximately recover the coordinates of each vertex, up to symmetries. How accurately can this be done? I will present some recent progress on this problem under the additional assumption that the collection of points is chosen at random from a compact convex region of the plane. I'll also briefly discuss how to extend the same ideas to higher dimensions and arbitrary manifolds of bounded curvature.
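A minimal sketch of the setup above (purely illustrative; the reconstruction algorithms themselves are the subject of the talk): sample random points, build the unit-disk adjacency matrix, and the reconstruction problem asks to recover the points from the matrix alone.

```python
import math
import random

def random_unit_disk_graph(n, width=5.0, height=5.0, seed=0):
    """Sample n points uniformly from a rectangle and build the
    unit-disk adjacency matrix: edge iff Euclidean distance <= 1."""
    rng = random.Random(seed)
    points = [(rng.uniform(0, width), rng.uniform(0, height))
              for _ in range(n)]
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= 1.0:
                adj[i][j] = adj[j][i] = 1
    return points, adj

# The reconstruction problem: given only `adj`, approximately recover
# `points`, up to rotation, reflection, and translation.
points, adj = random_unit_disk_graph(50)
```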
Bio:
Varsha Dani is an Assistant Professor at the Rochester Institute of Technology, where she pretends to be a Computer Scientist despite being a Mathematician at heart. After getting a Ph.D. in Computer Science from the University of Chicago, she spent many years in Albuquerque, dabbling in many things, including parenting, travel and hiking, edu-tainment, and, of course, mathematics ...er theoretical computer science. In her spare time she likes to... wait, what spare time?
Creative Learning through Expressive Making
HyunJoo Oh
Wednesday, November 03, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
Tools shape the way we think, make, and learn. As a designer and design researcher, I build tools that integrate everyday craft materials with computing, and study how the tools can engage and support other designers in investigating new expressive and technical possibilities. I employ the familiarity and accessibility of everyday tools and materials to empower a broad range of designers, from professional designers and hobbyist makers to K-12 educators and students, to expand their capabilities for creative learning through making. In this talk, I’ll present my recent projects: developing a kit of materials for inclusive computing education and techniques of DIY sensing technology for design prototyping.
Bio:
HyunJoo Oh is an Assistant Professor with a joint appointment in the School of Industrial Design and the School of Interactive Computing at Georgia Tech, where she directs the CoDe Craft group (www.codecraft.group/). Her team investigates how computing technologies can extend and transform everyday craft materials and how these integrations can broaden creative possibilities for designers. She received her PhD in Technology, Media, and Society from University of Colorado Boulder and Master’s degrees in Entertainment Technology from Carnegie Mellon University and Media Interaction Design from Ewha Womans University.
Distributed connectivity and the k-out random graph conjecture
Valerie King
Wednesday, October 27, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
We consider the following problem. Each node in a graph has a distinct ID and knows only the IDs of its neighbors. Suppose each node can send one message to a referee who must determine the graph’s connected components. The graph sketching technique described by Ahn, Guha and McGregor in 2012 gives a method which requires only O(log^3 n) bits to be sent by each node to compute the solution with high probability, and this is tight, according to a recent result of Nelson and Yu. However, this method requires public randomness.
We began by investigating the one-way communication cost of this problem when there is private randomness, and ended up proving a surprising lemma about sampling in graphs and connectivity. This is joint work with Jacob Holm, Mikkel Thorup, Or Zamir, and Uri Zwick which appeared in FOCS 2019.
Bio:
Valerie King is an American and Canadian computer scientist who works as a professor at the University of Victoria. Her research concerns the design and analysis of algorithms; her work has included results on maximum flow and dynamic graph algorithms, and played a role in the expected linear time MST algorithm of Karger et al. King graduated from Princeton University in 1977. She earned a law degree (Juris Doctor) from the University of California, Berkeley in 1983, and became a member of the State Bar of California, but returned to Berkeley and earned a Ph.D. in computer science in 1988 under the supervision of Richard Karp with a dissertation concerning the Aanderaa–Karp–Rosenberg conjecture.
She became a Fellow of the Association for Computing Machinery in 2014.
Checking In With All Stakeholders: Reflecting on Research Methods While Designing an Electronic Toolkit with Older Adult Crafters
Katie Siek
Wednesday, October 20, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
Computing researchers develop technology for older adults – from innovating how to make commodity technology more accessible to designing aging in place systems for various stakeholders. Older adults are often consulted by research teams, but not integrally involved in the design process. The research teams themselves are often small with internal epistemologies that impact how research is conducted and analyzed.
In this talk, I reflect on how we iteratively developed an electronic toolkit with older adult crafters while collaborating with outside research teams. During our in-person and remote design workshops, we had to check in with older adult advisors to better understand how we could organize mutually beneficial design workshops that would also help us understand how to scaffold activities to build up their electronics knowledge. Likewise, we checked in with other research teams who integrated their own research approaches into our design process. These collaborations made us aware of new empirical understandings about our methods and how we studied older adults. I conclude with considerations researchers should take when designing for older adults and how to check in on their own research practices.
Bio:
Katie Siek is a professor and chair of Informatics at Indiana University Bloomington. Her primary research interests are in human computer interaction, health informatics, and ubiquitous computing. More specifically, she is interested in how sociotechnical interventions affect personal health and well being. Her research is supported by the National Institutes of Health, the Robert Wood Johnson Foundation, and the National Science Foundation including a five-year NSF CAREER award. She has been awarded an NCWIT Undergraduate Research Mentoring Award (2019), a CRA-W Borg Early Career Award (2012), and Scottish Informatics and Computer Science Alliance Distinguished Visiting Fellowships (2010 & 2015). Prior to returning to her alma mater, she was a professor for 7 years at the University of Colorado Boulder. She earned her PhD and MS at Indiana University Bloomington in computer science and her BS in computer science at Eckerd College. She was a National Physical Science Consortium Fellow at Indiana University and a Ford Apprentice Scholar at Eckerd College.
Theory and Performance of Backoff Algorithms
Maxwell Young
Wednesday, October 13, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
Binary exponential backoff (BEB) is a decades-old algorithm for coordinating access to a common system resource, such as a shared communication channel. In modern networks, BEB plays a crucial role in WiFi and other wireless standards. Despite this track record, well-known theoretical results indicate that under bursty traffic, BEB has poor performance, and superior algorithms exist.
We investigate a challenging case for BEB: a single burst of packets that simultaneously contend for access on a wireless channel. Using Network Simulator 3, we incorporate into IEEE 802.11g several newer algorithms that have theoretically superior performance guarantees. Surprisingly, we discover that these newer algorithms underperform BEB.
Investigating further, we identify as the culprit a common abstraction regarding the performance impact of collisions; that is, when two or more devices send at the same time, resulting in failed communication. Our experimental results are complemented by analytical arguments that the number of collisions is an important metric to optimize. We propose a new theoretical model that accounts for the cost of collisions, and we derive new asymptotic bounds on the performance of BEB and some newer backoff algorithms.
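For readers unfamiliar with BEB, here is a minimal sketch of the classic algorithm on an idealized slotted channel; it illustrates the textbook scheme only, not the talk's ns-3 experiments or collision cost model. After every collision, each involved device doubles its contention window and retries in a uniformly random upcoming slot.

```python
import random

def beb_simulate(num_devices, max_window=1024, seed=0):
    """Simulate a single burst of devices contending under binary
    exponential backoff on an idealized slotted channel.
    Returns (slots elapsed, number of collision slots)."""
    rng = random.Random(seed)
    windows = [1] * num_devices      # current contention window per device
    next_slot = [0] * num_devices    # slot each device will transmit in
    done = [False] * num_devices
    slot, collisions = 0, 0
    while not all(done):
        senders = [i for i in range(num_devices)
                   if not done[i] and next_slot[i] == slot]
        if len(senders) == 1:        # exactly one sender: success
            done[senders[0]] = True
        elif len(senders) > 1:       # two or more senders: collision
            collisions += 1
            for i in senders:        # double window, pick a new random slot
                windows[i] = min(windows[i] * 2, max_window)
                next_slot[i] = slot + 1 + rng.randrange(windows[i])
        slot += 1
    return slot, collisions
```

Counting the collision slots, as this toy model does, reflects the talk's point that the number of collisions is itself an important metric to optimize.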
Bio:
Maxwell Young is an Assistant Professor at Mississippi State University. Previously, he completed his MS under the supervision of Jared Saia, received his PhD at the University of Waterloo, Canada, and did postdocs at the National University of Singapore and the University of Michigan, Ann Arbor. His work focuses on algorithm design and analysis for large-scale networks. Outside of research, Max fights a losing battle to civilize his children, and he occasionally makes fountain pens.
Expanding Digital Boundaries in Physical Space
John-Mark Collins
Wednesday, October 6, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
Our world is becoming more and more infused with digital pieces. What would this look like if we artists and creatives built it together, rather than allowing the large tech companies to guide the deployment? How can we start in our galleries and facilities in a way that will empower the new digital reality to be more human-focused?
Electric Playhouse is an all-ages dining, gaming, and recreation wonderland that requires no goggles or equipment to transport you to another reality. The immersive projection-based play arena has 18 interactive areas that change constantly. Electric Playhouse is located in Albuquerque, NM.
Bio:
John-Mark is the co-founder and creative director of Electric Playhouse. He is a creative problem solver whose passion lies in using technology to augment the real world in beautiful and engaging ways. John-Mark received his BS in Computer Engineering and MBA from the University of New Mexico. In addition to technology, John-Mark has studied both Art and Architecture and has participated in several public electronic arts exhibitions. He has worked with a variety of clients, including Coca-Cola, Starbucks, HP, Intel, and several Smithsonian Institution museums, as well as Fermi, SLAC, and Sandia National Laboratories. The drive behind Electric Playhouse stems from the wonder and awe shown on a daily basis by his two young daughters, Lola and Mila.
Designing the Hybrid Body
Cindy Hsin-Liu Kao
Wednesday, September 29, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
Sensor device miniaturization and breakthroughs in novel materials are allowing for the placement of technology increasingly close to our physical bodies. However, unlike all other media, the human body is not simply another surface for enhancement - it is the substance of life, one that encompasses the complexity of individual and social identity. The human body is inseparable from the cultural, the social, and the political, yet technologies for placement on the body have often been developed separately from these considerations, with an emphasis on engineering breakthroughs. The Hybrid Body Lab investigates opportunities for cultural interventions in the development of technologies that move beyond wearable clothing and accessories, and that are purposefully designed to be placed directly on the skin surface. By hybridizing miniaturized robotics, machines, and materials with cultural body decoration practices, the Hybrid Body Lab investigates how technology can be situated as a culturally meaningful material for crafting our identities. Through these hybrid body interfaces, we investigate opportunities for designing new modes of self-expression, and also ways to interact with others and our surrounding environments.
Bio:
Cindy Hsin-Liu Kao is an Assistant Professor in the College of Human Ecology and graduate field faculty in Information Science and Electrical and Computer Engineering at Cornell University, where she founded and directs the Hybrid Body Lab. Her research practice, themed Hybrid Body Craft, blends aesthetic and cultural perspectives into the design of on-body interfaces. She also creates novel processes for crafting technology close to the body. Her research has been presented at leading computer science conferences and journals (ACM CHI, UbiComp/ISWC, TEI, UIST, IEEE Pervasive Computing) while receiving media coverage by CNN, TIME, Forbes, Fast Company, and WIRED, among others. Her work has been exhibited and shown internationally at the Pompidou Centre, the Boston Museum of Fine Arts, Ars Electronica, and New York Fashion Week. Her awards include the NSF CAREER Award and several Honorable Mention/Best Paper Awards at top computer science conferences (ACM CHI, UIST, DIS, and ISWC). The design community has also recognized her lab's work with a Fast Company Innovation by Design Award finalist selection, an Ars Electronica STARTS Prize nomination, and the SXSW Interactive Innovation Award. Dr. Kao holds a Ph.D. from the MIT Media Lab, along with a Master's degree in Computer Science and two Bachelor's degrees in Computer Science and in Technology Management, all from National Taiwan University.
On Power of Choice for k-colorability
Diksha Gupta
Wednesday, September 22, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
In studying the evolution of a random graph, one starts from an empty graph and adds edges one by one, generating each one independently and uniformly at random. An object of interest associated with the resultant graph is the size of the edge set at which some property changes. For instance, if we are interested in the k-colorability of the random graph, there will eventually be an edge that renders it non-k-colorable. Earlier work on the “power of choice” to affect the outcome of random processes has investigated questions like load-balancing in balls and bins models, scheduling, routing, and many more. When applied to the random graph process, it results in the r-choice Achlioptas process, wherein random edges are generated r at a time, and an online strategy selects one for inclusion in a graph. In this talk, we investigate the problem of whether such a selection strategy can shift the k-colorability transition; that is, the number of edges at which the graph goes from being k-colorable to non-k-colorable.
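To make the r-choice Achlioptas process concrete, here is a small sketch for r = 2. The online strategy shown is the well-known product rule (keep the edge minimizing the product of its endpoint component sizes), a standard illustrative choice for delaying phase transitions; it is not necessarily the selection strategy analyzed in the talk, and the function name is invented.

```python
import random

def achlioptas_process(n, num_steps, seed=0):
    """Two-choice Achlioptas process on n vertices: at each step two
    uniformly random edges are offered, and an online rule keeps one.
    Component sizes are tracked with union-find."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(x):                     # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    kept = []
    for _ in range(num_steps):
        e1 = (rng.randrange(n), rng.randrange(n))
        e2 = (rng.randrange(n), rng.randrange(n))
        # product rule: prefer the edge joining smaller components
        score = lambda e: size[find(e[0])] * size[find(e[1])]
        e = e1 if score(e1) <= score(e2) else e2
        kept.append(e)
        ra, rb = find(e[0]), find(e[1])
        if ra != rb:                 # merge the two components
            parent[ra] = rb
            size[rb] += size[ra]
    return kept
```

A strategy aimed at the k-colorability transition would use a different score, but the overall "generate two, keep one" structure is the same.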
Bio:
Dr. Diksha Gupta is a Research Fellow at the National University of Singapore under Prof. Seth Gilbert. She obtained a Ph.D. in Computer Science from the University of New Mexico, USA, under the advisement of Prof. Jared Saia in Fall 2020. She holds an M.S. in Computer Science from UNM and an M.Tech. in Computer Science and Engineering from IIT Roorkee, India. Her current research focuses on designing provably secure, scalable, and efficient protocols for distributed systems. In general, topics related to Randomized Algorithms, Distributed Graph Algorithms, Algorithmic Game Theory, and Biologically-inspired Algorithms pique her interest. In her free time, she likes solving mathematical puzzles, reading science fiction, and traveling. As a grad student at UNM, she served in various student and departmental organizations – the UNM CS Advisory Board (Graduate Representative), the Computer Science Graduate Student Association (President), and UNM Women in Computing (Vice President). She will be joining IBM Innovations Lab Singapore as a Research Scientist in November this year.
Theatre of the Car
Wendy Ju
Wednesday, September 15, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
The advent of autonomous vehicles is both exciting and alarming. The success or failure of such systems will very much depend on the driver-vehicle interaction: whether people have a good assessment of what the car perceives and is likely to do, and how they might respond to different situations. In my research lab, we are looking at how people will interact with the cars and robots of tomorrow. By using simulation technologies and design techniques, we can prototype and test interfaces with real people to understand how best to design our future interactions with automation.
Bio:
Wendy Ju is an Associate Professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion and in the Information Science field at Cornell University. Her work in the areas of human-robot interaction and automated vehicle interfaces highlights the ways that interactive devices can be designed to be safer, more predictable, and more socially appropriate. Professor Ju has innovated numerous methods for early-stage prototyping of automated systems to understand how people will respond to systems before the systems are built. She has a PhD in Mechanical Engineering from Stanford, and a Master’s in Media Arts and Sciences from MIT.
A Class of Trees Having Near-Best Balance: a Competitor to Divide-and-Conquer
Laura Monroe
Wednesday, September 8, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
Full binary trees naturally represent commutative non-associative products. There are many important examples of these products: finite-precision floating-point addition and NAND gates, among others. Balance in such a tree is highly desirable for efficiency in calculation. The best balance is attained with a divide-and-conquer approach. However, this may not be the optimal solution, since the success of many calculations is dependent on the grouping and ordering of the calculation, for reasons ranging from the avoidance of rounding error, to calculating with varying precision, to the placement of calculation within a heterogeneous system.
We introduce a new class of computational trees having near-best balance in terms of the Colless index from mathematical phylogenetics. These trees are easily constructed from the binary decomposition of the number of terms in the problem. They also permit much more flexibility than the optimally balanced divide-and-conquer trees. This gives needed freedom in the grouping and ordering of calculation and allows intelligent efficiency trade-offs.
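One plausible reading of "constructed from the binary decomposition of the number of terms" can be sketched as follows: split the n terms into blocks whose sizes are the powers of two in the binary expansion of n, build a perfectly balanced subtree on each block, and fold the blocks together. This is an illustration of the idea only, not necessarily the paper's exact construction, and the function name is invented.

```python
def balanced_product_tree(terms):
    """Group a commutative non-associative product into a near-balanced
    full binary tree, represented as nested 2-tuples. Blocks of
    power-of-two size (from the binary expansion of len(terms)) each
    become a perfectly balanced subtree; blocks are then combined."""
    def complete(ts):                # perfectly balanced tree on 2^k terms
        if len(ts) == 1:
            return ts[0]
        mid = len(ts) // 2
        return (complete(ts[:mid]), complete(ts[mid:]))

    n = len(terms)
    blocks, start = [], 0
    bit = 1 << (n.bit_length() - 1)
    while bit:                       # take blocks from the high bit down
        if n & bit:
            blocks.append(complete(terms[start:start + bit]))
            start += bit
        bit >>= 1
    tree = blocks[0]
    for b in blocks[1:]:             # fold the remaining blocks together
        tree = (tree, b)
    return tree
```

For example, five terms decompose as 4 + 1, giving a balanced subtree on the first four terms combined with the fifth, rather than the strict halving a divide-and-conquer grouping would force.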
Bio:
Laura Monroe (HPC-DES) is a research scientist at Los Alamos National Laboratory. She received her Ph.D. in Mathematics and Computer Science from the University of Illinois at Chicago, where she studied the theory of error-correcting codes. She worked at NASA Glenn following graduation and joined LANL in 2000.
She was formerly the project leader for the laboratory’s Production Visualization project, where her last effort was leading the rebuild of LANL’s state-of-the-art visualization corridor, including large-scale visualization systems, large virtual-reality theaters, and networking and systems for desktop visualization. She now works at LANL’s Ultrascale Systems Research Center in the field of novel computing, in particular probabilistic computing for high-performance applications. She has published in the fields of probabilistic computing, resilience, error-correcting codes, combinatorics, and visualization, and her interests are in those fields as well as in the mathematical bridge between the computer as physical object and as ideal system.
She has received several Defense Program Awards of Excellence and several LANL Distinguished Performance awards, both as team leader and team member, and received an R&D 100 award in 2006 as part of the PixelVizion team. She was named one of the 2019 NM Technology Council Women in Technology awardees.
Dynamical system models for politics and voting
Vicky Chuqiao Yang, Santa Fe Institute
Wednesday, September 1, 2021, 2:00 PM
Join via Zoom: https://unm.zoom.us/j/95844274177
Passcode: 856723
Abstract:
The recent US political landscape raises many puzzling questions. For example, the two major parties have become increasingly polarized since the 1960s, while most voters have maintained moderate policy positions. What can lead to this disconnect between the parties and the voters? Also, a sizable proportion, often the majority, of the voting population is uninformed about facts relevant to their voting decisions, such as the policies proposed by the candidates. Can such a voting body deliver good collective decisions? In this talk, I will summarize research projects that address these complex issues. These projects leverage dynamical-systems models, recent findings in psychology, and data analysis. This approach takes into account the impact of multiple, complex, and often non-linear factors, and aims to give a coherent understanding of complex social phenomena.
Bio:
Vicky Chuqiao Yang is a fellow at the Santa Fe Institute. Her research uses mathematical tools to understand complex phenomena of human society. She wants to understand both humans' collective smarts and their collective stupidity. Her recent applications of interest are urban areas and collective decision-making. Her approach involves two aspects: building mathematical models informed by psychological and social principles of human behavior, and using real-world datasets to inform and confront these models. Vicky received a Ph.D. in Applied Mathematics from Northwestern University.
Adding Fast GPU Derived Datatype Handling to Existing MPIs
Carl Pearson, PhD, Sandia National Labs
Wednesday, May 5, 2021, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
MPI derived datatypes are an abstraction that simplifies handling of non-contiguous data in MPI applications. These datatypes are recursively constructed at runtime from primitive Named Types defined in the MPI standard.
More recently, the development and deployment of CUDA-aware MPI implementations has encouraged the transition of distributed high-performance MPI codes to use GPUs. Such implementations allow MPI functions to directly operate on GPU buffers, easing integration of GPU compute into MPI codes.
This talk presents a novel datatype handling strategy for nested strided datatypes on GPUs, and its evaluation on a leadership-class supercomputer that does not have built-in support for such datatypes. It focuses on the datatype strategy itself, implementation decisions based off measured system performance, and a technique for experimental modifications to closed software systems.
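To make the datatype abstraction concrete, the core operation a datatype engine performs is gathering a strided layout into a contiguous buffer, as an MPI vector type describes. The following is a minimal CPU-side sketch with invented names; a GPU implementation would perform the same gather with, say, one thread per element, and nesting such types yields the nested strided layouts the talk addresses.

```python
def pack_strided(buf, count, blocklen, stride, offset=0):
    """Pack a strided layout -- 'count' blocks of 'blocklen' contiguous
    elements, with consecutive block starts separated by 'stride'
    elements -- into a contiguous list. This mirrors what an MPI vector
    datatype (e.g., MPI_Type_vector) describes."""
    out = []
    for b in range(count):
        base = offset + b * stride
        out.extend(buf[base:base + blocklen])
    return out
```

For example, packing one column of a row-major 3x4 matrix is `count=3, blocklen=1, stride=4`.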
Bio:
Carl Pearson is a postdoctoral appointee at Sandia National Labs in the Center for Computing Research. There, he works in the Scalable Algorithms group on multi-GPU communication and leveraging specialized GPU computation hardware for generic computing tasks. He received his Ph.D. in Electrical and Computer Engineering from the University of Illinois.
Internet of Things-enabled Passive Contact Tracing in Smart Cities
Zeinab Akhavan, UNM CS
Wednesday, April 28, 2021, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
Contact tracing has been proven an essential practice during pandemic outbreaks and is a critical non-pharmaceutical intervention to reduce mortality rates. While traditional contact tracing approaches are gradually being replaced by peer-to-peer smartphone-based systems, the new applications tend to ignore the Internet-of-Things (IoT) ecosystem that is steadily growing in smart city environments. This work presents a contact tracing framework that logs smart space users’ co-existence using IoT devices as reference anchors. The design is non-intrusive as it relies on passive wireless interactions between each user’s carried equipment (e.g., smartphone, wearable, proximity card) with an IoT device by utilizing received signal strength indicators (RSSI). The proposed framework can log the identities for the interacting pair, their estimated distance, and the overlapping time duration. Also, we propose a machine learning-based infection risk classification method to characterize each interaction that relies on RSSI-based attributes and contact details. Finally, the proposed contact tracing framework’s performance is evaluated through a real-world case study of actual wireless interactions between users and IoT devices through Bluetooth Low Energy advertising. The results demonstrate the system’s capability to accurately capture contact between mobile users and assess their infection risk provided adequate model training over time.
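The RSSI-to-distance step in systems like this is commonly based on the log-distance path-loss model. The sketch below uses typical BLE constants chosen for illustration; the actual distance estimator and parameters used in this work are not specified here.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance in meters from a received signal strength
    reading using the log-distance path-loss model:
        RSSI = tx_power - 10 * n * log10(d)
    where tx_power is the expected RSSI at 1 m and n is the path-loss
    exponent (~2 in free space, higher indoors). Constants are typical
    BLE values, for illustration only."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

In practice such point estimates are noisy, which is one reason the framework classifies infection risk from RSSI-based attributes and contact details rather than from a single distance reading.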
Bio:
Zeinab is a PhD student in Computer Science at UNM. She served as UNM Women in Computing Vice President from 2018 to 2020 and helped CS undergraduate and graduate students build their network within the department. Zeinab’s research interests lie in applying machine learning and deep learning techniques to Internet of Things applications, such as smart buildings and cities.
Upgrading the Product Development Process to Foster ML Fairness and Ethical AI
Donald Martin, Google
Wednesday, April 21, 2021, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
Although technology and computer algorithms have significantly advanced health care in recent times, we’ve seen examples where faulty assumptions or biased training data have only served to increase health disparities. Recent research on algorithmic fairness has highlighted that the problem formulation phase of the development of systems that use machine learning can be a key source of bias and have significant downstream impacts on fairness outcomes. However, very little attention has been paid to methods for improving the fairness of this critical phase of machine learning system development. Current practice neither accounts for the dynamic complexity of high-stakes domains nor incorporates the perspectives of vulnerable stakeholders. This talk will explore the application of community-based system dynamics during the product development process in order to foster equitable and inclusive technology based on machine learning and artificial intelligence.
Bio:
Donald Martin, Jr. is currently Sr. Staff Technical Program Manager and Social Impact Technology Strategist at Google. He focuses on driving innovation in the spaces where Google's products and services intersect with society as well as understanding the intersections between Trust and Safety, Machine Learning (ML) Fairness and Ethical Artificial Intelligence (AI). He holds a Bachelor of Science degree in Electrical Engineering from the University of Colorado at Denver and founded its National Society of Black Engineers (NSBE) chapter. Donald has over 30 years of technology leadership experience in the telecommunications and information technology industries. He has held CIO, CTO, COO, VP of IT and product manager positions at global software development companies and telecommunications service providers. Donald holds a US utility patent for "problem modeling in resource optimization." His most recent publication is the Harvard Business Review article "AI Engineers Need to Think Beyond Engineering.”
Free Space Optics based Backhaul/Fronthaul Design for 5G and Beyond
Xiang Sun, PhD, UNM Department of Electrical and Computer Engineering
Wednesday, April 14, 2021, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
As 5G technologies are being rolled out, research efforts are concentrating on evolutionary solutions for the post-5G and 6G eras. In order to meet the requirements of increased capacity, reduced latency, and on-demand service for mobile networks, various technologies, such as free space optics (FSO) and drone-mounted base stations, have been proposed as enabling solutions for the next generation of mobile networks. In this talk, a tunable FSO system will be introduced to provide low-cost and high-speed backhaul/fronthaul links between geographically distributed base stations and the gateway, where the base stations communicate with the gateway via FSO links in a time-division multiplexing manner. In addition, we will illustrate an FSO-based, drone-assisted mobile network framework, where drone-mounted base stations are deployed over places of interest (such as hotspots and disaster-struck areas) to help base stations communicate with the mobile users in those places via FSO links. Some challenges and potential solutions for achieving the FSO-based, drone-assisted mobile network will also be presented.
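As a rough illustration of the time-division sharing described above, a gateway might divide each TDM frame's slots among base stations in proportion to their traffic demands. The scheduler below, including its largest-remainder rounding, is an assumption made for illustration, not the talk's design; demands are assumed positive.

```python
def tdm_schedule(demands, frame_slots):
    """Allocate the slots of one TDM frame among base stations in
    proportion to their (positive) traffic demands, using
    largest-remainder rounding so every slot is assigned."""
    total = sum(demands)
    shares = [d * frame_slots / total for d in demands]
    alloc = [int(s) for s in shares]           # floor of each fair share
    # hand leftover slots to the stations with the largest remainders
    remainders = sorted(range(len(demands)),
                        key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in remainders[:frame_slots - sum(alloc)]:
        alloc[i] += 1
    return alloc
```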
Bio:
Xiang Sun is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of New Mexico. He received the B.E. and M.E. degrees from the Hebei University of Engineering in 2008 and 2011, respectively, and the Ph.D. degree in electrical engineering from the New Jersey Institute of Technology (NJIT) in 2018. He has (co-)authored 44 technical publications, held one U.S. patent, and filed six U.S./PCT non-provisional patent applications. His research interests include mobile edge computing, wireless networks, distributed machine learning, and the Internet of Things. He has received several honors and awards, including the 2016 IEEE International Conference on Communications Best Paper Award, the 2017 IEEE Communications Letters Exemplary Reviewer Award, the 2018 NJIT Hashimoto Prize, the 2018 InterDigital Innovation Award on IoT Semantic Mashup, the 2019 NJIT Outstanding Doctoral Dissertation Award, and the 2019 IEICE Communications Society Best Tutorial Paper Award. He currently serves as an Associate Editor of Digital Communications and Networks and the IEEE Open Journal of the Computer Society.
Designing and Optimizing MPI for Next-generation Applications and Systems
Patrick Bridges, PhD, UNM Computer Science
Wednesday, April 7, 2021, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
The Message Passing Interface (MPI) has been the standard for building distributed scientific applications for nearly three decades, but both computer architectures and scientific applications have changed dramatically in that time, resulting in systems that are significantly more complex and difficult to program and optimize than those for which MPI was originally designed. Because of this, MPI has had to change, and must continue to change, taking on more responsibility for managing and optimizing the complex communication required to effectively leverage modern supercomputers. In this talk, I will describe the challenges faced by next-generation parallel communication systems and several research directions we are pursuing to address them: the communication performance of HPC applications on modern systems; how to model the performance of communication primitives so they can be successfully optimized; and how new MPI abstractions, such as neighbor collectives and persistent, partitioned communication, can or could improve the performance of higher-level communication patterns commonly used by HPC applications.
Bio:
Patrick Bridges is a Professor of Computer Science and the Director of the UNM Center for Advanced Research Computing. He received his B.S. in Computer Science from Mississippi State University in 1994 and his Ph.D. in Computer Science from the University of Arizona in 2002 and joined the faculty of UNM immediately thereafter. His research interests cover a wide range of topics related to operating systems and networking for large-scale high-performance computing systems, including virtualization, performance measurement and modeling, fault tolerance, and general system design issues.
Time-series data mining and machine learning techniques for seismic signal detection and classification
Mohammad Ashraf Siddiquee, PhD Candidate, UNM Computer Science
Wednesday, March 24, 2021, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
The current seismic data processing pipeline is surprisingly human-dependent. With the rapid increase in seismic-sensor data availability, manual data processing approaches fail to detect, classify, and analyze seismic activity within a reasonable amount of time. An automated, fast, and reliable seismic data processing pipeline is needed for meaningful analysis of massive seismic datasets. In this talk, we will show how advanced time-series data-mining and machine learning techniques can be leveraged to resolve this issue. We will focus in particular on seismic activity detection, classification, and inspection using our techniques, which help us better understand the surrounding earth structure and improve earthquake evaluation and seismic monitoring.
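For context, a classic baseline for the detection step is the short-term-average / long-term-average (STA/LTA) trigger, which flags windows where recent signal energy jumps above the background level. The sketch below illustrates that standard baseline only and is not the speaker's method.

```python
def sta_lta(signal, sta_len, lta_len, threshold):
    """Short-term-average / long-term-average event detector: flag each
    sample index where the ratio of recent energy (last sta_len samples)
    to background energy (last lta_len samples) meets the threshold."""
    energy = [x * x for x in signal]
    picks = []
    for i in range(lta_len, len(signal)):
        sta = sum(energy[i - sta_len:i]) / sta_len
        lta = sum(energy[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta >= threshold:
            picks.append(i)
    return picks
```

Hand-tuned detectors like this motivate the data-mining and learning approaches in the talk, which aim to generalize across stations and event types without per-site threshold tuning.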
Bio:
Ashraf is a PhD candidate in the UNM Computer Science Department. He works under the supervision of Dr. Abdullah Mueen, and his research revolves around designing and developing novel data mining and machine learning approaches that can be applied to large time-series (e.g., seismic sensor) datasets. For his research, he collaborates with Los Alamos National Laboratory and the Air Force Research Laboratory. He has interned at NEC Laboratories America, where he implemented a domain-agnostic anomaly detection system for univariate time series. In his free time, Ashraf goes outdoors for biking or fishing.
Financial Science is Computer Science
Donour Sizemore, PhD, Two Sigma Investments
Wednesday, March 10, 2021, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
In this talk, the speaker will survey the types of problems that finance companies solve, from the mundane to the novel. With this motivation, we introduce examples of the technical problems that investment managers see on a daily basis. Highlights will include automated decision making (modeling/forecasting), scalable data collection, and technical debt. No finance background is necessary.
Bio:
Donour is Vice President at Two Sigma Investments, where he has worked as a software engineer since 2014. He has worked on data science problems in a variety of industries and is a UNM Computer Science alumnus. Previously, he was Trackside Systems Director at Michael Waltrip Racing (2011-2013), Visiting Researcher at Sun (2009), and Research Computing Director at the Chicago Economic Research Center (2003-2005). Donour holds a Ph.D. in Computer Science from the University of New Mexico (2011) and a BS in Mathematics and Computer Science from the University of Chicago (2003).
From Robot Swarms to COVID-19: Interactions in Space Determine Temporal Dynamics
Melanie Moses, PhD, The University of New Mexico Department of Computer Science
Wednesday, February 24, 2021, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
In complex systems, high-level patterns emerge from small-scale interactions. The nature of those interactions depends on where agents are located in physical space. In this talk I highlight the importance of understanding small-scale interactions to predict the behavior of robot swarms and disease dynamics. The talk reviews our bio-inspired algorithms for robot foraging, the VolCAN swarm of volcano-monitoring robots, and how the spatial dynamics of viral infection in the lung determine viral load, which ultimately influences the epidemic spread of COVID-19.
Bio:
I am a Professor of Computer Science with a secondary appointment in Biology at the University of New Mexico. I'm also an external faculty member of the Santa Fe Institute. I earned my undergraduate degree in Symbolic Systems at Stanford University and my PhD in Biology at UNM. I've recently run the NASA Swarmathon and NM CSforAll educational programs, and I am a Co-PI of the UNM Advance program to support women and underrepresented faculty in STEM. My current research projects include the VolCAN project to develop a swarm of autonomous adaptive robots to monitor volcanoes and predict eruptions, and SIMCov, a spatial model of COVID-19 lung infection and immune response. I am also a co-PI on two AI research institute planning grants, one on the foundations of intelligence at the Santa Fe Institute, and the Proteus Institute at the University of Vermont.
Hand and Machine
Leah Buechley, PhD, UNM CS Department
Wednesday, February 10, 2021, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
The aim of my research is to introduce the creative and intellectual potential of computers and electronics to new audiences. I believe that by making technology more accessible and building artifacts that look and feel different from anything that has been built in the past, I can change and broaden the culture of technology. I can get a diverse range of people excited by the ways that computers and electronics can be used to build beautiful, expressive, and useful objects. I can also illuminate the deep relationships between the tools that we use and the communities that we create. To achieve these goals, I integrate computation and electronics with materials from art and design, like paper, textiles, ceramics, and wood. I then use these integrations to develop new tools and approaches that others can employ. I research the adoption of the tools I develop to understand how different people and communities use and learn from different materials. This talk will present an overview of my work, focusing in particular on two recent projects, designing and building Interactive Murals and exploring computational design and ceramics.
Bio:
Leah Buechley is an associate professor in the computer science department at the University of New Mexico, where she directs the Hand and Machine research group. Her work explores integrations of electronics, computing, art, craft, and design. She is a pioneer in paper and fabric-based electronics and her inventions include the LilyPad Arduino, a construction kit for sew-able electronics. Previously, she was a professor at the MIT Media Lab, where she founded and directed the High-Low Tech group. Her work has been featured in publications including The New York Times, Boston Globe, and Wired and exhibited in venues including Ars Electronica, the Exploratorium, and the Victoria and Albert Museum. In 2017, her work was recognized with the Edith Ackerman award for Interaction Design and Children. Leah received a PhD in computer science from the University of Colorado at Boulder and a BA in physics from Skidmore College.
Locality-Aware Data Movement on Modern Supercomputers
Amanda Bienz, PhD, UNM CS Department
Wednesday, February 3, 2021, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
The performance of parallel sparse linear solvers, such as algebraic multigrid, is limited by inter-process communication constraints. The cost of communication is dominated by inter-node messages, while on-node messages are relatively inexpensive. Therefore, communication can be optimized by limiting inter-node communication, exchanging it for additional intra-node messages. During this talk, I will discuss performance expectations of irregular communication as well as locality-aware methods for optimizing the cost. Furthermore, I will present extensions of this work to modern heterogeneous architectures.
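The intuition can be captured with a postal (alpha-beta) cost model that charges different latencies and bandwidths for on-node and off-node messages. The constants below are illustrative orders of magnitude, not measurements from this work, and the trade-off shown (aggregating many small inter-node messages into one, at the price of extra intra-node traffic) is a simplified caricature of locality-aware communication.

```python
def comm_cost(num_msgs_intra, num_msgs_inter, bytes_intra, bytes_inter,
              alpha_intra=5e-7, alpha_inter=2e-6,
              beta_intra=1e-10, beta_inter=1e-9):
    """Postal model with separate on-node and off-node latencies
    (alpha, seconds per message) and inverse bandwidths (beta, seconds
    per byte). Illustrative constants only."""
    return (num_msgs_intra * alpha_intra + bytes_intra * beta_intra +
            num_msgs_inter * alpha_inter + bytes_inter * beta_inter)

# Naive: 100 small off-node messages of 1 KB each.
naive = comm_cost(0, 100, 0, 100 * 1000)
# Locality-aware: gather the data on-node first (100 intra-node
# messages), then send a single aggregated inter-node message.
aggregated = comm_cost(100, 1, 100 * 1000, 100 * 1000)
```

Under these constants the aggregated scheme is cheaper, because inter-node latency dominates the naive cost even though the aggregated scheme moves the same bytes plus extra on-node traffic.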
Bio:
Amanda Bienz is an assistant professor in the department of computer science at the University of New Mexico. She received her PhD from the University of Illinois at Urbana-Champaign in 2018. Her research is focused on improving the performance and scalability of numerical methods on modern parallel architectures, with a focus on sparse matrix operations, locality-awareness, data aggregation, and heterogeneous architectures.
Characterizing Biomolecular Binding with a Docking Game
Bruna Jacobson, PhD, UNM CS Department
Wednesday, December 2, 2020, 2:00-2:50 PM
Join via Zoom: https://unm.zoom.us/j/98715707842
Meeting ID: 987 1570 7842
Passcode: 9620277
Abstract:
Biomolecular interactions are vital to the large majority of processes that sustain life. When these interactions go astray, they can lead to the onset of diseases and allergies. Research on molecular binding and recognition is central to the development of new drugs and vaccines. We are witnessing this right now, as the research community is trying to identify potential molecular targets for vaccines and treatments against COVID-19. Simulations of biomolecular binding can greatly contribute by filling in knowledge gaps from wet-lab experiments. Computer simulations at the atomic scale can help to elucidate these interactions in detail. However, atomistic simulation is currently limited to short time and length scales and may not capture the phenomena we want to observe. In this colloquium, I will present the design of, and preliminary results from, DockAnywhere, a molecular puzzle game to characterize molecular binding. In this molecular docking game, players can help find bound positions of a small molecule and a protein by manipulating the molecule in the high-dimensional space of molecular interactions. The crowdsourced molecular motion data are used to identify potential binding sites and reconstruct possible pathways of binding with techniques from robotics and biophysics.
Bio:
Dr. Bruna Jacobson is an Assistant Professor at UNM Computer Science. She has a Ph.D. in Physics from the University of Southern California, 2012, and more recently held a Research Assistant Professor position at the Tapia Lab at UNM Computer Science. Her research interests are varied and include Computational Biology, Human-Automation Collaboration, Machine Learning for Biomaterials, Biophysics, Bioinformatics, and Computational Soft Matter Physics. She has often showcased her research in outreach activities around Albuquerque.
Realtime.Earth: Collective Intelligence from Distributed Imagery for Wildland Fire
Kasra “Kaz” Manavi, PhD, Director of Research and Communications, Simtable
Wednesday, November 11, 2020, 2:00-2:50 PM
Abstract:
We are currently fighting “blind” on wildland fire incidents. Fire location and behavior intelligence is crucial during the initial phase of an incident, but reports of wildfire can be delayed for hours. To make matters worse, changes in fuel loads and forest composition, along with increasing fire season lengths, are resulting in larger and more intense fires. With recent events like the Tubbs, Atlas, and Camp Fires, more and more catastrophic wildland fire events are causing significant structural damage and considerable loss of life. Real-time data streams relevant to wildland fire are diversifying, e.g., increased activity on social media and publicly accessible imagery. With the increase in these streams, more and more sources of relevant imagery are becoming available during an incident. We suggest the fusion of these data outlets, coupled with streaming camera feeds directly from mobile phone browsers, can provide real-time situation awareness during the critical first hours of an incident. In this talk we discuss observations obtained using Realtime.Earth, a web-based platform for real-time collective intelligence enabled by imagery capture and collection, data distribution, and model visualization, all in the browser. We discuss how imagery captured on mobile devices from citizens, crews, and social media can be fused together into live 3D models for real-time fire behavior monitoring.
Bio:
Kasra “Kaz” Manavi is the Director of Research and Communications at Simtable. He received an M.S. in Computer Science from Texas A&M University with an emphasis on robotic motion planning and a PhD in Computer Science from the University of New Mexico with a focus on computational structural biology. After graduation, he started working at Simtable LLC in Santa Fe, NM, where he has been developing a web-based platform that enables real-time collective intelligence by giving users the ability to seamlessly incorporate agent-based modeling, ambient computing, photogrammetry, geospatial information systems, and distributed computation into solutions that help users better understand complex environmental and social phenomena in their communities, primarily in the wildland fire space.
Augmenting Human-Infrastructure Interfaces
Fernando Moreu, The University of New Mexico
Wednesday, November 4, 2020, 2:00-2:50 PM
Abstract:
This seminar builds on existing human-computer and human-machine interfaces, and challenges existing human-infrastructure interfaces with new paradigms and decision-making scenarios. In this seminar, human-infrastructure interfaces are developed within the area of structural health monitoring, exploring Augmented Reality (AR) as the new interface between the deterioration of infrastructure and the humans making real-time decisions. The contents of this talk will emphasize advancing human decisions and cognition of the built environment, enabled by human access to databases, sensors, and assessment tools. To date, new technologies collecting data on the built environment are cheaper, more accurate, more diverse, and more accessible than ever before. However, the use of these new technologies by structural engineers to assess, inspect, or inform actions has been very limited. The examples will present efforts and results in empowering human-machine interfaces in the context of the built environment (human-infrastructure interfaces) and increasing human involvement and participation (human-in-the-loop) through AR. This seminar will present specific practical implementations of how the collection of data, their analysis, and their interpretation can inform human decisions in the areas of structural engineering, emergencies and rescue, smart cities and communities, and other overlapping research themes grounded in computer science and engineering.
Bio:
Dr. Fernando Moreu, PE is an assistant professor in structural engineering at the Department of Civil, Construction and Environmental Engineering (CCEE) at the University of New Mexico (UNM) at Albuquerque, NM. He holds courtesy appointments in the Departments of Electrical and Computer Engineering and Mechanical Engineering, both at UNM. He is the founder and director of the Smart Management of Infrastructure Laboratory (SMILab) at UNM (http://smilab.unm.edu/). SMILab is headquartered at the Center for Advanced Research and Computing (CARC) at UNM and aims to develop the use of next-generation smart sensing technologies and strategies towards safer, cost-effective, resilient, and sustainable structures. Dr. Moreu’s industry experience includes ESCA Consultants, Inc. for over ten years, with experience in the design, construction, and replacement of over thirty bridges in the US. Research interests include structural dynamics and vibrations, structural health monitoring, wireless smart sensor networks, field monitoring of critical infrastructure, augmented reality, unmanned aerial systems, human-machine interfaces, nonlinear dynamics, cyber-physical systems, and aerospace structures design, monitoring, and reusability. Dr. Moreu received his MS and PhD degrees in structural engineering from the University of Illinois at Urbana-Champaign (2005 and 2015, respectively).
Screening at Scale: Best Practices for Research and Development in Eye Disease Detection
Jeremy Benson, The University of New Mexico
Wednesday, October 14, 2020, 2:00-2:50 PM
Abstract:
In this talk, we will cover some common sight-threatening complications that occur alongside diabetes. Using tools from image processing and machine learning, we will highlight some successful approaches to making general solutions that work across datasets -- using both low-cost and high-end camera technologies -- as well as mitigating variations that appear throughout different demographics. Finally, we will discuss some tactics to deploy services on cloud infrastructure (AWS) so that they are widely available, cost-effective, and yield quick results.
Bio:
Jeremy Benson is a PhD candidate in Computer Science at the University of New Mexico. He is a member of Dr. Estrada's Data Science Group, where his research focuses on semi-supervised approaches to data discovery and labeling. For the past 5 years, he has been with VisionQuest Biomedical (ABQ, NM), where he works on software solutions for medical diagnostics. Outside of work and school, Benson enjoys playing board games like Chess and Go, or video games like Among Us, which is pretty suspicious.
How to Perform Binary Classification without a Binary Classifier?
Abhinav Aggarwal, PhD, Amazon
Wednesday, September 30, 2020, 2:00-2:50 PM
Abstract:
Binary classification is a fundamental task in machine learning. The success of any such classifier is often measured through real-valued statistical aggregates computed using the classifier's predictions on a set of test data points against their true labels. In this talk, I will present algorithms that use common performance metrics like AUC, Log-Loss, Precision, Recall, or F1-Scores to infer the true labels of arbitrarily many test data points without training any model and requiring only the knowledge of the size of the test dataset. This helps provide insight into the extent of information leakage from exposing these statistical aggregates and how it can be exploited.
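To make this kind of leakage concrete, here is an illustrative brute-force sketch, not the speaker's algorithms: accuracy stands in for the subtler metrics (AUC, log-loss, F1) the talk targets, and the oracle, helper names, and query strategy are all invented for illustration. If an evaluation service reports accuracy for any submitted prediction vector, then with no model training and only the test-set size known, n one-hot queries recover every true label:

```python
# Illustrative sketch of label leakage from an aggregate metric (accuracy).
def make_accuracy_oracle(true_labels):
    """Simulate an evaluation service that reports accuracy on hidden labels."""
    n = len(true_labels)
    return lambda preds: sum(p == y for p, y in zip(preds, true_labels)) / n

def recover_labels(oracle, n):
    """Recover all n hidden binary labels using n + 1 accuracy queries."""
    base = oracle([0] * n)          # accuracy of the all-zeros prediction
    labels = []
    for i in range(n):
        flipped = [0] * n
        flipped[i] = 1
        # flipping position i raises accuracy iff the true label there is 1
        labels.append(1 if oracle(flipped) > base else 0)
    return labels
```

A real-valued aggregate over n points simply carries enough information to pin down the labels; the talk's contribution is doing this for the standard metrics without any model at all.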
Bio:
Abhinav is an Applied Scientist at Amazon Alexa, working on differentially private mechanism design for data de-identification and making machine learning models robust to privacy leakage attacks. Prior to this, he held internships with VISA Research, Google, Microsoft, Cornell University, and the University of Saskatchewan. He obtained his Ph.D. in Computer Science in 2019 from the University of New Mexico (UNM), under the supervision of Prof. Jared Saia. During this time, he worked on fault-tolerant distributed computing and its applications to bio-inspired algorithms for swarm foraging, bagging the best paper award at SIROCCO 2020 and a similar nomination at ICDCN 2020.
Beating Sybil with Resource Burning
Diksha Gupta, The University of New Mexico
Wednesday, September 23, 2020, 2:00-2:50 PM
Abstract:
In a permissionless system, due to the absence of a certifying authority, participants can join and depart at will. Taking advantage of this, an attacker can inject a large number of adversarial pseudo-participants, thereby launching a Sybil attack on the system. Existing defense techniques use computational puzzles to limit the number of adversarial participants in proportion to the fraction of computational power held by the attacker. However, these puzzles impose a computational cost on the system even in the absence of an attack.
In this talk, we will first discuss an algorithm, ESTIMATE, that provides bounds on the number of new honest participants in a permissionless system. To this end, we present an empirical study of the performance of our algorithm on a number of real-world permissionless systems. Then, we will discuss the application of this algorithm to the design of an efficient Sybil defense algorithm, ERGO. This algorithm gives the following guarantees: 1) it always maintains a majority of honest participants in the system, and 2) the cost to the honest participants grows sub-linearly in the cost to the attacker.
Bio:
Diksha is a Ph.D. Candidate in the Department of Computer Science at The University of New Mexico (UNM), under the advisement of Prof. Jared Saia. Her current research is focused on designing provably secure, scalable, and efficient protocols for distributed systems. She obtained an M.S. in Computer Science at UNM. Prior to this, she completed an M.Tech in Computer Science and Engineering from the Indian Institute of Technology (IIT), Roorkee, India. In her free time, she likes solving puzzles, reading science fiction, and rock climbing. She has served on the boards of various student and departmental organizations – UNM CS Advisory Board (Graduate Representative), Computer Science Graduate Student Association (CSGSA) (President), and UNM Women in Computing (Vice President). She will be joining as a Research Fellow at the National University of Singapore (NUS) in December 2020.
Imputation and characterization of uncoded self-harm in major mental illness using machine learning
Praveen Kumar, The University of New Mexico
Wednesday, September 16, 2020, 2:00-2:50 PM
Abstract:
Suicide is one of the ten leading causes of death in the United States, with self-harming behavior being a major risk factor for suicide. Inadequate coding of suicidality and self-harm in medical records has been consistently reported. Underreporting of self-harm impedes the ability to estimate event prevalence and reduces the statistical power to perform time-to-event comparative effectiveness pharmacotherapy studies. The objective of this study was to apply machine learning algorithms at the visit level to impute self-harm events that were uncoded in claims data of individuals with major mental illness (MMI) (schizophrenia, schizoaffective disorder, major depressive disorder, and bipolar disorder), to identify factors associated with coding discrepancies, and to characterize coded vs. imputed self-harm incidence in various demographic groups.
Bio:
Praveen is a computer science Ph.D. student at The University of New Mexico, working with Prof. Christophe G. Lambert at the Department of Internal Medicine of UNM. Medical claims data are often not clean; they come with noisy labels and missing phenotypes/outcomes. Missing information poses several challenges when applying machine learning algorithms to such data. Working on claims data, Praveen primarily focuses on developing techniques to impute missing phenotypes/outcomes. His work on imputing missing phenotypes for self-harm won the best poster award at the OHDSI (Observational Health Data Sciences and Informatics) Symposium. He contributed to the OHDSI open-source software to convert CMS (Centers for Medicare & Medicaid Services) data to OMOP CDM (Observational Medical Outcomes Partnership Common Data Model) compatible files. He has a Computer Engineering Bachelor's degree from the National Institute of Technology, Surat, India, and a Computer Science Master's degree from the University of New Mexico. After finishing his Bachelor's degree, he worked for IT companies including DXC Technology, Fiserv, and Travelport, where he worked on the development and enhancement of software products in the banking, insurance, and travel domains.
A new interpolation algorithm for the theory of Equality with Uninterpreted Functions
Jose Abel Castellanos Joo, The University of New Mexico
Wednesday, September 9, 2020, 2:00-2:50 PM
Abstract:
An interpolant for a pair (A, B) of inconsistent formulas is a formula C such that: A implies C; B is inconsistent with C; and C only contains common symbols between A and B. Modern techniques for interpolant generation rely on specialized deductive calculi and unsatisfiability proofs. In this talk, we will discuss a new algorithm to compute the interpolation formula for the theory of Equality with Uninterpreted Functions (EUF) that does not require unsatisfiability proofs. We will discuss an observation made during the implementation of the algorithm, introducing a new Horn-unsatisfiability algorithm that uses a congruence closure with explanations as the mechanism for equality propagation.
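As a concrete anchor for the EUF ingredients mentioned above, here is a minimal ground congruence closure in Python. This is an illustrative sketch only: it propagates equalities to a fixpoint by brute force, and it omits the explanation mechanism and the interpolant construction that the talk's algorithm provides.

```python
# Minimal ground congruence closure for EUF (illustrative sketch).
# Terms are a string symbol, or a tuple ('f', arg1, ...) for f(arg1, ...).
class CongruenceClosure:
    def __init__(self):
        self.parent = {}      # union-find parent pointers over terms
        self.terms = set()    # all registered function-application terms

    def _register(self, t):
        if t not in self.parent:
            self.parent[t] = t
        if isinstance(t, tuple):
            self.terms.add(t)
            for a in t[1:]:
                self._register(a)

    def find(self, t):
        while self.parent[t] != t:
            t = self.parent[t]
        return t

    def _union(self, s, t):
        rs, rt = self.find(s), self.find(t)
        if rs != rt:
            self.parent[rs] = rt

    def _propagate(self):
        # congruence rule to a fixpoint: if a = b then f(a) = f(b)
        changed = True
        while changed:
            changed = False
            sig = {}
            for u in self.terms:
                key = (u[0],) + tuple(self.find(a) for a in u[1:])
                if key in sig and self.find(sig[key]) != self.find(u):
                    self._union(sig[key], u)
                    changed = True
                sig.setdefault(key, u)

    def merge(self, s, t):
        self._register(s); self._register(t)
        self._union(s, t)
        self._propagate()

    def equal(self, s, t):
        self._register(s); self._register(t)
        self._propagate()
        return self.find(s) == self.find(t)
```

For example, with A = {x = y, f(x) = z} and B = {f(y) ≠ z}, the closure of A derives f(y) = z, which is an interpolant for (A, B): it follows from A by congruence, contradicts B, and mentions only symbols common to both.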
Bio:
Jose is a Ph.D. student in the computer science department at the University of New Mexico working with Prof. Deepak Kapur. His research interests span from formal methods to computer algebra. His goal is to combine decision procedures from sums of squares algorithms to provide new answers to verification problems. Jose enjoys reading about self-reference, learning programming languages/tools, and playing the guitar whenever he finds free time.
A Multi-Robot Loss-Tolerant Algorithm for Surveying Volcanic Plumes
John Ericksen, The University of New Mexico
Wednesday, September 2, 2020, 2:00-2:50 PM
Abstract:
Measurement of volcanic CO2 flux by a drone swarm poses special challenges. Drones must be able to follow gas concentration gradients while tolerating frequent drone loss. We present the LoCUS algorithm as a solution to this problem and prove its robustness. LoCUS relies on swarm coordination and self-healing to solve the task. As a point of contrast, we also implement the MoBS algorithm, derived from previously published work, which allows drones to solve the task independently. We compare the effectiveness of these algorithms using drone simulations, and find that LoCUS provides a reliable and efficient solution to the volcano survey problem. Further, the novel data-structures and algorithms underpinning LoCUS have application in other areas of fault-tolerant algorithm research.
Bio:
John Ericksen is a software developer with Honeywell Federal Manufacturing and Technologies and a computer science Ph.D. student at the University of New Mexico with the Moses Biological Computation Lab. Working with the earth and planetary sciences department, John's research focus is on autonomous airborne robot swarms used to sample volcanic CO2 plumes. The goal of this is to link volcanic CO2 output with volcanic behavior to better understand the precursors to life-threatening eruptions. John has also published on a variety of other research topics including software architecture, evolutionary complex systems, and intelligent swarm robotics. John holds a computer science Bachelor's degree from Western Washington University and a computer science Master's degree from the University of New Mexico. At Honeywell, John works with a team of software developers that build software solutions for the Federal Government in a variety of contexts. In his free time, John likes to spend his time underwater. He enjoys scuba diving throughout the United States and the Caribbean, works as a scuba diving instructor at a shop in Albuquerque, and loves nature photography, especially the underwater variety.
Advancing Machine Learning and Machine Vision Using Topological Graph-Based Representations, Methods, and Algorithms
Liping Yang, UNM Department of Geography and Environmental Studies
Wednesday, March 4, 2020
Centennial Engineering Center 1041
2:00-3:00 PM
Abstract:
Google handles photo-based search well, but drawing-based search remains challenging because drawing images contain much less information than natural images: no color or texture, only shape and topology. In this talk, we will present open challenges in computer vision, and we will show how we overcome these obstacles by developing new image representations, methods, and algorithms based on topological graphs and computational geometry. Our image representations, methods, and algorithms help machine learning and machine vision learn and see better. We will show the effectiveness of our topological graph-based image representation and methods in two applications: image classification and image denoising.
Bio:
Dr. Liping Yang is an assistant professor of geographic information science (GIScience) and geospatial artificial intelligence (GeoAI) in the Department of Geography and Environmental Studies at The University of New Mexico (UNM). Dr. Yang received her Ph.D. in Spatial Information Science and Engineering from the University of Maine in 2015; after that she was a Postdoctoral Researcher at Penn State University, where she worked on machine learning and deep learning to analyze big geospatial data, including high-resolution aerial images. After Penn State and prior to UNM, Dr. Yang was a postdoctoral research associate in the Information Sciences group of the Computer, Computational, and Statistical Sciences Division at Los Alamos National Laboratory (LANL), focusing on computer vision and machine learning algorithm development for technical diagram image analysis. Dr. Yang has worked many years at the intersection of Computer Science, Mathematics, and GIScience. Her multidisciplinary background in GIScience, graph theory, computational geometry, and machine learning provides her with a solid foundation to develop creative and novel solutions for computer vision tasks such as image representation, retrieval, and analysis to advance machine vision. Dr. Yang has multiple top-tier journal papers (e.g., IJGIS, Soft Computing) and conference papers (e.g., ACM SIGSPATIAL GIS, CVPR, ICCV, KDD) in the GIScience, GeoAI, and computer vision areas.
Compiler Directed Lightweight Resilience Mechanisms for HPC Applications
Chao Chen, CS Assistant Faculty Candidate
Wednesday, February 26, 2020
Centennial Engineering Center 1041
2:00-3:00 PM
Abstract:
Transient faults are becoming a significant concern for emerging extreme-scale high performance computing (HPC) systems. This nascent problem is exacerbated by technology trends toward smaller transistor size, higher circuit density, and the use of near-threshold voltage techniques to save power. Transient faults can corrupt the execution of long-running scientific applications by leading to either silent data corruptions (SDCs, i.e., incorrect values in outputs) or soft failures (abnormal termination, e.g., process crashes). While SDCs harm the confidence in computations and could lead to inaccurate and untrustworthy scientific insights, soft failures degrade system efficiency and performance, since they require the impacted jobs to be restarted from their checkpoints and the lost computations to be re-executed before normal operation continues. As a consequence, transient fault detection and recovery must be addressed in HPC system design for both usability (trust in the output results) and efficiency (speedup and energy efficiency). In particular, solutions must be designed that have very low regular execution overheads, as well as an ability to detect (and potentially recover from) a large set of faults with negligible downtime.
In this talk, I will present two compiler-driven resilience techniques, called LADR and CARE, which are designed respectively for SDC detection and soft failure (SF) recovery. By exploiting applications’ knowledge via compiler techniques, they both achieve high fault coverage (~80%), but incur negligible or even zero runtime overheads. I will first describe LADR, which detects SDCs in scientific applications by watching for data anomalies in their state variables (those of scientific interest), and employs compile-time data-flow analysis to minimize the number of monitored variables, thereby reducing runtime and memory overheads. The compiler analysis uses the algebraic properties of the underlying data-flow to select the variables where the fault appears in a magnified manner. The technique is able to maintain a high level of fault coverage with low false positive rates. I will then introduce CARE, a compiler-assisted online recovery technique against soft failures. The advantages of CARE are that it can quickly (within milliseconds) repair a crashed process on the fly, allowing applications to continue their executions instead of being terminated and restarted, and that it incurs zero runtime overhead during the normal execution of applications. For recovery, it utilizes the live variables of the program resident in registers and reconstructs the failed computation. Finally, I will conclude my talk by describing future directions towards applying compiler technologies for efficient implementation of the desired system properties.
Bio:
Chao Chen is a Ph.D. candidate in the School of Computer Science at Georgia Tech, advised by Santosh Pande and Greg Eisenhauer. His research interests are broadly in the areas of compilers and systems, with a thesis research on lightweight resilience techniques for HPC applications by exploring applications’ properties. His work appears in top-tier HPC venues, and was nominated for Best Student Paper at SC '19.
High-Performance Graph Analytics with GraphBLAS and LAGraph
Scott Kolodziej, Texas A&M University
Wednesday, February 19, 2020
Centennial Engineering Center 1041
2:00-3:00 PM
Abstract:
Graph-structured data continues to grow in size and complexity, from social networks to graph databases to genome graphs. Even deep neural networks, which have traditionally been treated as dense networks, are now being formulated with sparse connectivity layers to resemble graphs. At the core of these challenges is the field of graph analytics: the application of novel graph algorithms to answer questions about the relationships between data. Efficient solutions to these problems often require a varied approach that utilizes state-of-the-art algorithm design, mathematics, high-performance computing, and software engineering. In this talk, we will step through some recent advances in graph partitioning, as well as how the graph analytics landscape is being transformed by the GraphBLAS standard and associated algorithmic developments. We will also explore a variety of domains and applications where graph analytics is being used and demonstrate how new high-performance graph algorithms and libraries are being built to enable novel research in these areas.
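To ground the GraphBLAS idea, here is a dependency-free Python sketch of one of its canonical examples: breadth-first search expressed as repeated vector-matrix products over the boolean semiring, with a complement mask to skip visited vertices. This is illustrative only; real GraphBLAS code would call a library such as SuiteSparse:GraphBLAS or LAGraph rather than use Python sets.

```python
# BFS in linear-algebra style: the frontier is a sparse boolean vector; one
# step is "v x A" over the (OR, AND) semiring, masked by unvisited vertices.
def bfs_levels(adj, source):
    """adj: {vertex: set of out-neighbors} (sparse adjacency rows).
    Returns {vertex: BFS level} for every vertex reachable from source."""
    level = {source: 0}
    frontier = {source}
    depth = 0
    while frontier:
        depth += 1
        # union of adjacency rows selected by the frontier (= v x A),
        # then mask out vertices that already have a level
        frontier = {j for i in frontier for j in adj.get(i, ())} - level.keys()
        for j in frontier:
            level[j] = depth
    return level
```

The payoff of the linear-algebra formulation is that the same algorithm, written against the GraphBLAS API, inherits the library's optimized sparse kernels and parallelism for free.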
Bio:
Scott Kolodziej is an Assistant Research Scientist at Texas A&M University in the Department of Computer Science & Engineering. He received his Ph.D. in Computer Science from Texas A&M in 2019 for work on hybrid combinatoric and optimization-based graph partitioning methods while working with Dr. Tim Davis. His research interests include high-performance graph analytics, computational optimization, and software engineering of scientific software. In addition to being named an HPEC 2019 Graph Challenge Champion and Affiliate of the Texas A&M Institute of Data Science for his work in graph algorithms and analytics, Scott was also an ACM Student Research Competition Grand Finalist for his work in software engineering and documentation. His background additionally includes degrees in chemical engineering and industrial experience as an Optimization Engineer at Shell.
Graph-Based Exploration of Energy Landscapes in Biomolecular Interactions
Bruna Jacobson, CS Assistant Faculty Candidate
Wednesday, February 12, 2020
Centennial Engineering Center 1041
2:00-3:00 PM
Abstract:
To understand the biological processes that sustain life requires elucidating how biomolecules interact with each other and their environment. However, these interactions are highly complex and involve molecular motion in a high-dimensional space. Biomolecules such as proteins, nucleic acids, and lipids consist of a large number of atoms and hence exhibit many degrees of freedom. Moreover, their behavior is dependent on external factors such as the solvent, temperature, and pH. This complexity explains the immense effort over the last forty years to create computational models that accurately represent molecular interactions. While there has been significant progress in computational methods and the development of high-performance computing systems to run them, enabling atomistic simulations of relatively large systems, there is a significant limit on the ability to simulate processes that occur over timescales longer than about one microsecond. In this talk, I will present how graph-based models of molecular interactions can reduce the dimensionality of motion of a large protein complex by mapping a weighted directed graph onto a rugged and dynamic high-dimensional energy landscape. Protein motion is then derived from pathway determination on the graph. Chemical and conformational changes can be simulated for biomolecular interactions via chemical reaction networks. I will show how transition rate parameters in these networks can be optimized via supervised learning. I will highlight the potential of such networks to be combined with the graph-based model of the energy landscape to simulate biochemical pathway determination. Lastly, I discuss how we are using this modeling approach as a game, DockAnywhere, that will allow users to experience biomolecular interactions on their mobile devices.
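The "pathway determination on the graph" step above can be illustrated with a standard shortest-path search. In this hypothetical sketch, nodes stand for conformational states, edge weights for the energy cost of a transition, and a minimum-total-cost pathway from an unbound to a bound state is found with Dijkstra's algorithm; the state names and weights are invented for illustration, and the actual model maps a weighted directed graph onto the energy landscape in a more elaborate way.

```python
import heapq

def min_energy_path(graph, start, goal):
    """graph: {node: [(neighbor, energy_cost), ...]}; returns (cost, path),
    the minimum-total-cost route from start to goal via Dijkstra's algorithm."""
    pq = [(0.0, start, [start])]    # priority queue of (cost so far, node, path)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float('inf'), []         # goal unreachable
```

On a toy landscape with two routes from 'unbound' to 'bound', the search picks the route whose summed barriers are lowest, which is the discrete analogue of a preferred binding pathway.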
Bio:
Dr. Bruna Jacobson is currently a Research Assistant Professor at the Computer Science Department at UNM. She has a Ph.D. in Physics from the University of Southern California, 2012, and more recently held a postdoctoral position at the Tapia Lab at UNM Computer Science. Her research interests are varied and include Computational Biology, Human-Automation Collaboration, Machine Learning for Biomaterials, Biophysics, Bioinformatics, and Computational Soft Matter Physics. She is the Principal Investigator of an NSF Core Program Award. She has often showcased her research in outreach activities around Albuquerque. Dr. Jacobson was the recipient of a Chateaubriand Fellowship from the French Embassy in Washington D.C. as a graduate student in 2010.
Programming energy landscapes for absolute molecular positioning & robust molecular computation
Chris Thachuk, Senior Postdoctoral Researcher at Caltech
Wednesday, February 5, 2020
Centennial Engineering Center 1041
2:00-3:00 PM
Abstract:
The promise of molecular programming lies in its ability to (i) self-assemble structures with nanometer precision and (ii) process information autonomously in a biochemical context in order to sense and actuate matter. How do you 'program' an energy landscape so that DNA-based devices follow designed reaction pathways, yet incur significant kinetic and thermodynamic energy penalties for spurious pathways? I'll focus on two different projects butting up against this same theme in different ways. (Part I) A very successful example of self-assembly driven by molecular forces is DNA origami. This process can result in the assembly of ~10^10 copies of a designed 2D or 3D shape, with feature resolution of 6 nanometers. By designing the energy landscape of the interaction between a DNA origami shape and a flat surface, we demonstrate that single molecules can be placed with orientation that is absolute (all degrees of freedom are specified) and arbitrary (every molecule's orientation is independently specified). (Part II) The most sophisticated molecular computing systems have been built upon the DNA strand displacement primitive, where a soup of rationally designed nucleotide sequences interact, react, and recombine over time in order to carry out sophisticated computation. Existing systems are often slow, error-prone, require bespoke design and weeks of labor to realize experimentally. I will detail our efforts to fix these issues by introducing a molecular breadboard, capable of computing billions of functions including all 2^32 Boolean predicates with 5 distinct inputs. Its purpose is to "scale-up" what is possible with this technology and to "scale-out" its adoption to new contexts. In order to facilitate the rapid design of new circuits from a common molecular broth, we have developed a compiler that takes as input a logic description and provides as output the optimized set of breadboard components necessary to activate the desired logic behavior. 
By mixing these preexisting components as prescribed, it is possible to achieve fast, autonomous and robust molecular circuits, from conception to implementation, within a single afternoon. Due to the large separation of time scales between designed and spurious computation, we expect the breadboard architecture will open new research directions in molecular sensing, actuation and interfacing with self-assembly systems.
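The "all 2^32 Boolean predicates with 5 distinct inputs" figure above follows from a simple count: a 5-input predicate is determined by its truth table of 2^5 = 32 rows, each row 0 or 1, giving 2^32 distinct functions. A short sketch (the packing scheme here is an arbitrary illustration, not the breadboard's encoding) makes the bookkeeping concrete:

```python
# A 5-input Boolean predicate is equivalent to a 32-row truth table, which
# packs into a single 32-bit integer; there are 2**32 such integers.
N_INPUTS = 5
N_ROWS = 2 ** N_INPUTS            # 32 input combinations
N_PREDICATES = 2 ** N_ROWS        # 2**32 distinct predicates

def truth_table_id(predicate, n=N_INPUTS):
    """Pack a predicate's truth table into an integer in [0, 2**32)."""
    tid = 0
    for row in range(2 ** n):
        bits = [(row >> i) & 1 for i in range(n)]   # row number -> input bits
        tid |= int(bool(predicate(*bits))) << row
    return tid
```

Every predicate maps to a distinct identifier, so enumerating the identifiers enumerates the predicates, which is why a fixed breadboard capable of realizing any 32-bit truth table covers all of them.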
Bio:
Chris Thachuk is a Banting Fellow and Senior Postdoctoral Researcher at Caltech, working with Erik Winfree. Chris works in the areas of DNA computing and molecular programming: how one might compute or build new structures at the nanoscale with biomolecules such as DNA. Prior to Caltech, Chris was a postdoc at Oxford Computer Science and a James Martin Fellow at the Institute for the Future of Computing, Oxford. He received his PhD in Computer Science from UBC in 2013, advised by Anne Condon. Much of Chris's computing now happens in a test tube.
Reducing Parallel Communication Costs on Emerging Architectures
Amanda Bienz, Postdoctoral Researcher at the University of Illinois at Urbana-Champaign
Wednesday, January 29, 2020
Centennial Engineering Center 1041
2:00-3:00 PM
Abstract:
Advances in parallel architectures yield the potential to solve increasingly large and difficult problems efficiently. However, parallel communication demands create performance bottlenecks across application domains. Communication overhead remains a challenge because communication demands are hard to predict, particularly when moving across different machines. As a result, it is difficult to design algorithms that are generally efficient across a variety of architectures. In this talk, I will present an analysis of parallel communication, displaying a strong correlation between cost and the relative location of the sending and receiving processes, with inter-node communication costing significantly more than intra-node. Furthermore, I will present methods for reducing communication costs and improving the performance of parallel applications on emerging architectures, with a focus on re-routing MPI messages on each node to reduce the amount of costly inter-node communication.
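The core idea of node-aware re-routing can be illustrated with a toy cost model (a sketch only, not the speaker's actual algorithm): if messages from all ranks on a node that are bound for the same remote node are first combined locally, only one inter-node message per pair of nodes crosses the network.

```python
# Hypothetical cost model for node-aware MPI message re-routing.
# Ranks are grouped onto nodes (`ppn` ranks per node); inter-node messages are
# assumed far costlier than intra-node ones. Naive sends: every (sender,
# remote receiver) pair crosses the network. Node-aware: messages to the same
# remote node are aggregated, giving one inter-node message per node pair.

def node_of(rank: int, ppn: int) -> int:
    return rank // ppn

def inter_node_msgs_naive(sends, ppn):
    # sends: iterable of (src_rank, dst_rank) pairs
    return sum(1 for s, d in sends if node_of(s, ppn) != node_of(d, ppn))

def inter_node_msgs_node_aware(sends, ppn):
    # One aggregated message per distinct (source node, destination node) pair.
    pairs = {(node_of(s, ppn), node_of(d, ppn))
             for s, d in sends if node_of(s, ppn) != node_of(d, ppn)}
    return len(pairs)

# 4 ranks per node; every rank on node 0 sends to every rank on node 1.
sends = [(s, d) for s in range(4) for d in range(4, 8)]
print(inter_node_msgs_naive(sends, 4))       # 16
print(inter_node_msgs_node_aware(sends, 4))  # 1
```

In practice the aggregated payload still has to be redistributed intra-node, but intra-node transfers are assumed cheap in this model.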
Bio:
Amanda Bienz is a Postdoctoral Researcher at the University of Illinois at Urbana-Champaign.
Exploring Supercomputer I/O Systems: Performance Learning, Optimization and Beyond
Bing Xie, HPC System Engineer at Oak Ridge National Laboratory
Monday, January 27, 2020
Farris Engineering Center 3100
2:00-3:00 PM
Abstract:
Supercomputer I/O systems are built around scientific codes. These codes issue periodic write bursts to the file systems for various purposes and with various I/O patterns. From the application’s viewpoint, if the I/O system does not absorb data fast enough, then the memory that buffers the output is exhausted, forcing the computation to stall before it can output more data. Output stalls leave precious CPU resources underutilized, extending application runtimes and compromising system throughput. In this talk, I will discuss my study of the write performance of production supercomputers, ranging from quantitative I/O behavior analysis to predictive performance modeling with machine learning techniques. In particular, I will talk about the challenges of benchmarking, profiling, and modeling the write performance of supercomputer I/O systems under production load, and discuss the techniques and methods I proposed to analyze the target systems based on the system design, deployment, and configuration. Moreover, I will also present my work on data management among heterogeneous file systems and resource management for workflows on elastic virtual infrastructure, emphasizing the challenges, the opportunities, and my approach.
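The output-stall mechanism described above can be captured in a toy back-of-the-envelope model (an illustration only, with made-up parameters, not the speaker's model): if a write burst takes longer to drain than the compute phase between bursts, the application stalls for the difference.

```python
# Toy model of an output stall: an application emits a write burst of size
# `burst_gb` every `period_s` seconds of compute; the I/O system drains the
# buffer at `bw_gbs` GB/s. Assuming the memory buffer holds at most one burst,
# the application stalls whenever draining outlasts the compute period.

def stall_per_burst(burst_gb: float, period_s: float, bw_gbs: float) -> float:
    drain_s = burst_gb / bw_gbs
    return max(0.0, drain_s - period_s)

print(stall_per_burst(200, 60, 5))  # 40 s drain < 60 s period -> 0.0 (no stall)
print(stall_per_burst(500, 60, 5))  # 100 s drain -> 40.0 s stall per burst
```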
Bio:
Bing Xie is an HPC System Engineer at Oak Ridge National Laboratory. Bing received her Ph.D. in 2017 from the Computer Science Department at Duke University, where she was advised by Jeff Chase. Her research develops performance analysis and prediction methods for supercomputer I/O systems. More broadly, her research interests span distributed systems, storage systems, high-performance computing, and cloud computing. Her papers have appeared at venues including HPDC and SC and in ACM TOS. Among her works, the petascale file system study was nominated for both Best Paper and Best Student Paper at SC’12.
Accelerating the Analysis of Massive-scale Graph-structured Data
George Slota, Assistant Professor, Computer Science Department, Rensselaer Polytechnic Institute
Wednesday, January 22, 2020
Centennial Engineering Center 1041
2:00-3:00 PM
Abstract:
This talk considers the study of large-scale social, informational, and biological network data, topologically represented as graphs. Such graphs are common, complex, and can be very large, which makes them important to study yet computationally difficult to work with. Developing scalable parallelization methods for graph analytical algorithms is an interesting research area with many significant challenges. I will present results from my ongoing collaborations with Sandia National Labs that involve the development of such techniques for effective multicore, manycore, and distributed parallelization on modern high performance computing architectures. This work includes the tera-scale graph partitioning software PuLP/XtraPuLP; low-overhead layout and distribution methods that, through my HPCGraph framework, have enabled complex study of the largest publicly available web crawl to date; and A-BTER, a new graph generator designed to facilitate benchmarking studies for the community detection problem at a new massive scale.
Bio:
Dr. George Slota is an Assistant Professor in the Computer Science Department at Rensselaer Polytechnic Institute. He previously worked at Sandia National Labs in the Scalable Algorithms Department from 2013 to 2016. He graduated with his Ph.D. in Computer Science and Engineering from Penn State in 2016 after working in the Scalable Computing Lab with his advisor, Kamesh Madduri. He was partially supported by a Blue Waters Fellowship during his graduate studies. His research interests are in the areas of graph and network mining, big data analytics, combinatorial algorithms, and their relation to parallel, scientific, and high performance computing.