Tag: 2024
All the talks with the tag "2024".
Characterizing Graph Datasets for Node Classification - Homophily-Heterophily Dichotomy and Beyond
Sikta Mohanty · Published at 02:00 PM
This work explores the concept of homophily in graph datasets and proposes a measure called adjusted homophily. It also introduces a new characteristic, label informativeness (LI), to distinguish different types of heterophily. The study shows that LI correlates better with graph neural network performance than traditional homophily measures.
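For concreteness, here is a minimal Python sketch of edge homophily and the degree-corrected adjusted homophily the talk discusses; the toy graph and labels are made up for illustration and this is not the talk's code.

```python
# Toy sketch (not the talk's code): edge homophily and adjusted homophily
# for a labelled undirected graph.
from collections import defaultdict

def adjusted_homophily(edges, labels):
    """edges: list of undirected (u, v) pairs; labels: dict node -> class."""
    num_edges = len(edges)
    same = sum(labels[u] == labels[v] for u, v in edges)
    h_edge = same / num_edges  # fraction of intra-class edges

    # D_k: sum of degrees of the nodes belonging to class k
    degree_sum = defaultdict(int)
    for u, v in edges:
        degree_sum[labels[u]] += 1
        degree_sum[labels[v]] += 1

    # Expected intra-class fraction under a degree-preserving null model
    expected = sum(d * d for d in degree_sum.values()) / (2 * num_edges) ** 2
    return (h_edge - expected) / (1 - expected)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
labels = {0: "A", 1: "A", 2: "B", 3: "B"}
print(adjusted_homophily(edges, labels))  # negative value: heterophilous toy graph
```

Unlike plain edge homophily, the adjusted version subtracts what a degree-preserving random wiring would achieve, so it can go negative for genuinely heterophilous graphs.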
Class-Imbalanced Learning on Graphs - A Survey
Rishav Das · Published at 02:00 PM
This survey provides a comprehensive understanding of class-imbalanced learning on graphs (CILG), a promising solution that combines graph representation learning and class-imbalanced learning. It presents a taxonomy of existing work, analyzes recent advancements, and discusses future research directions in CILG.
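As a flavour of the simplest baselines such a survey covers, the sketch below applies inverse-frequency class weights to a cross-entropy loss over node logits. It is a generic illustration, not taken from the survey; `logits` stands in for the output of any GNN node classifier.

```python
# Generic illustration: class-weighted loss, a basic remedy for class imbalance.
import torch
import torch.nn.functional as F

labels = torch.tensor([0, 0, 0, 0, 0, 0, 0, 1, 1, 2])  # heavily imbalanced node labels
logits = torch.randn(labels.size(0), 3)                 # stand-in for GNN outputs

# Inverse-frequency weights: rare classes contribute more to the loss.
counts = torch.bincount(labels, minlength=3).float()
weights = counts.sum() / (counts.numel() * counts)

loss = F.cross_entropy(logits, labels, weight=weights)
print(weights, loss.item())
```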
RISC-V (Reduced Instruction Set Computing V)
Shreya Adya · Published at 02:00 PM
This presentation explores RISC-V, a revolutionary open-source instruction set architecture (ISA) that is poised to disrupt the computing landscape. It highlights the core principles, advantages, challenges, and applications of RISC-V, including efficient embedded systems, the Internet of Things (IoT), and self-driving cars.
ORB-SLAM3 - An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
Harshit Agarwal · Published at 02:00 PM
This paper presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multi-map SLAM with monocular, stereo and RGB-D cameras. The system operates robustly in real time, in small and large, indoor and outdoor environments, and is two to ten times more accurate than previous approaches.
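ORB-SLAM3 builds its tracking and mapping on ORB features. The sketch below is background illustration only, not ORB-SLAM3 code: it extracts and matches ORB descriptors between two synthetic frames with OpenCV, the kind of front-end correspondence the system relies on.

```python
# Background sketch (OpenCV ORB matching, not ORB-SLAM3 itself).
import cv2
import numpy as np

# Two synthetic frames stand in for consecutive camera images.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, (480, 640), dtype=np.uint8)
frame2 = np.roll(frame1, 5, axis=1)  # small horizontal shift between frames

orb = cv2.ORB_create(nfeatures=1000)           # FAST keypoints + binary descriptors
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

# Binary descriptors are compared with Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(kp1)} keypoints, {len(matches)} tentative matches for tracking")
```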
Neural Theorem Proving in Lean
Rahul Vishwakarma · Published at 02:00 PM
This talk explores integrating large language models (LLMs) and interactive theorem provers (ITPs) like Lean to automate theorem proving. It covers recent advancements, data augmentation, dynamic sampling methods, and the development of tools to facilitate experiments in Lean 4.
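For readers new to Lean, here is a toy Lean 4 goal of the kind a neural prover is asked to close; the example is ours, not from the talk, and a core library lemma finishes it.

```lean
-- Toy example: a goal of the kind an LLM-guided prover would be asked to close.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```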
KS-Lottery - Finding Certified Lottery Tickets for Multilingual Language Models
Aritra Mukhopadhaya · Published at 02:00 PM
The KS-Lottery method identifies a small subset of LLM parameters that is highly effective for multilingual fine-tuning. This talk covers the theoretical foundation, experimental results, and surprising findings, such as that fine-tuning the embeddings of just 18 LLaMA tokens suffices to match full fine-tuning translation performance.
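A simplified sketch of the underlying idea, using synthetic data rather than the paper's implementation: rank embedding rows by the Kolmogorov-Smirnov statistic between their pre- and post-fine-tuning values and keep the rows that moved most.

```python
# Simplified illustration of the KS-test intuition behind KS-Lottery
# (synthetic data, not the paper's code).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
vocab, dim = 100, 64
pretrained = rng.normal(0.0, 0.02, (vocab, dim))
finetuned = pretrained.copy()
finetuned[:5] += rng.normal(0.0, 0.05, (5, dim))  # only a few tokens really move

# KS statistic per token embedding: higher means a larger distribution shift.
ks_stats = np.array([ks_2samp(pretrained[i], finetuned[i]).statistic
                     for i in range(vocab)])
winning_tokens = np.argsort(ks_stats)[-5:][::-1]
print("candidate 'lottery' token ids:", winning_tokens)
```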