Hardware-aware Algorithms for Sequence Modeling - Tri Dao | Stanford MLSys #87
Published 2024-01-18
Recommendations
- 1:16:48 Notes on AI Hardware - Benjamin Spector | Stanford MLSys #88
- 56:32 Monarch Mixer: Making Foundation Models More Efficient - Dan Fu | Stanford MLSys #86
- 58:58 FlashAttention - Tri Dao | Stanford MLSys #67
- 24:07 AI can't cross this line and we don't know why.
- 1:16:18 Stanford ECON295/CS323 I 2024 I AI and Creativity, Anima Anandkumar
- 57:19 Efficiently Modeling Long Sequences with Structured State Spaces - Albert Gu | Stanford MLSys #46
- 1:10:58 Trends in Deep Learning Hardware: Bill Dally (NVIDIA)
- 57:05 Text2SQL: The Dream versus Reality - Laurel Orr | Stanford MLSys #89
- 17:38 The moment we stopped understanding AI [AlexNet]
- 55:59 Training LLMs at Scale - Deepak Narayanan | Stanford MLSys #83
- 58:12 MIT Introduction to Deep Learning | 6.S191
- 59:17 Serving 100s of LLMs on 1 GPU with LoRAX - Travis Addair | Stanford MLSys #84
- 31:53 Scaling Computing Performance Beyond the End of Moore’s Law: Song Han
- 1:06:35 MedAI #41: Efficiently Modeling Long Sequences with Structured State Spaces | Albert Gu
- 1:17:29 Stanford CS25: V4 I Overview of Transformers