Info: This post is auto-generated from the Hacker News RSS feed. Source: Compiling LLMs into a MegaKernel: A path to low-latency inference