This post is auto-generated from the Hacker News RSS feed. Source: 4x faster LLM inference (Flash Attention guy's company)