Source: "4x faster LLM inference (Flash Attention guy's company)", via the Hacker News RSS feed.