Beyond the Token Wall: How REFORM Redefines Long-Context AI Inference
Explore REFORM, a groundbreaking method by Woomin Song and colleagues at KAIST that combines recurrent compression with random access for efficient long-context AI inference.
In-depth technical discussions, system architecture breakdowns, and insights from our research community.
William Arnold from NVIDIA explains Dynamo, an open-source platform for high-performance inference that uses disaggregated serving to scale LLM workloads.
Andrey Cheptsov details dstack, an open-source GPU-centric orchestrator designed to simplify GPU provisioning, development, training, and inference.
Insights from Philip Kiely of Baseten on optimizing embedding model inference, balancing throughput and latency, and using TensorRT-LLM and quantization.
Practical strategies for system-level optimization in large-scale RL environments. Learn about the long-tail effect, partial rollout in SGLang, CUDA Graph Aware Refit, and solutions for the Training-Inference Mismatch problem.
Exploring how floating-point non-associativity affects determinism and reproducibility in deep learning. Learn why LLMs produce non-deterministic outputs even at temperature 0, and how GPU hardware design trades off accuracy, speed, and reproducibility.
A comprehensive exploration of Edge AI deployment strategies, covering immutable operating systems, GPU integration with Kubernetes, hardware co-design, and the challenges of deploying AI at the edge.
An architectural deep dive into vLLM, exploring PagedAttention, optimized KV caching, chunked prefill, and advanced features that enable efficient LLM serving at scale.
Want to contribute to our blog? Reach out at daniel@aerlabs.tech or shubham@aerlabs.tech.