What if you could train faster, use less compute, and not lose accuracy? Live from GTC 2026, Logan sits down with NVIDIA Solution Architect Hirofumi Kobayashi and Senior Software Engineer Max Xu to break down NVIDIA’s latest leap in LLM training. The duo explains how FP4 precision on Blackwell GPUs unlocks major gains in performance, memory efficiency, and energy savings without sacrificing model quality. They also walk through how developers can tap into these optimizations using frameworks like JAX. It’s a fast-paced look at how the next generation of AI models will be built smarter, leaner, and faster.
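To make the FP4 idea concrete: FP4 (E2M1) can represent only a handful of magnitudes (0, 0.5, 1, 1.5, 2, 3, 4, 6), so values are scaled into that range and snapped to the nearest representable number. The episode doesn't show code, so this is just an illustrative sketch that simulates per-tensor FP4 quantization in JAX; the function name `quantize_fp4` and the per-tensor max-abs scaling scheme are assumptions for illustration, not NVIDIA's actual implementation (which runs natively on Blackwell Tensor Cores).

```python
import jax.numpy as jnp

# Magnitudes representable in FP4 E2M1 (sign handled separately).
FP4_VALUES = jnp.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4(x):
    """Simulate FP4 quantization: scale into range, snap to nearest value."""
    # Per-tensor scale so the largest magnitude maps to 6.0 (FP4's max).
    scale = jnp.max(jnp.abs(x)) / 6.0
    scaled = x / scale
    # Snap each magnitude to the closest representable FP4 value.
    idx = jnp.argmin(jnp.abs(jnp.abs(scaled)[..., None] - FP4_VALUES), axis=-1)
    return jnp.sign(scaled) * FP4_VALUES[idx] * scale

x = jnp.array([0.1, -0.5, 6.0, 3.3])
print(quantize_fp4(x))  # values snapped to the FP4 grid: [0.0, -0.5, 6.0, 3.0]
```

Each tensor ends up needing only 4 bits per element plus one shared scale, which is where the memory and bandwidth savings come from; the training recipe's job is keeping model quality intact despite that coarse grid.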
You can also watch previous episodes here.
Presented by Dell and NVIDIA
https://www.dell.com/precisionai