DeepSeek has released a new paper, with co-founder Liang Wenfeng credited as a contributor, detailing how its latest large language model DeepSeek-V3 achieves efficient training and inference using only 2,048 H800 GPUs – significantly fewer than the tens of thousands typically required. The team attributes this efficiency to four key innovations: memory optimization through multi-head latent attention (MLA), computational savings via a Mixture-of-Experts (MoE) design with FP8 precision, communication improvements using a multi-plane network topology, and faster inference through multi-token prediction (MTP). With MLA, KV cache memory usage is cut to just 70KB per token, as little as 1/7 that of competing models. The MoE architecture activates only 37 billion of the model’s 671 billion parameters per forward pass, reducing training costs by 90% compared to dense models. FP8 training further halves compute and memory usage, with minimal accuracy tradeoff. Beyond the model, the paper also outlines five future directions for AI hardware design, advocating for tighter integration between software and hardware to address memory, compute, and networking bottlenecks. [36Kr, in Chinese]
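The item above attributes much of the compute savings to sparse Mixture-of-Experts routing, in which each token activates only a small subset of the model's experts. As a rough illustration of that general idea (not DeepSeek-V3's actual architecture or configuration), here is a minimal top-k MoE routing sketch in PyTorch; the class name TinyMoELayer, the expert count, and all dimensions are made-up placeholders.

```python
# Minimal sketch of top-k Mixture-of-Experts routing, illustrating why only a
# fraction of parameters is active per token. Expert count, hidden sizes, and
# top_k are illustrative placeholders, not DeepSeek-V3's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)           # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                                     # x: (tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)              # routing probabilities
        weights, idx = scores.topk(self.top_k, dim=-1)        # keep only top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                           # each token visits only
            for e in range(len(self.experts)):                # its selected experts
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * self.experts[e](x[mask])
        return out

# Only top_k of num_experts expert MLPs touch each token, so the active
# parameter count per forward pass is roughly top_k/num_experts of the total
# expert parameters -- the same principle behind 37B active out of 671B.
layer = TinyMoELayer()
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```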