DeepSeek V4: Open-Weight Frontier at 1/7 the Cost
DeepSeek V4 ships 1.6T MoE open weights with 1M-token context: 80.6% on SWE-bench Verified at $1.74 in / $3.48 out per million tokens — roughly 1/7 the output cost of Claude Opus 4.7.
OpenAI released GPT-5.5 on April 23, 2026 — the first fully retrained base since GPT-4.5. Benchmarks, $5/$30 API pricing, 1M context, and Opus 4.7 compared.
Meta Muse Spark is MSL's first proprietary model, with top health and science benchmarks but coding gaps. Modes, scores, and what developers should know.
GPT-5.4 scores 75% on OSWorld, surpassing human experts at desktop tasks. What this means for AI agents, enterprise workflows, and the competition in 2026.
Zhipu AI's GLM-4.7 explained: 355B MoE architecture, 200K-token context, multimodal inputs, and $0.60 in / $2.20 out per million tokens on Z.ai.