Alibaba’s New AI Video-Generation Model Tops Global Ranking


Why it matters

HappyHorse-1.0, an open-source AI video-generation model, topped the Artificial Analysis Video Arena leaderboard on April 8, 2026, with Elo scores of 1333–1357 in text-to-video (no audio) and 1391–1406 in image-to-video, beating ByteDance's Seedance 2.0 by more than 60 points. It also placed second in the audio-inclusive tracks, offering native audio-video sync, 1080p output, and 38-second inference on an H100 GPU.[1][2][3][5]

The model comes from HappyHorse AI, a research collective led by Zhang Di (former Kuaishou VP and Kling AI architect) and staffed largely by alumni of Alibaba's Taotian Group Future Life Lab (ATH-AI). Multiple reports link the release to Alibaba Group, which is said to have dropped the model anonymously amid its Token Hub AI consolidation, though the team presents itself as independent and has published full weights and code on GitHub and Hugging Face.[1][3][4][6][8][13] Competitors include closed-source models such as ByteDance's Seedance 2.0, SkyReels V4, and Kling 3.0.

The release follows Alibaba's broader AI push after consolidating operations into Token Hub for streamlined development and revenue growth, building on prior work such as Kling AI. The anonymous April 8 drop propelled the model to #1 in blind user-preference tests by April 9, underscoring a shift away from closed-source dominance.[1][4][5] No API exists yet, which limits production use despite the open-source access.[2][7]

The result is newsworthy because it shows an open-source model can outperform proprietary giants in blind real-user tests (Elo rankings based on quality, motion, and prompt faithfulness). It signals an intensifying China–U.S. rivalry in AI video, potential enterprise deployment via Alibaba Cloud, and accessible innovation under commercial licensing with support for seven languages.[1][2][4][5][10]
