Hy3 preview is a 295 billion parameter Mixture-of-Experts model with only 21 billion active parameters, making it cheaper to run than most rivals of similar capability.
On SWE-bench Verified—a coding benchmark testing real GitHub bug fixes—it jumped from 53% (Hy2) to 74.4%, a roughly 40% relative improvement over the previous generation.
The model is already live across Tencent’s app ecosystem including Yuanbao, QQ, and Tencent Docs, with API access on Tencent Cloud starting at roughly $0.18 per million input tokens.
Tencent quietly dropped its most capable AI model yet on Thursday, and the benchmark numbers are hard to ignore. Hy3 preview, the company’s first model after a full infrastructure rebuild, went open-source today across GitHub, Hugging Face, and ModelScope.
It’s also available on Tencent Cloud’s official website, under a paid plan.
Hy3 packs 295 billion total parameters (a measurement of a model’s potential breadth of knowledge) but only 21 billion active at any given time. That’s the beauty of a Mixture-of-Experts architecture—the model routes each query to a specialized subset of its “expert” sub-networks instead of running everything at once. Less compute, lower cost, roughly similar output quality. It also supports up to 256,000 tokens of context, which is enough to swallow a full-length novel in a single prompt.
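To make the routing idea concrete, here is a minimal sketch of top-k Mixture-of-Experts dispatch. This is an illustrative toy, not Tencent's implementation: the gate, expert count, and dimensions are all made up, and real MoE layers sit inside transformer blocks with learned parameters.

```python
import numpy as np

def moe_forward(x, experts, gate, k=2):
    """Route input x to the top-k experts by gate score and combine
    their outputs, weighted by softmax-normalized scores."""
    scores = gate @ x                          # one raw score per expert
    top_k = np.argsort(scores)[-k:]            # indices of the k highest-scoring experts
    weights = np.exp(scores[top_k])
    weights /= weights.sum()                   # normalize over the chosen experts only
    # Only k experts execute; the rest stay idle — that is the compute saving.
    return sum(w * experts[i](x) for w, i in zip(weights, top_k))

# Toy setup: 8 "experts" (simple linear maps), only 2 active per token,
# mirroring the sparse-activation pattern (21B active of 295B total) at miniature scale.
rng = np.random.default_rng(0)
dim, n_experts = 4, 8
expert_mats = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
experts = [lambda v, M=M: M @ v for M in expert_mats]
gate = rng.standard_normal((n_experts, dim))

out = moe_forward(rng.standard_normal(dim), experts, gate, k=2)
```

The key property: per-token compute scales with k (the active experts), not with the total expert count, which is why a 295B-parameter model can run at the cost of a 21B dense one.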
The model was built to balance three things Tencent says it stopped sacrificing for each other: capability breadth, honest evaluation, and cost-efficiency. Its previous flagship, Hy2, had over 400 billion parameters. Tencent explicitly walked that back, arguing 295 billion is the sweet spot where reasoning capability matures but additional parameters stop paying for themselves.
Smaller doesn’t mean worse, either: well-trained models with fewer parameters frequently outperform larger generalist ones.
On coding, the improvement is dramatic. SWE-bench Verified is a benchmark that tests whether a model can actually fix real bugs from GitHub repositories—not toy problems, but production code. Hy2 scored 53.0%. Hy3 preview scores 74.4%. That’s a roughly 40% relative jump in one generation, landing it in range of Claude Opus 4.6 (80.8%) and above GLM-5 (77.8%) and Kimi-K2.5 (76.8%). Terminal-Bench 2.0, which measures autonomous task execution in a real command-line environment, went from 23.2% to 54.4%—also a massive leap.
The model is also an interesting choice for people building with agents. Agentic workflows juggle long instruction sets, memories, skills, and tool calls, and a model that drops any one of those details can derail a workflow or produce poor results. That reliability is why agentic capability is becoming a priority for AI developers as agents turn into the industry’s most hyped area—and why the model was immediately made available on Openclaw.
Search and browsing agents—where models must retrieve, filter, and synthesize information from the open web without human guidance—also improved sharply. On BrowseComp, a benchmark tracking complex web research tasks, Hy3 preview reached 67.1% (up from Hy2’s 28.7%). On WideSearch, it hit 70.2%, outperforming GLM-5 and Kimi-K2.5 but trailing Claude Opus 4.6’s 77.2%.
In reasoning, the model topped every Chinese competitor on Tsinghua University’s math PhD qualifying exam (Spring 2026), scoring 88.4 averaged over three runs (avg@3). That’s a real-world exam, not a curated dataset—the kind of evaluation Tencent says it’s prioritizing to avoid benchmark gaming. The model also scored 87.8 on CHSBO 2025 (China’s national high school biology olympiad), highest among Chinese models in that category.
Hy3 preview started training in late January 2026 and launched Thursday—under three months from cold start to open-source release. Unusually fast for a frontier-class model. Tencent attributes it to a February infrastructure overhaul led by Yao Shunyu, its chief AI scientist, who pushed a full rebuild of the pretraining and reinforcement learning stack.
It’s a markedly different posture from Chinese AI labs a year ago, when DeepSeek’s R1 shocked the industry with its cost-efficiency.
Hy3 still trails OpenAI and Google DeepMind’s flagships, but by the size-to-performance ratio, Hy3 preview is hard to dismiss: the agent benchmark composite shows it in the “optimal zone” with ~295 billion parameters, ahead of DeepSeek-V3.2 (600 billion+) and matching Kimi-K2.5 (over 1 trillion parameters) at a fraction of the compute cost.
Hunyuan models have already been deployed across Yuanbao, CodeBuddy, WorkBuddy, QQ, and Tencent Docs. On CodeBuddy and WorkBuddy, first-token latency dropped 54%, end-to-end generation time fell 47%, and the model successfully ran agent workflows as long as 495 steps. Tencent Cloud is offering API access at approximately $0.18 per million input tokens and $0.59 per million output tokens, with personal Token Plan packages starting at around $4.10 per month.
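As a back-of-envelope illustration of those rates, the per-request cost can be estimated from the quoted per-million-token prices. The function below is hypothetical (not a Tencent SDK call); the rates come from the figures above and are approximate.

```python
def api_cost_usd(input_tokens, output_tokens, in_rate=0.18, out_rate=0.59):
    """Estimate request cost from per-million-token rates:
    ~$0.18 per 1M input tokens, ~$0.59 per 1M output tokens."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A long-context request: a 200k-token prompt with a 4k-token reply.
cost = api_cost_usd(200_000, 4_000)
print(f"${cost:.4f}")  # roughly $0.0384
```

Even a prompt near the 256k context ceiling comes in under a nickel at these rates, which is the pricing story Tencent is leaning on.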