Cambria fully supports DeepSeek-V4; the adaptation code has been open-sourced, driving up domestic chip makers' share prices.

According to Beating monitoring, Hanhua Ji announced that on the day of the V4 release it had completed the adaptation of two models, the 285B DeepSeek-V4-Flash and the 1.6T DeepSeek-V4-Pro, based on the vLLM inference framework, and that the adaptation code has been open-sourced on GitHub.

The adaptation speed rests on two premises. First, Hanhua Ji's self-developed NeuWare software stack natively supports mainstream frameworks such as PyTorch and vLLM, enabling fast model migration. Second, Hanhua Ji's chips natively support mainstream low-precision data formats, allowing accuracy validation without additional format conversion. For V4's new architecture, Hanhua Ji developed a proprietary fusion operator library, Torch-MLU-Ops, which provides specialized acceleration for modules such as Compressor and mHC, and wrote sparse/compressed Attention, GroupGemm, and other hot operator kernels in BangC.
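To illustrate the kind of accuracy validation that native low-precision support makes possible, here is a minimal NumPy sketch, not Hanhua Ji's actual code: the `quantize_int8` helper and the 5% tolerance are illustrative assumptions. It quantizes a weight matrix to INT8 and checks that a matmul through the quantized path stays close to the FP32 reference.

```python
import numpy as np

def quantize_int8(x, axis=-1):
    """Symmetric per-row INT8 quantization: returns int8 values and FP scales."""
    scale = np.max(np.abs(x), axis=axis, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero rows
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # weight matrix
x = rng.standard_normal((8, 64)).astype(np.float32)   # activation batch

qw, sw = quantize_int8(w)
ref = x @ w.T                        # FP32 reference path
approx = x @ dequantize(qw, sw).T    # low-precision path

# Accuracy validation: max error relative to the reference's magnitude.
rel_err = np.abs(approx - ref).max() / np.abs(ref).max()
assert rel_err < 0.05
```

A real validation pass would run the same comparison layer by layer against the FP32 model on the target chip, but the structure is the same: quantize, recompute, bound the error.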

At the inference framework level, Hanhua Ji supports five-dimensional mixed parallelism (TP/PP/SP/DP/EP), communication-computation overlap, low-precision quantization, and PD-separated (prefill/decode disaggregated) deployment in vLLM. Notably, the V4 technical report mentions validation only on NVIDIA GPUs and Huawei Ascend NPUs, without mentioning the Hanhua Ji platform; this adaptation was carried out by Hanhua Ji on its own initiative. Buoyed by the V4 release, the A-share domestic chip sector strengthened, and Hanhua Ji's stock surged sharply during the trading session.
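The five parallelism dimensions compose into a concrete chip count. The sketch below shows one plausible way a deployment plan might be sanity-checked; the `ParallelPlan` class and its validation rules are illustrative assumptions (for instance, that SP reuses the TP group and so adds no extra ranks), not Hanhua Ji's or vLLM's actual configuration logic.

```python
from dataclasses import dataclass

@dataclass
class ParallelPlan:
    tp: int = 1  # tensor parallel: shard individual weight matrices
    pp: int = 1  # pipeline parallel: split layers into stages
    sp: int = 1  # sequence parallel: assumed to reuse the TP group
    dp: int = 1  # data parallel: replicate the whole model
    ep: int = 1  # expert parallel: spread MoE experts across ranks

    def world_size(self) -> int:
        # Under the assumption that SP shares the TP group, only
        # tp * pp * dp independent ranks (chips) are required.
        return self.tp * self.pp * self.dp

    def validate(self, num_experts: int, num_layers: int) -> None:
        assert self.sp in (1, self.tp), "SP must match the TP group size"
        assert num_experts % self.ep == 0, "experts must split evenly over EP"
        assert num_layers % self.pp == 0, "layers must form equal PP stages"

plan = ParallelPlan(tp=8, pp=4, sp=8, dp=2, ep=4)
plan.validate(num_experts=64, num_layers=60)
print(plan.world_size())  # 64 chips for this plan
```

Checks like these matter because an invalid combination (e.g. a layer count not divisible by the pipeline depth) only fails at launch time on real hardware, where debugging is far more expensive.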
