Qwen2.5 VL! Qwen2.5 VL! Qwen2.5 VL!
We release Qwen2.5-VL, the new flagship vision-language model of Qwen and a significant leap from the previous Qwen2-VL. To try the latest model, visit Qwen Chat and choose Qwen2.5-VL-72B-Instruct. We also open-source both base and instruct models in three sizes (3B, 7B, and 72B) on both Hugging Face and ModelScope.
The key features include:
- Understand things visually: Qwen2.5-VL is not only proficient at recognizing common objects such as flowers, birds, fish, and insects, but is also highly capable of analyzing texts, charts, icons, graphics, and layouts within images.
- Being agentic: Qwen2.5-VL acts directly as a visual agent that can reason and dynamically direct tools, enabling computer use and phone use.
- Understanding long videos and capturing events: Qwen2.5-VL can comprehend videos of over 1 hour, and it now has the new ability to capture events by pinpointing the relevant video segments.
- Capable of visual localization in different formats: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it provides stable JSON outputs for coordinates and attributes.
- Generating structured outputs: for data such as scans of invoices, forms, and tables, Qwen2.5-VL supports structured outputs of their contents, benefiting applications in finance, commerce, and beyond.
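The released checkpoints can be queried through Hugging Face `transformers`. The sketch below builds the chat-format payload and outlines one inference path; the class name `Qwen2_5_VLForConditionalGeneration` and the `qwen_vl_utils.process_vision_info` helper follow the model cards at the time of writing, so treat them as assumptions if your library versions differ.

```python
def build_messages(image_path: str, prompt: str) -> list:
    """Build the chat-format payload the Qwen2.5-VL processor expects:
    one user turn containing an image entry followed by a text entry."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": prompt},
            ],
        }
    ]


def run_demo(image_path: str, prompt: str) -> str:
    """Sketch of a full inference call (not executed here; requires a GPU
    and the model weights). Names follow the Hugging Face model card."""
    from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
    from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

    model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_id)

    messages = build_messages(image_path, prompt)
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, videos=video_inputs,
        padding=True, return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    trimmed = out[:, inputs.input_ids.shape[1]:]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]
```

The `build_messages` helper is the only part exercised below; the heavy `run_demo` path is left as a reference sketch.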
Performance
We evaluate our models against the SOTA models as well as the best models of similar size. The flagship Qwen2.5-VL-72B-Instruct achieves competitive performance on a series of benchmarks covering domains and tasks including college-level problems, math, document understanding, general question answering, video understanding, and visual agents. Notably, Qwen2.5-VL shows significant advantages in understanding documents and diagrams, and it can play the role of a visual agent without task-specific fine-tuning.
Among the smaller models, Qwen2.5-VL-7B-Instruct outperforms GPT-4o-mini on a number of tasks, and Qwen2.5-VL-3B, a solution for edge AI, even outperforms the 7B model of our previous version, Qwen2-VL.
Model Capabilities
1. World-wide Image Recognition
Qwen2.5-VL has significantly enhanced its general image recognition capabilities, expanding the recognizable categories to an ultra-large number. They include not only plants, animals, and famous landmarks such as mountains and rivers, but also intellectual properties (IPs) from film and TV series, as well as a wide variety of products.
Example: Attractions Identification
User
What are these attractions? Please give their names in Chinese and English.
Qwen2.5-VL
These attractions are:
- Top left: The Great Pyramids of Giza (埃及的吉萨大金字塔)
- Top right: The Great Wall of China (中国的长城)
- Bottom left: The Statue of Liberty (美国的自由女神像)
- Bottom right: The Terracotta Army (中国的秦始皇兵马俑)
Example: Birds Identification
User
What are these birds? Please give their names in Chinese and English.
Qwen2.5-VL
These birds are:
- Top left: 红翅黑鹂 (Red-winged Blackbird)
- Top right: 蓝翅八色鸫 (Blue-winged Pitta)
- Bottom left: 小斑鸬鹚 (Little Pied Cormorant)
- Bottom right: 金头扇尾莺 (Golden-headed Cisticola)
Example: Cars Identification
User
What are these cars? Please give their names in Chinese and English.
Qwen2.5-VL
这些车分别是:
- 左上角:劳斯莱斯(Rolls-Royce)
- 右上角:梅赛德斯-奔驰GLE(Mercedes-Benz GLE)
- 左下角:法拉利Portofino M(Ferrari Portofino M)
- 右下角:保时捷911(Porsche 911)
Example: Celebrity Recognition
User
Who are these in this picture? Please give their names in Chinese and English.
Qwen2.5-VL
这张图片中的四位名人分别是:
- 左上角:迈克尔·杰克逊(Michael Jackson)
- 右上角:迈克尔·乔丹(Michael Jordan)
- 左下角:成龙(Jackie Chan)
- 右下角:唐纳德·特朗普(Donald Trump)
Example: Foods Identification
User
What are these foods? Please give their names in Chinese and English.
Qwen2.5-VL
Sure! Here are the names of the foods in the images, both in Chinese and English:
- Top Left Image:
- Chinese: 牛仔骨 (Niú zǎi gǔ)
- English: Beef Short Ribs
- Top Right Image:
- Chinese: 蔬菜沙拉 (Shūcài shālā)
- English: Vegetable Salad
- Bottom Left Image:
- Chinese: 红烧狮子头 (Hóngshāo shīzi tóu)
- English: Braised Meatballs
- Bottom Right Image:
- Chinese: 火锅 (Huǒguō)
- English: Hot Pot (with various ingredients like meat, vegetables, and seafood)
Example: Products Identification
User
What are these products? Please give their names in Chinese and English.
Qwen2.5-VL
Sure, here are the products and their Chinese and English names:
- Top Left Image:
- English Name: Propolinse Mouthwash
- Chinese Name: 蜂胶漱口水
- Top Right Image:
- English Name: George Caroll Moisturizing and Refreshing Shampoo
- Chinese Name: 乔治卡罗尔保湿清爽洗发水
- Bottom Left Image:
- English Name: On The Body Sweet Love Perfume Water
- Chinese Name: 洋甘菊甜蜜爱香水
- Bottom Right Image:
- English Name: Tiger Balm Mosquito Repellent
- Chinese Name: 虎标驱蚊水
2. Precise Object Grounding
Qwen2.5-VL utilizes bounding boxes and point-based representations for grounding, enabling hierarchical positioning and standardized JSON output. This enhanced localization capability serves as a foundation for visual reasoning.
Example: Precise Object Grounding
User
Detect all motorcyclists in the image and return their locations in the form of coordinates. The format of output should be like {"bbox_2d": [x1, y1, x2, y2], "label": "motorcyclist", "sub_label": "wearing helmet" # or "not wearing helmet"}.
Qwen2.5-VL
[
{"bbox_2d": [341, 258, 397, 360], "label": "motorcyclist", "sub_label": "not wearing helmet"},
{"bbox_2d": [212, 332, 274, 448], "label": "motorcyclist", "sub_label": "not wearing helmet"},
{"bbox_2d": [66, 124, 112, 198], "label": "motorcyclist", "sub_label": "not wearing helmet"},
{"bbox_2d": [5, 235, 63, 320], "label": "motorcyclist", "sub_label": "wearing helmet"}
]
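Grounding output in this form is straightforward to consume downstream. A minimal sketch (assuming the model returns clean JSON; in practice you may first need to strip a surrounding Markdown code fence) that pulls out the boxes of helmetless riders:

```python
import json


def riders_without_helmets(model_output: str) -> list:
    """Parse grounding JSON and return the bounding boxes of
    motorcyclists whose sub_label marks them as not wearing a helmet."""
    detections = json.loads(model_output)
    return [
        d["bbox_2d"]
        for d in detections
        if d["label"] == "motorcyclist"
        and d["sub_label"].strip() == "not wearing helmet"
    ]


sample = '''[
  {"bbox_2d": [341, 258, 397, 360], "label": "motorcyclist", "sub_label": "not wearing helmet"},
  {"bbox_2d": [5, 235, 63, 320], "label": "motorcyclist", "sub_label": "wearing helmet"}
]'''
print(riders_without_helmets(sample))  # -> [[341, 258, 397, 360]]
```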
Example: Precise Object Grounding
User
Identify basketball players and detect the key points of their heads and hands, returning them in the form of points. The primary label is the player’s name, and the secondary labels include left hand, right hand, and head.
Qwen2.5-VL
[
{"point_2d": ["394", "105"], "label": "LeBron James", "label2": "head"},
{"point_2d": ["876", "131"], "label": "Stephen Curry", "label2": "head"},
{"point_2d": ["100", "614"], "label": "LeBron James", "label2": "right hand"},
{"point_2d": ["460", "507"], "label": "LeBron James", "label2": "left hand"},
{"point_2d": ["784", "660"], "label": "Stephen Curry", "label2": "left hand"},
{"point_2d": ["945", "507"], "label": "Stephen Curry", "label2": "right hand"}
]
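Note that point coordinates are emitted as strings. A small sketch that converts them to integers and groups the keypoints by the primary label (the player's name):

```python
import json
from collections import defaultdict


def group_keypoints(model_output: str) -> dict:
    """Convert string point coordinates to ints and group them
    by primary label, mapping secondary label -> (x, y)."""
    grouped = defaultdict(dict)
    for item in json.loads(model_output):
        x, y = (int(v) for v in item["point_2d"])
        grouped[item["label"]][item["label2"]] = (x, y)
    return dict(grouped)


sample = '''[
  {"point_2d": ["394", "105"], "label": "LeBron James", "label2": "head"},
  {"point_2d": ["876", "131"], "label": "Stephen Curry", "label2": "head"}
]'''
print(group_keypoints(sample))
# -> {'LeBron James': {'head': (394, 105)}, 'Stephen Curry': {'head': (876, 131)}}
```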
Example: Precise Object Grounding
User
Count the number of birds in the figure, including those that are only showing their heads. To ensure accuracy, first detect their key points, then give the total number.
Qwen2.5-VL
[
{"point_2d": ["38", "314"], "label": "birds"},
{"point_2d": ["126", "309"], "label": "birds"},
{"point_2d": ["195", "277"], "label": "birds"},
{"point_2d": ["243", "291"], "label": "birds"},
{"point_2d": ["285", "291"], "label": "birds"},
{"point_2d": ["345", "291"], "label": "birds"},
{"point_2d": ["400", "284"], "label": "birds"},
{"point_2d": ["465", "298"], "label": "birds"},
{"point_2d": ["525", "314"], "label": "birds"},
{"point_2d": ["644", "314"], "label": "birds"},
{"point_2d": ["695", "314"], "label": "birds"},
{"point_2d": ["762", "304"], "label": "birds"},
{"point_2d": ["834", "264"], "label": "birds"},
{"point_2d": ["248", "638"], "label": "birds"}
]
There are a total of 14 birds in the figure.
Example: Detecting before Counting
User
Please first output bbox coordinates and names of every item in this image in JSON format, and then answer how many items are there in the image.
Qwen2.5-VL
[
{"bbox_2d": [19, 3, 84, 125], "label": "ice cream"},
{"bbox_2d": [167, 0, 288, 134], "label": "flip flops"},
{"bbox_2d": [349, 0, 457, 123], "label": "beach umbrella"},
{"bbox_2d": [521, 4, 635, 123], "label": "starfish"},
{"bbox_2d": [10, 180, 118, 294], "label": "cocktail drink"},
{"bbox_2d": [150, 196, 308, 283], "label": "sunglasses"},
{"bbox_2d": [354, 192, 467, 300], "label": "lifebuoy"},
{"bbox_2d": [532, 202, 635, 290], "label": "watermelon slice"},
{"bbox_2d": [2, 379, 134, 452], "label": "straw hat"},
{"bbox_2d": [167, 335, 308, 456], "label": "palm tree"},
{"bbox_2d": [360, 359, 468, 476], "label": "sun"},
{"bbox_2d": [546, 338, 642, 452], "label": "martini glass"}
]
There are 12 items in the image.
Example: Grounding with Description
User
Locate every cake and describe their features, output the bbox coordinates in JSON format.
Qwen2.5-VL
[
{"bbox_2d": [130, 725, 304, 906], "label": "cake with white frosting and eyes"},
{"bbox_2d": [344, 641, 508, 800], "label": "cake with white frosting and eyes"},
{"bbox_2d": [289, 530, 435, 690], "label": "cake with white frosting and colorful sprinkles"},
{"bbox_2d": [40, 542, 197, 712], "label": "cake with white frosting and colorful sprinkles"},
{"bbox_2d": [64, 386, 202, 566], "label": "cake with pink frosting and red sprinkles"},
{"bbox_2d": [245, 375, 363, 529], "label": "cake with pink frosting and blue sprinkles"},
{"bbox_2d": [389, 359, 492, 494], "label": "cake with pink frosting and blue sprinkles"},
{"bbox_2d": [423, 436, 555, 587], "label": "cake with pink frosting and googly eyes"},
{"bbox_2d": [518, 348, 642, 512], "label": "cake with pink frosting and blue sprinkles"},
{"bbox_2d": [500, 549, 652, 698], "label": "cake with white frosting and googly eyes"},
{"bbox_2d": [615, 465, 757, 626], "label": "cake with white frosting and colorful sprinkles"},
{"bbox_2d": [723, 372, 849, 525], "label": "cake with chocolate frosting"},
{"bbox_2d": [782, 497, 944, 676], "label": "cake with white frosting and colorful candies"}
]
Example: Pointing to target object
User
Locate the person who acts bravely, report the bbox coordinates in JSON format.
Qwen2.5-VL
[
{"bbox_2d": [74, 58, 526, 619], "label": "person who acts bravely"}
]
3. Enhanced Text Recognition and Understanding
Qwen2.5-VL has upgraded its OCR capabilities to a new level, with enhanced multi-scenario, multi-language, and multi-orientation text recognition and text localization. It has also been significantly improved at information extraction, meeting the growing digitalization and intelligence demands in areas such as qualification review and financial business.
Example: Vertical Text Recognition
User
Read all texts in the image, output in lines.
Qwen2.5-VL
平安喜樂 八方來財 順風順水順財神 暴富 大吉 諸事皆順 福 時來運轉 發財 如意 有錢有勢有前程 好運來
Example: Multilingual Text Recognition
User
Recognise all texts in the image, output in lines.
Qwen2.5-VL
SMK أبو منير لبيع وصيانة الروديترات روديترات ماء - مكيف - دفايات COOLING CAR SYSTEM أبو منير 052-204-5334 محمد أبو سيراج 059-831-0796 056-811-8256
Example: Text Spotting
User
Spot all the text in the image at line level, and output in JSON format.
Qwen2.5-VL
[
{"bbox_2d": [108, 175, 496, 230], "text_content": "AuntieAnne's"},
{"bbox_2d": [49, 429, 252, 450], "text_content": "CINNAMON SUGAR"},
{"bbox_2d": [49, 463, 61, 483], "text_content": "1"},
{"bbox_2d": [102, 463, 113, 483], "text_content": "x"},
{"bbox_2d": [131, 463, 198, 483], "text_content": "17,000"},
{"bbox_2d": [461, 463, 534, 488], "text_content": "17,000"},
{"bbox_2d": [21, 510, 143, 531], "text_content": "SUB TOTAL"},
{"bbox_2d": [461, 516, 534, 538], "text_content": "17,000"},
{"bbox_2d": [21, 579, 176, 601], "text_content": "GRAND TOTAL"},
{"bbox_2d": [461, 585, 534, 607], "text_content": "17,000"},
{"bbox_2d": [21, 628, 126, 649], "text_content": "CASH IDR"},
{"bbox_2d": [461, 634, 534, 656], "text_content": "20,000"},
{"bbox_2d": [21, 676, 169, 698], "text_content": "CHANGE DUE"},
{"bbox_2d": [471, 682, 529, 704], "text_content": "3,000"}
]
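Line-level spotting results come back in the model's own order; downstream you often want reading order instead. A small sketch that sorts boxes top-to-bottom, then left-to-right, with a tolerance for vertical jitter within a line (the `line_tol` value is an illustrative assumption, not part of the model's output contract):

```python
import json


def reading_order(spotting_json: str, line_tol: int = 15) -> list:
    """Sort line-level text spotting results into reading order:
    quantize the top y coordinate into bands of line_tol pixels,
    then sort by (band, left x)."""
    items = json.loads(spotting_json)
    items.sort(key=lambda d: (round(d["bbox_2d"][1] / line_tol), d["bbox_2d"][0]))
    return [d["text_content"] for d in items]


sample = '''[
  {"bbox_2d": [461, 463, 534, 488], "text_content": "17,000"},
  {"bbox_2d": [49, 463, 61, 483], "text_content": "1"},
  {"bbox_2d": [49, 429, 252, 450], "text_content": "CINNAMON SUGAR"}
]'''
print(reading_order(sample))  # -> ['CINNAMON SUGAR', '1', '17,000']
```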
Example: Key Information Extraction
User
Extract the following information from the receipt: ['发票代码', '发票号码', '到站', '燃油费', '票价', '乘车日期', '开车时间', '车次', '座号'], output in JSON.
Qwen2.5-VL
{
"发票代码": "221021325353",
"发票号码": "10283819",
"到站": "开发区",
"燃油费": "2.0",
"票价": "8.00<全>",
"乘车日期": "2013-06-29",
"开车时间": "流水",
"车次": "040",
"座号": "371"
}
Example: Key Information Extraction
User
You are a delivery bill verification expert who can identify house numbers and courier label information, and verify whether the address on the courier label matches the house number. Please identify the house number from the bill address and the house number on the door separately, and then verify if the door number is delivered correctly.
Qwen2.5-VL
To verify the delivery, let’s break down the information from both the courier label and the door.
Courier Label Information:
- Address on the label: 1935 South Wabash Avenue Suite 1, Chicago, IL 60616
- House number from the address: 1935
Door Information:
- House number on the door: 1935
Verification:
- The house number on the courier label is 1935.
- The house number on the door is also 1935.
Since both the house numbers match, the delivery is verified to be correct. The package should indeed be delivered to this location.
Example: Key Information Extraction
User
本月人工费花费多少
Qwen2.5-VL
根据表格中的数据,2006年6月30日的“本月合计”行显示了当月的各项费用。其中,“人工费”列的数值为45,560元。
因此,本月的人工费花费为45,560元。
User
材料费分别用在了哪些方面
Qwen2.5-VL
根据提供的表格,材料费分别用在了以下几个方面:
- 耗用主要材料:金额为114,240元。
- 耗用结构件:金额为138,350元。
- 耗用其他材料:金额为14,160元。
- 摊销周转材料:金额为7,130元。
这些费用合计为:114,240 + 138,350 + 14,160 + 7,130 = 273,880 元
这与“本月合计”中的材料费273,880元一致。
4. Powerful Document Parsing
Qwen2.5-VL introduces a unique document parsing format, QwenVL HTML, which extracts layout information in HTML. QwenVL HTML can parse documents in a variety of scenarios, such as magazines, research papers, web pages, and even mobile screenshots.
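Because layout lands in `data-bbox` attributes, QwenVL HTML output can be post-processed with a standard HTML parser. A minimal sketch using Python's built-in `html.parser` to collect each element's box:

```python
from html.parser import HTMLParser


class BBoxCollector(HTMLParser):
    """Collect (tag, (x1, y1, x2, y2)) pairs from QwenVL HTML output."""

    def __init__(self):
        super().__init__()
        self.boxes = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-bbox" in attrs:
            # data-bbox holds four space-separated integer coordinates.
            x1, y1, x2, y2 = (int(v) for v in attrs["data-bbox"].split())
            self.boxes.append((tag, (x1, y1, x2, y2)))


sample = (
    '<html><body><h1 data-bbox="879 283 1605 348">Title</h1>'
    '<p data-bbox="434 785 2050 1252">Body text</p></body></html>'
)
parser = BBoxCollector()
parser.feed(sample)
print(parser.boxes)
# -> [('h1', (879, 283, 1605, 348)), ('p', (434, 785, 2050, 1252))]
```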
Example: QwenVL HTML
User
QwenVL HTML
Qwen2.5-VL
<html>
<body>
<h1 data-bbox="879 283 1605 348">Qwen2.5 Technical Report</h1>
<p data-bbox="1124 406 1360 456"></p>
<address class="author"><p>Qwen Team</p></address>
<div class="image" data-bbox="778 491 850 541"><img data-bbox="778 491 850 541"/></div>
<p data-bbox="885 491 1460 541">https://huggingface.co/Qwen</p>
<div class="image" data-bbox="778 541 850 589"><img data-bbox="778 541 850 589"/></div>
<p data-bbox="885 541 1708 589">https://modelscope.cn/organization/qwen</p>
<div class="image" data-bbox="792 589 850 639"><img data-bbox="792 589 850 639"/></div>
<p data-bbox="885 589 1584 639">https://github.com/QwenLM/Qwen2.5</p>
<h2 data-bbox="1143 681 1344 733">Abstract</h2>
<p data-bbox="434 785 2050 1252">In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen 2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-training datasets from the previous 7 trillion tokens to 18 trillion tokens. This provides a strong foundation for common sense, expert knowledge, and reasoning capabilities. In terms of post-training, we implement intricate supervised finetuning with over 1 million samples, as well as multistage reinforcement learning, including offline learning DPO and online learning GRPO. Post-training techniques significantly enhance human preference, and notably improve long text generation, structural data analysis, and instruction following.</p>
<p data-bbox="434 1262 2050 1587">To handle diverse and varied use cases effectively, we present Qwen2.5 LLM series in rich configurations. The open-weight offerings include base models and instruction-tuned models in sizes of \(0.5\,\mathrm{B}, 1.5\,\mathrm{B}, 3\,\mathrm{B}, 7\,\mathrm{B}, 14\,\mathrm{B}, 32\,\mathrm{B}\), and \(72\,\mathrm{B}\) parameters. Quantized versions of the instruction-tuned models are also provided. Over 100 models can be accessed from Hugging Face Hub, ModelScope, and Kaggle. In addition, for hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants: Qwen2.5-Turbo and Qwen2.5-Plus, both available from Alibaba Cloud Model Studio.</p>
<p data-bbox="434 1587 2050 2052">Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks evaluating language understanding, reasoning, mathematics, coding, human preference alignment, etc. Specifically, the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and proprietary models and demonstrates competitive performance to the state-of-the-art open-weight model, Llama-3-405B-Instruct, which is around 5 times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness while performing competitively against GPT-4o-mini and GPT-4o respectively. Additionally, as the foundation, Qwen2.5 models have been instrumental in training specialized models such as Qwen2.5-Math (Yang et al., 2024b), Qwen2.5-Coder (Hui et al., 2024), QwQ (Qwen Team, 2024d), and multimodal models.</p>
<div class="image" data-bbox="408 2275 2086 2800"><img data-bbox="408 2275 2086 2800"/></div>
<p data-bbox="289 2864 2202 3058">Figure 1: In the iterative development of the Qwen series, data scaling has played a crucial role. Qwen 2.5, which leverages 18 trillion tokens for pre-training, has demonstrated the most advanced capabilities within the Qwen series, especially in terms of domain expertise, underscoring the importance of scale together with mixture in enhancing the model’s capabilities.</p>
</body>
</html>
Example: QwenVL HTML
User
QwenVL HTML
Qwen2.5-VL
<html><body>
<div class="formula" data-bbox="920 213 1345 326"><img data-bbox="920 213 1345 326"/><div>$$h=D \tan \theta \approx D \theta=D \frac{\lambda}{2 L}$$</div></div>
<p data-bbox="247 338 1458 392">测出玻璃片长度 \(D\) 和条纹间距 \(L\), 即可求出薄片的厚度或细丝直径。</p>
<p data-bbox="308 392 1894 458">(2) 利用劈尖的等厚条纹, 可检查工件的表面是否平整和有无缺陷, 如图 9-21 所示。</p>
<div class="image" data-bbox="420 508 894 832"><img data-bbox="420 508 894 832"/></div>
<p data-bbox="384 858 934 914">图 9-20 测量细丝或薄片的厚度</p>
<p></p>
<div class="image" data-bbox="1268 468 1788 832"><img data-bbox="1268 468 1788 832"/></div>
<p data-bbox="1138 858 1914 914">图 9-21 检查工件的表面是否平整和有无缺陷</p>
<h2 data-bbox="324 946 534 1000"> 2. 牛顿环</h2>
<p data-bbox="247 1020 2016 1278">如图 9-22 所示, 在一块光学平晶上, 放置一曲率半径很大的平凸透镜, 在透镜和平晶间便形成空气膜层。以单色平行光垂直照射, 经空气膜上、下表面反射的两束光发生干涉, 于是在空气膜的上表面出现一组干涉条纹。这也是一种等厚干涉条纹, 并且是以触点为圆心的一组同心圆环, 称为牛顿环。彩色牛顿环的投影见彩色插图 6 。</p>
<div class="image" data-bbox="600 1308 1034 1914"><img data-bbox="600 1308 1034 1914"/></div>
<p data-bbox="844 1950 900 2000">(a)</p>
<p></p>
<div class="image" data-bbox="1138 1288 1658 1942"><img data-bbox="1138 1288 1658 1942"/></div>
<p data-bbox="1418 1950 1508 2000">(b)</p>
<p data-bbox="988 2016 1272 2068">图 9-22 牛顿环</p>
<p data-bbox="336 2084 916 2144">(1) 牛顿环明纹与暗纹的位置</p>
<p data-bbox="247 2148 2016 2276">考虑到入射光在膜层的上表面的反射无半波损失, 但在下表面的反射有半波损失。设 \(r\) 为牛顿环第 \(k\) 级圆环的半径, 该环对应空气膜厚度为 \(e\), 则明环条件为</p>
<div class="formula" data-bbox="778 2296 1524 2484"><img data-bbox="778 2296 1524 2484"/><div>$$\delta=2 e+\frac{\lambda}{2}=k \lambda \quad(k=1,2,3, \cdots)$$</div>暗环条件为 \(\delta=2 e+\frac{\lambda}{2}=(2 k+1) \frac{\lambda}{2} \quad(k=0,1,2, \cdots)\)</div>
<p data-bbox="336 2516 742 2576">(2) 牛顿环半径公式</p>
<p data-bbox="336 2580 584 2642">在 \(\triangle O O^{\prime} A\) 中</p>
<p></p>
<div class="formula" data-bbox="880 2572 1398 2648"><img data-bbox="880 2572 1398 2648"/><div>$$r^{2}=R^{2}-(R-e)^{2}=2 R e-e^{2}$$</div></div>
<p data-bbox="247 2648 1650 2708">因为 \(R \gg e\), 略去 2 阶小项 \(e^{2}\) 后, 得 \(e=r^{2} / 2 R\), 代入式 (9-27), 则明环条件为</p>
<div class="formula" data-bbox="798 2708 1448 2822"><img data-bbox="798 2708 1448 2822"/><div>$$\delta=\frac{r^{2}}{R}+\frac{\lambda}{2}=k \lambda \quad(k=1,2,3, \cdots)$$</div></div>
</body></html>
Example: QwenVL HTML with image caption
User
QwenVL HTML with image caption
Qwen2.5-VL
<html>
<body>
<div class="image caption" data-bbox="30 164 1225 807">
<img data-bbox="30 164 1225 807" />
<p>This is an image of a magazine cover. It's got three pictures on it, and then the title underneath that says 2030 in white letters, and then next to that it has some Chinese writing. The picture on the left shows wind turbines up on top of a hillside. There's several of them. They're all white with blue bases. You can see mountains off in the distance behind this hillside. Then there's another picture right beside that one showing water coming out from a dam. And then the third picture shows solar panels lined up in rows as far as you can see.</p>
</div>
<p data-bbox="95 871 1120 925">这是科学家描绘的美好图景:到 2030 年,地球上消耗的所有能源,都将是清洁的可再生能源。要进入这个由风、水、太阳所主导的新能源时代,我们必须克服怎样的困难?</p>
<h2 data-bbox="89 953 451 995">24 可再生能源政策简史</h2>
<p data-bbox="134 1081 458 1116">产业圆桌 Industry Round Table</p>
<p data-bbox="89 1121 444 1163">26 2030 我们开什么车</p>
<p data-bbox="683 1078 1064 1121">44 完美主义 硬币的两面</p>
<p data-bbox="134 1280 486 1315">信息技术 Information Technology</p>
<p data-bbox="89 1320 384 1362">30 电视向网络投降</p>
<p data-bbox="683 1280 1064 1325">50 社会身份 健康晴雨表</p>
</body>
</html>
Example: Parsing Screenshot
User
图片解析成html (Parse the image into HTML)
Qwen2.5-VL
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>淘宝许愿淘彩头分25亿</title>
<style>
body {
background-color: #4a0d6e;
margin: 0;
padding: 0;
font-family: Arial, sans-serif;
color: white;
text-align: center;
}
.header {
display: flex;
justify-content: space-between;
align-items: center;
padding: 10px;
}
.header h1 {
margin: 0;
color: gold;
}
.content {
padding: 20px;
}
.wish-options {
display: flex;
justify-content: center;
gap: 10px;
margin-top: 20px;
}
.wish-option {
width: 100px;
height: 200px;
border-radius: 10px;
padding: 10px;
box-sizing: border-box;
cursor: pointer;
}
.selected {
background-color: gold;
}
.unselected {
background-color: purple;
}
.footer {
margin-top: 20px;
}
.button {
background-color: gold;
border: none;
padding: 10px 20px;
border-radius: 5px;
cursor: pointer;
}
</style>
</head>
<body>
<div class="header">
<div><img src="back_arrow.png" alt="Back"></div>
<h1>淘宝·许愿淘彩头分25亿</h1>
<div><img src="more_options.png" alt="More Options"></div>
</div>
<div class="content">
<p>许下新年心愿得红包</p>
<img src="golden_lantern.png" alt="Golden Lantern">
<div class="wish-options">
<div class="wish-option selected">福寿康宁</div>
<div class="wish-option unselected">我想免费周游世界</div>
<div class="wish-option unselected">追随自己的热情</div>
</div>
<div class="footer">
<button class="button">换一批 | 定制心愿</button>
<p>许下你的定制心愿,更灵验哦</p>
<button class="button">许愿拿红包 🎉</button>
</div>
</div>
</body>
</html>
5. Enhanced Video Comprehension Ability
Qwen2.5-VL’s video comprehension capabilities have been comprehensively upgraded. For temporal processing, we have introduced dynamic frame rate (FPS) training and absolute time encoding. As a result, the model not only understands hour-long videos but also localizes events with second-level precision: it can accurately comprehend content from videos spanning hours, search for specific events within them, and summarize key points from different time segments, allowing users to quickly and efficiently extract the crucial information embedded in videos.
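As a rough illustration of what dynamic-FPS sampling with absolute time encoding means (a sketch of our own, not the model's actual preprocessing code), frames can be tagged with their wall-clock timestamps, so the same clip sampled at different rates still spans the same time range:

```python
def sample_frame_times(duration_s: float, fps: float) -> list[float]:
    """Return absolute timestamps (seconds) of frames sampled at `fps`.

    With absolute time encoding, these timestamps (not frame indices)
    anchor each frame's temporal position, so second-level grounding
    stays consistent whatever sampling rate was used.
    """
    return [round(i / fps, 3) for i in range(int(duration_s * fps))]

# The same 2-second clip at two sampling rates: different frame counts,
# identical temporal span.
print(sample_frame_times(2.0, 2.0))  # [0.0, 0.5, 1.0, 1.5]
print(sample_frame_times(2.0, 4.0))
```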
Example: Information Extraction from Videos
User
Watch the video and list the paper titles in a table.
Qwen2.5-VL
Here is a table listing the paper titles from the video:
| Paper Title |
| --- |
| A New Sequential Prediction Framework with Spatial-temporal Embedding |
| NeW CRFs: Neural Window Fully-connected CRFs for Monocular Depth Estimation |
| Deep Unified Representation for Heterogeneous Recommendation |
| OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework |
| KoMen: Domain Knowledge Guided Interaction Recommendation for Emerging Scenarios |
| Vision-Language Pre-Training for Boosting Scene Text Detectors |
Example: Video Reasoning
User
Qwen2.5-VL
图中的物体是一个中国传统的舞狮道具,具体来说是一个小型的、可能是用于装饰或儿童玩具的舞狮头。这个舞狮头主要是红色的,象征着喜庆和好运,在中国文化中,红色被认为是一种能够驱邪避凶的颜色。它的设计非常生动,具有夸张的表情和张开的大嘴,露出白色的牙齿和红色的舌头,这些都是为了在表演时能够吸引观众的注意力,并传达出一种威猛和活力的感觉。
这个舞狮头通过几根细线悬挂起来,表明它可能是一个可以操控的玩偶或者装饰品。在实际的舞狮表演中,通常会有两个人穿着这样的狮子服装进行舞蹈,一个人控制头部,另一个人控制身体和尾巴部分,通过他们的协调动作来模拟狮子的各种姿态和行为。
背景中可以看到一些家居装饰品,如花瓶和植物,这表明这个场景可能是在一个家庭环境中拍摄的。整体来看,这个舞狮头不仅是一件具有文化意义的物品,也是一个充满趣味和艺术价值的装饰品。
Example: Long Video Understanding
User
Could you provide a comprehensive overview of the competition’s progress?
Qwen2.5-VL
The video captures an intense and highly competitive tennis match between Venus Williams and Serena Williams, two renowned professional tennis players, during the Australian Open in Melbourne. The match is set on a vibrant blue hard court, with the iconic “Melbourne” branding prominently displayed. The atmosphere is electric, with a packed stadium filled with enthusiastic spectators, many of whom are dressed in blue, adding to the visual spectacle.
Match Progress:
- Initial Stages:
- The match begins with both players demonstrating their powerful serves and precise returns. The scoreboard shows the game progressing closely, with both sisters showing their competitive spirit.
- Venus Williams starts strong, securing a few early points with her aggressive play and strategic shots. Her focus and determination are evident as she moves swiftly across the court.
- Mid-Game Dynamics:
- As the match progresses, Serena Williams begins to assert her dominance. She showcases her signature power and agility, hitting deep groundstrokes and executing well-placed volleys.
- The rally between the two becomes increasingly intense, with extended exchanges that highlight their exceptional athleticism and skill. The crowd erupts in cheers, responding to the high-quality tennis on display.
- Key Moments:
- A notable moment occurs when Venus Williams hits a powerful serve, but Serena responds with a stunning return, sending the ball back with incredible speed and precision. This exchange leads to a prolonged rally, captivating the audience.
- Another significant point sees Serena Williams executing a drop shot, catching Venus off guard and winning the point. The crowd’s reaction underscores the brilliance of the shot.
- Later Stages:
- The match continues with both players maintaining their high level of performance. The scoreboard reflects a close contest, with neither player gaining a substantial lead.
- Venus Williams demonstrates her resilience, fighting back from behind and keeping the pressure on Serena. Her ability to adapt and change tactics keeps the match unpredictable.
- Final Stages:
- As the match nears its conclusion, the tension builds. Both players are visibly fatigued but remain committed to giving their best effort.
- In a crucial moment, Serena Williams secures a break point and capitalizes on it, taking the lead. Her composure and experience shine through as she manages to maintain this advantage.
- The final points are fiercely contested, with both players pushing themselves to the limit. Ultimately, Serena Williams prevails, clinching the match with a decisive shot.
Audience and Atmosphere:
- The crowd plays a significant role throughout the match, providing constant support and encouragement. Their reactions range from excited cheers to moments of suspenseful silence, reflecting the intensity of the game.
- The presence of prominent sponsors such as KIA, ANZ, and Emirates adds to the professional setting, emphasizing the importance of the event.
Conclusion:
The match between Venus Williams and Serena Williams at the Australian Open is a testament to their exceptional talent and rivalry. It showcases not only their individual skills but also the deep connection and mutual respect they share as siblings. The match concludes with Serena Williams emerging victorious, but the true winner is the sport of tennis, which benefits from such high-caliber performances. The audience leaves the stadium with lasting memories of an unforgettable encounter.
Example: Temporal Video Grounding
User
Given the query ’the user is experiencing the image generation feature’, when does the described content occur in the video? Use seconds for the time format.
Qwen2.5-VL
The described content occurs from 28 seconds to 50 seconds in the video. During this segment, the user interacts with the image generation feature, requesting and receiving an artistic double scene painting of a mountain during day and night. The user then adds a bird to the generated image, demonstrating the functionality of the image generation tool.
Example: Structured Video Captioning
User
Localize a series of activity events in the video, output the start and end timestamp for each event, and describe each event with sentences. Provide the result in json format with ‘mm:ss.ff’ format for time depiction.
Qwen2.5-VL
[
{
"start_time": "00:21.00",
"end_time": "00:49.00",
"description": "A person removes a piece of meat from its packaging and cuts off the fat."
},
{
"start_time": "00:50.00",
"end_time": "01:27.00",
"description": "The person seasons the meat with salt and pepper on both sides."
},
{
"start_time": "01:28.00",
"end_time": "01:36.00",
"description": "The seasoned meat is placed on a grill."
},
{
"start_time": "01:37.00",
"end_time": "02:08.00",
"description": "Oil is poured into a pot followed by chopped onions, carrots, and bell peppers which are then stirred."
},
{
"start_time": "02:09.00",
"end_time": "02:26.00",
"description": "A rack is placed inside the pot and the meat is set on top."
},
{
"start_time": "02:27.00",
"end_time": "02:34.00",
"description": "Beer is poured into the pot."
},
{
"start_time": "02:35.00",
"end_time": "02:43.00",
"description": "The lid of the pot is covered with charcoal briquettes."
},
{
"start_time": "03:14.00",
"end_time": "04:26.00",
"description": "The meat is sliced into pieces."
}
]
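Timestamps in the ‘mm:ss.ff’ format above are easy to convert back into seconds for downstream use such as clipping; a minimal helper (ours, not part of the model output):

```python
def mmssff_to_seconds(ts: str) -> float:
    """Convert an 'mm:ss.ff' timestamp, as requested in the prompt
    above, into seconds."""
    minutes, seconds = ts.split(":")
    return int(minutes) * 60 + float(seconds)

print(mmssff_to_seconds("02:09.00"))  # 129.0
```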
6. Superior Computer and Mobile Agent
Example: Mobile Use
Please help me book a one-way ticket in the bookings app. The starting point is Chongqing Jiangbei Airport and the destination is Beijing Capital Airport, on January 28th.
Example: Computer Use
帮我找到 Manchester, GB 地区这个月的天气 (Find the monthly forecast for Manchester, GB for this month.)
Example: Computer Use
您能帮我增强我照片的色彩鲜艳度吗? (Could you assist me in enhancing the color vibrancy of my photo?)
Example: Computer Use
帮我下载在 “/home/user/eval.vsix” 位置的VS Code插件 (Please help me install an extension in VS Code from a local VSIX file “/home/user/eval.vsix”)
Model Updates
Compared to Qwen2-VL, Qwen2.5-VL enhances the model’s perception of temporal and spatial scales and further simplifies the network structure to improve efficiency.
- Perception of Time and Image Size
In the spatial dimension, Qwen2.5-VL not only dynamically converts images of different sizes into tokens of varying lengths but also represents coordinates such as detection boxes and points directly on the image’s actual size scale, without performing traditional coordinate normalization. This allows the model to learn the scale of images directly. In the temporal dimension, dynamic FPS (Frames Per Second) training and absolute time encoding have been introduced, aligning mRoPE ids directly with the flow of time so that the model learns the pace of time from the intervals between temporal ids.
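To make the “no coordinate normalization” point concrete (a sketch under our own conventions; the helper below is illustrative, not Qwen2.5-VL API code): a traditional pipeline would rescale a box into a fixed range such as [0, 1000), whereas Qwen2.5-VL emits pixel coordinates on the image’s real scale, like the data-bbox values in the HTML examples above.

```python
def normalize_box(box, width, height, scale=1000):
    """The traditional scheme: rescale pixel coordinates into [0, scale).
    Qwen2.5-VL skips this step and outputs pixel values directly."""
    x1, y1, x2, y2 = box
    return [round(x1 / width * scale), round(y1 / height * scale),
            round(x2 / width * scale), round(y2 / height * scale)]

pixel_box = [30, 164, 1225, 807]  # absolute coordinates, as the model emits
print(normalize_box(pixel_box, width=1280, height=960))
```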
- More Concise and Efficient Visual Encoder
The visual encoder plays a crucial role in multimodal large models. We trained a native dynamic-resolution ViT from scratch, through stages of CLIP pre-training, vision-language alignment, and end-to-end training. To address the load imbalance the ViT causes during training and inference of multimodal large models, we introduced Window Attention, which effectively reduces the computational load on the ViT side. In our ViT setup, only four layers use Full Attention while the rest use Window Attention, with a maximum window size of 8x8; regions smaller than 8x8 are not padded but kept at their original scale, ensuring the model operates at native resolution. Additionally, to simplify the overall network structure, we aligned the ViT architecture with LLMs by adopting RMSNorm and SwiGLU.
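The interleaving of the two attention types can be sketched as follows (the block depth and the indices of the Full Attention layers are assumptions for illustration, not the released configuration):

```python
def attention_schedule(depth=32, full_attn_blocks=(7, 15, 23, 31)):
    """Label each ViT block: a handful attend globally (Full Attention),
    the rest attend only within 8x8 windows."""
    return ["full" if i in full_attn_blocks else "window"
            for i in range(depth)]

schedule = attention_schedule()
print(schedule.count("full"), schedule.count("window"))  # 4 28
```

Keeping global attention in only a few blocks caps the quadratic cost at a handful of layers while windowed blocks scale roughly linearly with the number of image tokens.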
What’s Next
In the near future, we will further enhance the model’s problem-solving and reasoning capabilities, while incorporating more modalities. This will make the model smarter and move us towards an integrated omni-model that can handle multiple types of input and tasks.