MNN Support for the InternVL Multimodal Large Model

1. Background

InternVL (https://modelscope.cn/models/OpenGVLab/InternVL2_5-1B) is a multimodal model that combines vision and language capabilities and is suited to tasks such as image understanding and visual question answering; it is also lighter-weight than Qwen-VL. To run InternVL efficiently on the MNN (Mobile Neural Network) inference framework, we adapted and optimized it. The following describes the technical details of supporting InternVL in MNN LLM.

2. Language Model Support

Model download is skipped here. For multimodal models, it is easier to debug if you first convert and verify the language-model part, and only then add the multimodal parts (images, audio, etc.).

2.1 Loading and Printing

Load the model following the official InternVL code and print it:

import torch
from transformers import AutoModel

path = 'OpenGVLab/InternVL2_5-1B'  # or a local checkpoint directory
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True).eval()
print(model)

This prints the following structure:

InternVLChatConfig {
  "_attn_implementation_autoset": true,
  "_commit_hash": null,
  "architectures": [
    "InternVLChatModel"
  ],
  "auto_map": {
    "AutoConfig": "configuration_internvl_chat.InternVLChatConfig",
    "AutoModel": "modeling_internvl_chat.InternVLChatModel",
    "AutoModelForCausalLM": "modeling_internvl_chat.InternVLChatModel"
  },
  "downsample_ratio": 0.5,
  "dynamic_image_size": true,
  "force_image_size": 448,
  "llm_config": {
    "_attn_implementation_autoset": true,
    "_name_or_path": "Qwen/Qwen2.5-0.5B-Instruct",
    "add_cross_attention": false,
    "architectures": [
      "Qwen2ForCausalLM"
    ],
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": 151643,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": 151645,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "silu",
    "hidden_size": 896,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_range": 0.02,
    "intermediate_size": 4864,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 32768,
    "max_window_layers": 21,
    "min_length": 0,
    "model_type": "qwen2",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 14,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 24,
    "num_key_value_heads": 2,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "rms_norm_eps": 1e-06,
    "rope_scaling": null,
    "rope_theta": 1000000.0,
    "sep_token_id": null,
    "sliding_window": 32768,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": false,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": "bfloat16",
    "torchscript": false,
    "transformers_version": "4.51.3",
    "typical_p": 1.0,
    "use_bfloat16": true,
    "use_cache": true,
    "use_sliding_window": false,
    "vocab_size": 151674
  },
  "max_dynamic_patch": 12,
  "min_dynamic_patch": 1,
  "model_type": "internvl_chat",
  "ps_version": "v2",
  "select_layer": -1,
  "template": "internvl2_5",
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": null,
  "use_backbone_lora": 0,
  "use_llm_lora": 0,
  "use_thumbnail": true,
  "vision_config": {
    "_attn_implementation_autoset": true,
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": [
      "InternVisionModel"
    ],
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "drop_path_rate": 0.0,
    "dropout": 0.0,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "gelu",
    "hidden_size": 1024,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "image_size": 448,
    "initializer_factor": 1.0,
    "initializer_range": 0.02,
    "intermediate_size": 4096,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-06,
    "length_penalty": 1.0,
    "max_length": 20,
    "min_length": 0,
    "model_type": "intern_vit_6b",
    "no_repeat_ngram_size": 0,
    "norm_type": "layer_norm",
    "num_attention_heads": 16,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_channels": 3,
    "num_hidden_layers": 24,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "patch_size": 14,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "qk_normalization": false,
    "qkv_bias": true,
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": "bfloat16",
    "torchscript": false,
    "transformers_version": "4.51.3",
    "typical_p": 1.0,
    "use_bfloat16": true,
    "use_flash_attn": false
  }
}
 internvl_chat {'config': {'hidden_size': 'hidden_size', 'head_dim': 'head_dim', 'num_attention_heads': 'num_attention_heads', 'num_hidden_layers': 'num_hidden_layers', 'num_key_value_heads': 'num_key_value_heads', 'rope_theta': 'rope_theta'}, 'model': {'lm_': 'lm_head', 'embed_': 'model.embed_tokens', 'blocks_': 'model.layers', 'final_layernorm_': 'model.norm', 'visual': 'visual'}, 'decoder': {'self_attn': 'self_attn', 'mlp': 'mlp', 'input_layernorm': 'input_layernorm', 'post_attention_layernorm': 'post_attention_layernorm'}, 'attention': {'q_proj': 'q_proj', 'k_proj': 'k_proj', 'v_proj': 'v_proj', 'o_proj': 'o_proj'}} InternVLChatModel(
  (vision_model): InternVisionModel(
    (embeddings): InternVisionEmbeddings(
      (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14))
    )
    (encoder): InternVisionEncoder(
      (layers): ModuleList(
        (0-23): 24 x InternVisionEncoderLayer(
          (attn): InternAttention(
            (qkv): Linear(in_features=1024, out_features=3072, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (mlp): InternMLP(
            (act): GELUActivation()
            (fc1): Linear(in_features=1024, out_features=4096, bias=True)
            (fc2): Linear(in_features=4096, out_features=1024, bias=True)
          )
          (norm1): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
          (norm2): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
          (drop_path1): Identity()
          (drop_path2): Identity()
        )
      )
    )
  )
  (language_model): Qwen2ForCausalLM(
    (model): Qwen2Model(
      (embed_tokens): Embedding(151674, 896)
      (layers): ModuleList(
        (0-23): 24 x Qwen2DecoderLayer(
          (self_attn): Qwen2Attention(
            (q_proj): Linear(in_features=896, out_features=896, bias=True)
            (k_proj): Linear(in_features=896, out_features=128, bias=True)
            (v_proj): Linear(in_features=896, out_features=128, bias=True)
            (o_proj): Linear(in_features=896, out_features=896, bias=False)
          )
          (mlp): Qwen2MLP(
            (gate_proj): Linear(in_features=896, out_features=4864, bias=False)
            (up_proj): Linear(in_features=896, out_features=4864, bias=False)
            (down_proj): Linear(in_features=4864, out_features=896, bias=False)
            (act_fn): SiLU()
          )
          (input_layernorm): Qwen2RMSNorm((896,), eps=1e-06)
          (post_attention_layernorm): Qwen2RMSNorm((896,), eps=1e-06)
        )
      )
      (norm): Qwen2RMSNorm((896,), eps=1e-06)
      (rotary_emb): Qwen2RotaryEmbedding()
    )
    (lm_head): Linear(in_features=896, out_features=151674, bias=False)
  )
  (mlp1): Sequential(
    (0): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
    (1): Linear(in_features=4096, out_features=896, bias=True)
    (2): GELU(approximate='none')
    (3): Linear(in_features=896, out_features=896, bias=True)
  )
  (model): Qwen2ForCausalLM(
    (model): Qwen2Model(
      (embed_tokens): Embedding(151674, 896)
      (layers): ModuleList(
        (0-23): 24 x Qwen2DecoderLayer(
          (self_attn): Qwen2Attention(
            (q_proj): Linear(in_features=896, out_features=896, bias=True)
            (k_proj): Linear(in_features=896, out_features=128, bias=True)
            (v_proj): Linear(in_features=896, out_features=128, bias=True)
            (o_proj): Linear(in_features=896, out_features=896, bias=False)
          )
          (mlp): Qwen2MLP(
            (gate_proj): Linear(in_features=896, out_features=4864, bias=False)
            (up_proj): Linear(in_features=896, out_features=4864, bias=False)
            (down_proj): Linear(in_features=4864, out_features=896, bias=False)
            (act_fn): SiLU()
          )
          (input_layernorm): Qwen2RMSNorm((896,), eps=1e-06)
          (post_attention_layernorm): Qwen2RMSNorm((896,), eps=1e-06)
        )
      )
      (norm): Qwen2RMSNorm((896,), eps=1e-06)
      (rotary_emb): Qwen2RotaryEmbedding()
    )
    (lm_head): Linear(in_features=896, out_features=151674, bias=False)
  )
)

2.2 Renaming (Mapper)

Based on the structure above, register the InternVL mapping in transformers/llm/export/utils/model_mapper.py:

def regist_intervl(self):
    intervl_map = {
        'config': {
            'hidden_size': 'llm_config.hidden_size',
            'num_attention_heads': 'llm_config.num_attention_heads',
            'num_hidden_layers': 'llm_config.num_hidden_layers',
            'rope_theta': 'llm_config.rope_theta',
            'head_dim': 'llm_config.head_dim',
            'num_key_value_heads': 'llm_config.num_key_value_heads',
        },
        'model': {
            'lm_': 'language_model.lm_head',
            'embed_': 'language_model.model.embed_tokens',
            'blocks_': 'language_model.model.layers',
            'final_layernorm_': 'language_model.model.norm',
            # 'visual': 'vision_model'
        },
        'decoder': {
            'self_attn': 'self_attn',
            'mlp': 'mlp',
            'input_layernorm': 'input_layernorm',
            'post_attention_layernorm': 'post_attention_layernorm'
        },
        'attention': {
            'q_proj': 'q_proj',
            'k_proj': 'k_proj',
            'v_proj': 'v_proj',
            'o_proj': 'o_proj'
        }
    }
    self.regist('internvl_chat', intervl_map)

Note: do not register the vision model yet; get the LLM part working first.

2.3 Prompt Template

Although InternVL uses the Qwen language-model architecture, its model type is internvl_chat, so it is not automatically recognized as a Qwen model. The build_prompt_template function in llmexport.py needs an extra branch:

if self.model_type == 'internvl_chat':
    if 'Qwen' in self.config.llm_config._name_or_path:
        # reuse the Qwen prompt template
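
For reference, the Qwen-style chat prompt that this branch reuses looks roughly as follows; this is a sketch, and the exact template string and system prompt used by llmexport may differ:

# Hedged sketch of a Qwen2-style chat prompt; not the exact llmexport template.
def build_qwen_prompt(system, query):
    return (f'<|im_start|>system\n{system}<|im_end|>\n'
            f'<|im_start|>user\n{query}<|im_end|>\n'
            f'<|im_start|>assistant\n')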

After this step, the language-model part can be exported with llmexport and used for text-only chat.

3. Vision Support

3.1 Renaming

Building on section 2.2, add the vision model to the mapping:

        'model': {
            'lm_': 'language_model.lm_head',
            'embed_': 'language_model.model.embed_tokens',
            'blocks_': 'language_model.model.layers',
            'final_layernorm_': 'language_model.model.norm',
            'visual': 'vision_model'
        },

3.2 Vision Model Export

The main changes are in llm/export/utils/vision.py.

For the internvl_chat type, add an InternVLVision class and move the image-processing code over from modeling_internvl_chat.py; a hedged skeleton follows the list below:

  • Implement __init__, copying the modules needed for image feature extraction from the original code
  • Implement the init_config / load functions to fill in the image-preprocessing configuration
  • Implement forward so the model can be traced and the image embeddings computed
  • Implement export to export the model to ONNX
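
A minimal sketch of such a class, assuming the module names printed in section 2.1 (vision_model, mlp1) and following the extract_feature logic of the official modeling_internvl_chat.py; the real base-class interface of vision.py is simplified away here:

import torch

class InternVLVision(torch.nn.Module):
    def __init__(self, model, config):
        super().__init__()
        # modules needed for image feature extraction
        self.vision_model = model.vision_model
        self.mlp1 = model.mlp1                            # projector to the LLM hidden size
        self.downsample_ratio = config.downsample_ratio   # 0.5 for InternVL2_5-1B

    def init_config(self, llm_config):
        # fill the image-preprocessing parameters consumed by the MNN llm engine
        # (see section 3.5 for the mean/norm conversion)
        llm_config['image_size'] = 448

    def pixel_shuffle(self, x, scale_factor=0.5):
        # same logic as the official code (ps_version v2); see section 3.3 for
        # the change needed to keep shapes dynamic during ONNX export
        n, w, h, c = x.size()
        x = x.view(n, w, int(h * scale_factor), int(c / scale_factor))
        x = x.permute(0, 2, 1, 3).contiguous()
        x = x.view(n, int(h * scale_factor), int(w * scale_factor),
                   int(c / (scale_factor * scale_factor)))
        x = x.permute(0, 2, 1, 3).contiguous()
        return x

    def forward(self, pixel_values):
        # ViT features: drop the CLS token and fold the patches into an h x w grid
        vit_embeds = self.vision_model(pixel_values=pixel_values).last_hidden_state
        vit_embeds = vit_embeds[:, 1:, :]
        h = w = int(vit_embeds.shape[1] ** 0.5)
        vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], h, w, -1)
        vit_embeds = self.pixel_shuffle(vit_embeds, self.downsample_ratio)
        vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], -1, vit_embeds.shape[-1])
        vit_embeds = self.mlp1(vit_embeds)
        # MNN LLM expects [seq, batch, hidden] (see section 3.4)
        return vit_embeds.permute(1, 0, 2)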

3.3 Dynamic Shape Support

  • Add dynamic_axes when exporting to ONNX (see the sketch after this list).
  • Modify the model code: the pixel_shuffle step cannot compute its shapes dynamically:
        x = x.view(n, int(h * scale_factor), int(w * scale_factor),
                   int(c / (scale_factor * scale_factor)))

It needs to be changed to use Torch's Tensor .int() instead of the Python int() cast:

        x = x.view(n, (h * scale_factor).int(), (w * scale_factor).int(),
                   (c / (scale_factor * scale_factor)).int())
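
For the dynamic_axes point, here is a hedged sketch of the ONNX export call; the input/output names and the wrapper function are assumptions, not the exact llmexport code:

import torch

def export_visual_onnx(vision_module, onnx_path='visual.onnx'):
    dummy = torch.randn(1, 3, 448, 448)   # one 448x448 RGB tile
    torch.onnx.export(
        vision_module, dummy, onnx_path,
        input_names=['pixel_values'],
        output_names=['image_embeds'],
        # the number of tiles (batch) and the spatial size may vary at runtime
        dynamic_axes={'pixel_values': {0: 'batch', 2: 'height', 3: 'width'}})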

3.4 Output Matching

MNN LLM's embedding input expects the vision model to output a [seq, batch, hidden] tensor, while the original code outputs [batch, seq, hidden], so an extra transpose is needed:

        vit_embeds = vit_embeds.permute(1, 0, 2)

3.5 Image Preprocessing

Read the official code (https://modelscope.cn/models/OpenGVLab/InternVL2_5-1B) to obtain the image-preprocessing parameters (MEAN/STD).

The vision_process method of the llm engine already supports image preprocessing; it is enough to fill the mean / norm parameters into llm_config.json:

image = MNN::CV::resize(image, {mVisionHeight, mVisionWidth}, 0, 0,
                        MNN::CV::INTER_LINEAR, MNN::CV::COLOR_BGR2RGB,
                        mVisionMean, mVisionNorm);

The init_config method sets the image mean and std; note that a conversion is needed, because MNN normalizes 0-255 pixel values as (pixel - mean) * norm:

IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]
for i in range(3):
    # scale the mean to 0-255 pixels and fold the 1/255 scaling into the
    # reciprocal std, so that (pixel - mean) * norm reproduces (x - MEAN) / STD
    IMAGENET_MEAN[i] = IMAGENET_MEAN[i] * 255.0
    IMAGENET_STD[i] = 1.0 / IMAGENET_STD[i] / 255.0
self.llm_config['image_mean'] = IMAGENET_MEAN
self.llm_config['image_norm'] = IMAGENET_STD
self.llm_config['image_size_unit'] = 14  # ViT patch size
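
For reference, a quick check of the arithmetic above shows the values that end up in llm_config.json:

mean = [round(m * 255.0, 3) for m in [0.485, 0.456, 0.406]]
norm = [round(1.0 / (s * 255.0), 6) for s in [0.229, 0.224, 0.225]]
print(mean)   # [123.675, 116.28, 103.53]
print(norm)   # [0.017125, 0.017507, 0.017429]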

4. Export Results and Testing

Run the export command:

python3 llmexport.py --path ~/third/InternVL2_5-1B --export mnn

which prints the following:

✅ Done load pretrained model /Users/xtjiang/third/InternVL2_5-1B       [13.93 s]
✅ Done export visual to ./model/onnx/visual.onnx                       [11.32 s]
✅ Done convert onnx model to ./model/visual.mnn                        [ 8.18 s]
✅ Done export tokenizer to ./model/tokenizer.txt                       [ 0.18 s]
✅ Done export embedding to ./model/embeddings_bf16.bin                 [ 0.50 s]
✅ Done export onnx model to ./model/onnx/llm.onnx                      [ 4.97 s]
✅ Done convert onnx model to ./model/llm.mnn                           [ 0.88 s]
Load LayerNorm data: 100%|█████████████████████████████████████████████████████████| 2186/2186 [00:00<00:00, 360307.64it/s]
Quant weights: 100%|██████████████████████████████████████████████████████████████████| 2186/2186 [00:04<00:00, 542.69it/s]
✅ Done quant model weight to ./model/llm.mnn.weight                    [ 4.11 s]
✅ Done export config to ./model/llm_config.json                        [ 0.00 s]

The export produces the following files:

  • config.json
  • tokenizer.txt
  • llm_config.json
  • embeddings_bf16.bin
  • llm.mnn
  • llm.mnn.weight
  • visual.mnn
  • visual.mnn.weight

Run a test. Create a prompt file (pic.txt in the command below) containing the image URL, a target size, and the question:

https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg336,336介绍一下图片里的内容

Run the test command:

./llm_demo ../transformers/llm/export/model/config.json pic.txt

Result:

config path is ../transformers/llm/export/model/config.json
main, 222, cost time: 390.273010 ms
Prepare for tuning opt Begin
Prepare for tuning opt End
main, 226, cost time: 59.735001 ms
prompt file is pic.txt
File has been downloaded successfully.
图片里有一个人和一只狗。他们坐在沙滩上,背景是海浪和夕阳。狗看起来很享受,而人似乎在和狗玩耍或互动。
