CPM_LM_bert4keras

Load the CPM_LM model with bert4keras

Related Links

Model Download

Download the model adapted for bert4keras: https://pan.baidu.com/s/1QyUly1zHKuAxDwyKcNXueg (extraction code: xn7a)

If you have already downloaded the PyTorch version, you can also convert it yourself with convert.py.

Example Code

For the complete script, see basic_language_model_cpm_lm.py.

import jieba
from bert4keras.models import build_transformer_model
from bert4keras.tokenizers import SpTokenizer

# model paths
config_path = '/root/kg/bert/CPM_LM_2.6B_TF/config.json'
checkpoint_path = '/root/kg/bert/CPM_LM_2.6B_TF/model.ckpt'
spm_path = '/root/kg/bert/CPM_LM_2.6B_TF/chinese_vocab.model'


def pre_tokenize(text):
    """Pre-tokenization: CPM segments text with jieba first, encoding
    spaces as U+2582 and newlines as U+2583 before sentencepiece runs.
    """
    return [
        w.replace(' ', u'\u2582').replace('\n', u'\u2583')
        for w in jieba.cut(text, cut_all=False)
    ]


tokenizer = SpTokenizer(
    spm_path,
    token_start=None,
    token_end=None,
    pre_tokenize=pre_tokenize,
    token_translate={u'\u2583': '<cls>'}
)  # build the tokenizer

model = build_transformer_model(
    config_path=config_path, checkpoint_path=checkpoint_path, model='gpt2'
)  # build the model and load the weights
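
bert4keras ships an AutoRegressiveDecoder helper for proper sampling-based generation; as a minimal, self-contained illustration of how the tokenizer and model above fit together, here is a greedy-decoding sketch. greedy_generate and maxlen are illustrative names, not part of the repository, and the sketch assumes the GPT2 model takes a single token-ids input and outputs softmax probabilities:

import numpy as np

def greedy_generate(text, maxlen=64):
    """Continue `text` by always appending the most likely next token.
    A toy sketch; for real use, prefer sampling-based decoding."""
    token_ids, _ = tokenizer.encode(text)  # SpTokenizer.encode returns (token_ids, segment_ids)
    n = len(token_ids)
    while len(token_ids) < maxlen:
        probas = model.predict(np.array([token_ids]))[0]  # (seq_len, vocab_size)
        token_ids.append(int(probas[-1].argmax()))  # greedy: argmax at the last position
    out = tokenizer.decode(token_ids[n:])
    # undo the pre_tokenize placeholders for spaces and newlines
    return text + out.replace(u'\u2582', ' ').replace(u'\u2583', '\n')

print(greedy_generate(u'今天天气不错'))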

Config

For reference, config.json looks like this:

{
  "vocab_size": 30000,
  "hidden_size": 2560,
  "attention_probs_dropout_prob": 0.0,
  "hidden_dropout_prob": 0.0,
  "hidden_act": "gelu",
  "initializer_range": 0.02,
  "intermediate_size": 10240,
  "max_position_embeddings": 1024,
  "num_attention_heads": 32,
  "num_hidden_layers": 32
}
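
As a quick sanity check, these sizes are consistent with the "2.6B" in the checkpoint directory name. The back-of-the-envelope count below ignores biases and layer-norm weights (the formula is an assumption of mine, not something stated in the repository):

H, L, V, P, I = 2560, 32, 30000, 1024, 10240  # values from config.json
attention = 4 * H * H        # Q, K, V and output projection matrices
ffn = 2 * H * I              # the two feed-forward matrices (here I = 4H)
per_layer = attention + ffn  # = 12 * H**2 per transformer layer
total = L * per_layer + V * H + P * H  # layers + token and position embeddings
print(total)  # 2,596,003,840, i.e. roughly 2.6 billion parameters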

Environment Dependencies

At minimum, the example above requires bert4keras (running on a TensorFlow/Keras backend), jieba for pre-tokenization, and sentencepiece for SpTokenizer.

Contact

QQ group: 808623966; to join the WeChat group, add the bot's WeChat ID: spaces_ac_cn.