Reading the ChatGLM-6B Model Architecture Components Source Code

1. Introduction

This post walks through the source code of ChatGLM-6B's model architecture components.

Code link: https://huggingface.co/THUDM/chatglm-6b/blob/main/modeling_chatglm.py

2. Activation Function

@torch.jit.script
def gelu_impl(x):
    '''OpenAI's gelu implementation.'''
    return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * x *
                                       (1.0 + 0.044715 * x * x)))


def gelu(x):
    return gelu_impl(x)
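
For reference, this is the tanh approximation of GELU; the constant 0.7978845608028654 is $\sqrt{2/\pi}$:

$$\mathrm{GELU}(x) \approx 0.5\,x\left(1 + \tanh\!\left(\sqrt{2/\pi}\,\bigl(x + 0.044715\,x^{3}\bigr)\right)\right)$$

The code factors the inner term as $\sqrt{2/\pi}\,x\,(1 + 0.044715\,x^{2})$, which is the same expression.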

3. Positional Encoding

3.1 A Brief Overview of RoPE

ChatGLM-6B uses Rotary Position Embedding (RoPE) for its positional encoding (see the original derivation: RoPE). In short, the goal is to build an attention matrix that carries relative position information:

$$(\mathcal{R}_m \boldsymbol{q})^\top (\mathcal{R}_n \boldsymbol{k}) = \boldsymbol{q}^\top \mathcal{R}_m^\top \mathcal{R}_n \boldsymbol{k} = \boldsymbol{q}^\top \mathcal{R}_{n-m} \boldsymbol{k}$$

Here $\boldsymbol{q}$ and $\boldsymbol{k}$ are the query and key in the attention mechanism, $m$ and $n$ are two positions, and $\mathcal{R}_i$ is the rotation matrix applied at position $i$, of the form:

$$\mathcal{R}_m = \begin{pmatrix}
\cos m\theta_0 & -\sin m\theta_0 & 0 & 0 & \cdots \\
\sin m\theta_0 & \cos m\theta_0 & 0 & 0 & \cdots \\
0 & 0 & \cos m\theta_1 & -\sin m\theta_1 & \cdots \\
0 & 0 & \sin m\theta_1 & \cos m\theta_1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}$$

The original author notes that because $\mathcal{R}_m$ is very sparse, implementing it as a full matrix multiplication wastes compute, and recommends applying RoPE element-wise instead:

$$\mathcal{R}_m \boldsymbol{x} =
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ \vdots \end{pmatrix} \otimes
\begin{pmatrix} \cos m\theta_0 \\ \cos m\theta_0 \\ \cos m\theta_1 \\ \cos m\theta_1 \\ \vdots \end{pmatrix} +
\begin{pmatrix} -x_1 \\ x_0 \\ -x_3 \\ x_2 \\ \vdots \end{pmatrix} \otimes
\begin{pmatrix} \sin m\theta_0 \\ \sin m\theta_0 \\ \sin m\theta_1 \\ \sin m\theta_1 \\ \vdots \end{pmatrix}$$

3.2 RoPE Implementation in ChatGLM-6B

Let's read the code directly:

class RotaryEmbedding(torch.nn.Module):
    def __init__(self, dim, base=10000, precision=torch.half, learnable=False):
        super().__init__()
        inv_freq = 1. / (base ** (torch.arange(0, dim, 2).float() / dim))
        inv_freq = inv_freq.half()
        self.learnable = learnable
        if learnable:
            self.inv_freq = torch.nn.Parameter(inv_freq)
            self.max_seq_len_cached = None
        else:
            self.register_buffer('inv_freq', inv_freq)
            self.max_seq_len_cached = None
            self.cos_cached = None
            self.sin_cached = None
        self.precision = precision

    def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys,
                              error_msgs):
        pass

    def forward(self, x, seq_dim=1, seq_len=None):
        if seq_len is None:
            seq_len = x.shape[seq_dim]
        if self.max_seq_len_cached is None or (seq_len > self.max_seq_len_cached):
            self.max_seq_len_cached = None if self.learnable else seq_len
            # inv_freq (computed in __init__ from the embedding dimension and base) has already been cast to half precision;
            # build the position indices t on the same device/dtype. If precision is bfloat16, the embedding below is computed in fp32 first.
            t = torch.arange(seq_len, device=x.device, dtype=self.inv_freq.dtype)
            # Use einsum to take the outer product of t and inv_freq, giving the frequency matrix freqs.
            freqs = torch.einsum('i,j->ij', t, self.inv_freq)
            # Duplicate and concatenate freqs to build the rotary embedding matrix emb of shape [seq_len, dim].
            emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
            if self.precision == torch.bfloat16:
                emb = emb.float()

            # Take the cosine and sine of emb, adding a broadcast dimension for the batch axis.
            cos_cached = emb.cos()[:, None, :]
            sin_cached = emb.sin()[:, None, :]
            if self.precision == torch.bfloat16:
                cos_cached = cos_cached.bfloat16()
                sin_cached = sin_cached.bfloat16()
            if self.learnable:
                return cos_cached, sin_cached
            self.cos_cached, self.sin_cached = cos_cached, sin_cached
        # Slice the caches to the current sequence length
        return self.cos_cached[:seq_len, ...], self.sin_cached[:seq_len, ...]
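
The cos/sin caches returned here are consumed by rotate_half and apply_rotary_pos_emb_index, which modeling_chatglm.py also defines but which are not reproduced in this post. A sketch of that logic, following the element-wise formula from 3.1 (shapes match how SelfAttention.forward calls it below; treat this as an approximation and check the file for the authoritative version):

import torch
import torch.nn.functional as F

def rotate_half(x):
    # Split the last dimension in half and recombine as (-x2, x1),
    # realizing the sign-flipped pairing in the element-wise RoPE formula.
    x1, x2 = x[..., :x.shape[-1] // 2], x[..., x.shape[-1] // 2:]
    return torch.cat((-x2, x1), dim=x1.ndim - 1)

def apply_rotary_pos_emb_index(q, k, cos, sin, position_id):
    # q, k: [seq_len, batch, num_heads, head_dim]
    # cos, sin: [max_seq_len, 1, head_dim]; position_id: [seq_len, batch]
    # Gather the cos/sin rows for each token's position id, then rotate q and k.
    cos = F.embedding(position_id, cos.squeeze(1)).unsqueeze(2)
    sin = F.embedding(position_id, sin.squeeze(1)).unsqueeze(2)
    q = (q * cos) + (rotate_half(q) * sin)
    k = (k * cos) + (rotate_half(k) * sin)
    return q, k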

4. Attention Layer

4.1 2D Positional Encoding

At this layer, ChatGLM-6B uses the 2D positional encoding proposed in GLM; for the details see the original paper, GLM: General Language Model Pretraining with Autoregressive Blank Infilling. The scheme is illustrated in the figure below:

[Figure: GLM span corruption and 2D positional encoding, panels (a)–(c)]

The input sequence is $x_1, x_2, x_3, x_4, x_5, x_6$; the spans $[x_3]$ and $[x_5, x_6]$ are randomly masked, so the corrupted input becomes $x_1, x_2, [M], x_4, [M]$, as shown in panels (a) and (b) above. The corrupted text and the masked spans are concatenated to form the model input, and the model's target output is the masked spans, as shown in panel (c). Two kinds of positional encodings are used: the first assigns position information over the whole input and can express where a masked span sits in the original input; the second assigns position information to the tokens inside a masked span.
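
For this example, the two position sequences over the concatenated input look like the following (1-based, as in the paper's figure; the exact numbers are only an illustration of the scheme):

tokens:      x1  x2  [M]  x4  [M] | [S]  x5  x6 | [S]  x3
position 1:   1   2   3    4   5  |  5    5   5 |  3    3
position 2:   0   0   0    0   0  |  1    2   3 |  1    2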

4.2 Attention Mechanism

Compared with standard self-attention, ChatGLM-6B injects RoPE positional information into Q and K.

  • The standard self-attention computation, attention_fn
def attention_fn(
        self,
        query_layer,
        key_layer,
        value_layer,
        attention_mask,
        hidden_size_per_partition,
        layer_id,
        layer_past=None,
        scaling_attention_score=True,
        use_cache=False,
):

    # Fold in cached key/value states from previous steps
    if layer_past is not None:
        past_key, past_value = layer_past
        key_layer = torch.cat((past_key, key_layer), dim=0)
        value_layer = torch.cat((past_value, value_layer), dim=0)

    # seqlen, batch, num_attention_heads, hidden_size_per_attention_head
    seq_len, b, nh, hidden_size = key_layer.shape

    if use_cache:
        present = (key_layer, value_layer)
    else:
        present = None

    # Scale the query layer by 1 / (sqrt(head_dim) * per-layer coefficient) to keep the attention scores in a stable range.
    query_key_layer_scaling_coeff = float(layer_id + 1)
    if scaling_attention_score:
        query_layer = query_layer / (math.sqrt(hidden_size) * query_key_layer_scaling_coeff)

    # ===================================
    # Raw attention scores. [b, np, s, s]
    # ===================================

    # Attention score output shape: [batch_size, num_heads, seq_length, seq_length]
    output_size = (query_layer.size(1), query_layer.size(2), query_layer.size(0), key_layer.size(0))

    # Reshape: [seq_length, batch_size, num_heads, head_dim] -> [seq_length, batch_size * num_heads, head_dim]
    # [sq, b, np, hn] -> [sq, b * np, hn]
    query_layer = query_layer.view(output_size[2], output_size[0] * output_size[1], -1)
    # [sk, b, np, hn] -> [sk, b * np, hn]
    key_layer = key_layer.view(output_size[3], output_size[0] * output_size[1], -1)

    matmul_result = torch.empty(
        output_size[0] * output_size[1],
        output_size[2],
        output_size[3],
        dtype=query_layer.dtype,
        device=query_layer.device,
    )
    # Compute the raw attention scores; queries and keys are transposed/reshaped into the shapes required by the batched matmul.
    matmul_result = torch.baddbmm(
        matmul_result,
        query_layer.transpose(0, 1),  # [b * np, sq, hn]
        key_layer.transpose(0, 1).transpose(1, 2),  # [b * np, hn, sk]
        beta=0.0,
        alpha=1.0,
    )

    # Reshape to [batch_size, num_heads, seq_length, seq_length]
    attention_scores = matmul_result.view(*output_size)

    # If a scaled masked softmax (scale_mask_softmax) is configured, pass the scores through it to get normalized attention probabilities.
    # Otherwise, mask invalid positions by filling them with a large negative value (-10000.0) and apply a softmax.
    if self.scale_mask_softmax:
        self.scale_mask_softmax.scale = query_key_layer_scaling_coeff
        attention_probs = self.scale_mask_softmax(attention_scores, attention_mask.contiguous())
    else:
        # Mask the attention scores
        if not (attention_mask == 0).all():
            # if auto-regressive, skip
            attention_scores.masked_fill_(attention_mask, -10000.0)
        dtype = attention_scores.type()
        attention_scores = attention_scores.float()
        attention_scores = attention_scores * query_key_layer_scaling_coeff

        attention_probs = F.softmax(attention_scores, dim=-1)

        attention_probs = attention_probs.type(dtype)

    # =========================
    # Context layer. [sq, b, hp]
    # =========================

    # value_layer -> context layer.
    # [sk, b, np, hn] --> [b, np, sq, hn]

    # context layer shape: [b, np, sq, hn]
    output_size = (value_layer.size(1), value_layer.size(2), query_layer.size(0), value_layer.size(3))

    # change view [sk, b * np, hn]
    value_layer = value_layer.view(value_layer.size(0), output_size[0] * output_size[1], -1)

    # Reshape the attention probabilities for the batched matmul with the values
    # change view [b * np, sq, sk]
    attention_probs = attention_probs.view(output_size[0] * output_size[1], output_size[2], -1)
    # matmul: [b * np, sq, hn]
    context_layer = torch.bmm(attention_probs, value_layer.transpose(0, 1))
    # change view [b, np, sq, hn]
    context_layer = context_layer.view(*output_size)
    # [b, np, sq, hn] --> [sq, b, np, hn]
    context_layer = context_layer.permute(2, 0, 1, 3).contiguous()
    # [sq, b, np, hn] --> [sq, b, hp]
    new_context_layer_shape = context_layer.size()[:-2] + (hidden_size_per_partition,)
    # Reshape the context layer back to [sq, b, hidden_size_per_partition]
    context_layer = context_layer.view(*new_context_layer_shape)
    outputs = (context_layer, present, attention_probs)

    return outputs
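
The reshaping above is the trickiest part of attention_fn. Here is a minimal shape trace of the same pattern with assumed toy sizes, using plain bmm in place of baddbmm and omitting scaling, masking, and the KV cache:

import torch

sq, b, np_, hn = 4, 2, 3, 5                                    # seq_len, batch, heads, head_dim
q = torch.randn(sq, b, np_, hn)
k = torch.randn(sq, b, np_, hn)
v = torch.randn(sq, b, np_, hn)

q2 = q.view(sq, b * np_, hn).transpose(0, 1)                   # [b*np, sq, hn]
k2 = k.view(sq, b * np_, hn).transpose(0, 1).transpose(1, 2)   # [b*np, hn, sk]
scores = torch.bmm(q2, k2)                                     # [b*np, sq, sk]
probs = torch.softmax(scores, dim=-1)

v2 = v.view(sq, b * np_, hn).transpose(0, 1)                   # [b*np, sk, hn]
ctx = torch.bmm(probs, v2)                                     # [b*np, sq, hn]
ctx = ctx.view(b, np_, sq, hn).permute(2, 0, 1, 3)             # [sq, b, np, hn]
print(ctx.shape)  # torch.Size([4, 2, 3, 5])
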
  • SelfAttention applies RoPE to inject positional information into Q and K, so the model can capture position information in the sequence.

    class SelfAttention(torch.nn.Module):
        def __init__(self, hidden_size, num_attention_heads,
                     layer_id, hidden_size_per_attention_head=None, bias=True,
                     params_dtype=torch.float, position_encoding_2d=True):

            super(SelfAttention, self).__init__()

            self.layer_id = layer_id
            self.hidden_size = hidden_size
            self.hidden_size_per_partition = hidden_size
            self.num_attention_heads = num_attention_heads
            self.num_attention_heads_per_partition = num_attention_heads
            self.position_encoding_2d = position_encoding_2d
            self.rotary_emb = RotaryEmbedding(
                self.hidden_size // (self.num_attention_heads * 2)
                if position_encoding_2d
                else self.hidden_size // self.num_attention_heads,
                base=10000,
                precision=torch.half,
                learnable=False,
            )

            self.scale_mask_softmax = None

            if hidden_size_per_attention_head is None:
                self.hidden_size_per_attention_head = hidden_size // num_attention_heads
            else:
                self.hidden_size_per_attention_head = hidden_size_per_attention_head

            self.inner_hidden_size = num_attention_heads * self.hidden_size_per_attention_head

            # Strided linear layer.
            self.query_key_value = skip_init(
                torch.nn.Linear,
                hidden_size,
                3 * self.inner_hidden_size,
                bias=bias,
                dtype=params_dtype,
            )

            self.dense = skip_init(
                torch.nn.Linear,
                self.inner_hidden_size,
                hidden_size,
                bias=bias,
                dtype=params_dtype,
            )

        @staticmethod
        def attention_mask_func(attention_scores, attention_mask):
            attention_scores.masked_fill_(attention_mask, -10000.0)
            return attention_scores

        def split_tensor_along_last_dim(self, tensor, num_partitions,
                                        contiguous_split_chunks=False):

            '''Split a tensor along its last dimension.
            Arguments:
                tensor: input tensor.
                num_partitions: number of partitions to split the tensor
                contiguous_split_chunks: If True, make each chunk contiguous
                                        in memory.
            '''

            # Get the size and dimension.
            last_dim = tensor.dim() - 1
            last_dim_size = tensor.size()[last_dim] // num_partitions
            # Split.
            tensor_list = torch.split(tensor, last_dim_size, dim=last_dim)
            # Note: torch.split does not create contiguous tensors by default.
            if contiguous_split_chunks:
                return tuple(chunk.contiguous() for chunk in tensor_list)

            return tensor_list

        def forward(
                self,
                hidden_states: torch.Tensor,
                position_ids,
                attention_mask: torch.Tensor,
                layer_id,
                layer_past: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
                use_cache: bool = False,
                output_attentions: bool = False,
        ):

            '''
            hidden_states: [seq_len, batch, hidden_size]
            attention_mask: [(1, 1), seq_len, seq_len]
            '''


            # [seq_len, batch, 3 * hidden_size]
            mixed_raw_layer = self.query_key_value(hidden_states)

            # [seq_len, batch, 3 * hidden_size] --> [seq_len, batch, num_attention_heads, 3 * hidden_size_per_attention_head]
            new_tensor_shape = mixed_raw_layer.size()[:-1] + (
                self.num_attention_heads_per_partition,
                3 * self.hidden_size_per_attention_head,
            )
            mixed_raw_layer = mixed_raw_layer.view(*new_tensor_shape)

            # [seq_len, batch, num_attention_heads, hidden_size_per_attention_head]
            (query_layer, key_layer, value_layer) = self.split_tensor_along_last_dim(mixed_raw_layer, 3)

            # Depending on whether 2D positional encoding is used, apply rotary embeddings to the queries and keys, indexing the cos/sin tables by the position ids.
            if self.position_encoding_2d:
                q1, q2 = query_layer.chunk(2, dim=(query_layer.ndim - 1))
                k1, k2 = key_layer.chunk(2, dim=(key_layer.ndim - 1))
                cos, sin = self.rotary_emb(q1, seq_len=position_ids.max() + 1)
                position_ids, block_position_ids = position_ids[:, 0, :].transpose(0, 1).contiguous(), \
                                                   position_ids[:, 1, :].transpose(0, 1).contiguous()
                q1, k1 = apply_rotary_pos_emb_index(q1, k1, cos, sin, position_ids)
                q2, k2 = apply_rotary_pos_emb_index(q2, k2, cos, sin, block_position_ids)
                # Concatenate the query/key halves carrying the two kinds of position information, so Q and K contain both encodings
                query_layer = torch.concat([q1, q2], dim=(q1.ndim - 1))
                key_layer = torch.concat([k1, k2], dim=(k1.ndim - 1))
            else:
                # RoPE
                position_ids = position_ids.transpose(0, 1)
                cos, sin = self.rotary_emb(value_layer, seq_len=position_ids.max() + 1)
                # [seq_len, batch, num_attention_heads, hidden_size_per_attention_head]
                query_layer, key_layer = apply_rotary_pos_emb_index(query_layer, key_layer, cos, sin, position_ids)

            # Call attention_fn to compute the attention scores and the context layer
            # [seq_len, batch, hidden_size]
            context_layer, present, attention_probs = attention_fn(
                self=self,
                query_layer=query_layer,
                key_layer=key_layer,
                value_layer=value_layer,
                attention_mask=attention_mask,
                hidden_size_per_partition=self.hidden_size_per_partition,
                layer_id=layer_id,
                layer_past=layer_past,
                use_cache=use_cache
            )

            output = self.dense(context_layer)

            outputs = (output, present)

            if output_attentions:
                outputs += (attention_probs,)

            return outputs  # output, present, attention_probs

5. GLU Layer

From the code, the GLU used here can be written as

$$\mathrm{GLU}(X) = \mathrm{GELU}(X W_{1})\, W_{2}$$

(biases omitted), where $W_1$ projects from hidden_size to inner_hidden_size (4h by default) and $W_2$ projects back to hidden_size:

    class GEGLU(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.activation_fn = F.gelu

        def forward(self, x):
            # dim=-1 breaks in jit for pt<1.10
            x1, x2 = x.chunk(2, dim=(x.ndim - 1))
            return x1 * self.activation_fn(x2)


    class GLU(torch.nn.Module):
        def __init__(self, hidden_size, inner_hidden_size=None,
                     layer_id=None, bias=True, activation_func=gelu, params_dtype=torch.float):

            super(GLU, self).__init__()
            self.layer_id = layer_id
            self.activation_func = activation_func

            # Project to 4h.
            self.hidden_size = hidden_size
            if inner_hidden_size is None:
                inner_hidden_size = 4 * hidden_size
            self.inner_hidden_size = inner_hidden_size
            self.dense_h_to_4h = skip_init(
                torch.nn.Linear,
                self.hidden_size,
                self.inner_hidden_size,
                bias=bias,
                dtype=params_dtype,
            )
            # Project back to h.
            self.dense_4h_to_h = skip_init(
                torch.nn.Linear,
                self.inner_hidden_size,
                self.hidden_size,
                bias=bias,
                dtype=params_dtype,
            )

        def forward(self, hidden_states):
            '''
            hidden_states: [seq_len, batch, hidden_size]
            '''


            # [seq_len, batch, inner_hidden_size]
            intermediate_parallel = self.dense_h_to_4h(hidden_states)

            intermediate_parallel = self.activation_func(intermediate_parallel)

            output = self.dense_4h_to_h(intermediate_parallel)

            return output

6. GLMBlock

From the code, a GLMBlock is built from Layer Norm, Self-Attention, Layer Norm, and GLU modules.


class GLMBlock(torch.nn.Module):
    def __init__(
            self,
            hidden_size,
            num_attention_heads,
            layernorm_epsilon,
            layer_id,
            inner_hidden_size=None,
            hidden_size_per_attention_head=None,
            layernorm=LayerNorm,
            use_bias=True,
            params_dtype=torch.float,
            num_layers=28,
            position_encoding_2d=True,
            empty_init=True  # accepted here because get_layer in ChatGLMModel passes it; unused in this excerpt
    ):

        super(GLMBlock, self).__init__()
        # Set output layer initialization if not provided.

        self.layer_id = layer_id

        # LayerNorm layer
        self.input_layernorm = layernorm(hidden_size, eps=layernorm_epsilon)
        # Whether to use 2D positional encoding
        self.position_encoding_2d = position_encoding_2d

        # Self-attention layer
        self.attention = SelfAttention(
            hidden_size,
            num_attention_heads,
            layer_id,
            hidden_size_per_attention_head=hidden_size_per_attention_head,
            bias=use_bias,
            params_dtype=params_dtype,
            position_encoding_2d=self.position_encoding_2d
        )

        # Post-attention LayerNorm layer
        self.post_attention_layernorm = layernorm(hidden_size, eps=layernorm_epsilon)

        self.num_layers = num_layers

        # GLU (MLP) layer
        self.mlp = GLU(
            hidden_size,
            inner_hidden_size=inner_hidden_size,
            bias=use_bias,
            layer_id=layer_id,
            params_dtype=params_dtype,
        )

    def forward(
            self,
            hidden_states: torch.Tensor,
            position_ids,
            attention_mask: torch.Tensor,
            layer_id,
            layer_past: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
            use_cache: bool = False,
            output_attentions: bool = False,
    ):

        '''
        hidden_states: [seq_len, batch, hidden_size]
        attention_mask: [(1, 1), seq_len, seq_len]
        '''


        # Layer norm over the block input
        # [seq_len, batch, hidden_size]
        attention_input = self.input_layernorm(hidden_states)

        # Self-attention
        attention_outputs = self.attention(
            attention_input,
            position_ids,
            attention_mask=attention_mask,
            layer_id=layer_id,
            layer_past=layer_past,
            use_cache=use_cache,
            output_attentions=output_attentions
        )

        attention_output = attention_outputs[0]

        outputs = attention_outputs[1:]

        # Residual connection.
        alpha = (2 * self.num_layers) ** 0.5
        # Attention residual connection (the layernormed input is scaled by alpha)
        hidden_states = attention_input * alpha + attention_output
        # Layer norm over the output of the attention residual
        mlp_input = self.post_attention_layernorm(hidden_states)

        # Non-linear transform of the normalized output through the GLU layer
        mlp_output = self.mlp(mlp_input)

        # GLU residual connection
        output = mlp_input * alpha + mlp_output

        if use_cache:
            outputs = (output,) + outputs
        else:
            outputs = (output,) + outputs[1:]

        return outputs  # hidden_states, present, attentions
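
Putting the pieces together: with $\alpha = \sqrt{2 \cdot \text{num\_layers}}$, one pass through a GLMBlock computes

$$x_a = \mathrm{LayerNorm}(x), \qquad h = \alpha\, x_a + \mathrm{SelfAttention}(x_a),$$
$$x_m = \mathrm{LayerNorm}(h), \qquad y = \alpha\, x_m + \mathrm{GLU}(x_m).$$

Note that the residual branch is the post-layernorm activation scaled by $\alpha$, not the raw block input.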

7. ChatGLMPreTrainedModel

Here we mainly look at the attention mask and position_ids logic.

7.1 ChatGLM-6B's Mask

ChatGLM-6B uses a prefix-LM style mask: bidirectional attention over the input prefix, and a causal mask over the part that is subsequently generated.


def get_masks(self, input_ids, device):
    batch_size, seq_length = input_ids.shape
    # context_lengths records the true prefix length of each sample in the batch (the index of the bos token)
    context_lengths = [seq.tolist().index(self.config.bos_token_id) for seq in input_ids]
    # Build the causal mask: lower triangle and diagonal are 1, upper triangle is 0
    attention_mask = torch.ones((batch_size, seq_length, seq_length), device=device)
    attention_mask.tril_()
    # Make attention over the prefix part bidirectional
    for i, context_length in enumerate(context_lengths):
        attention_mask[i, :, :context_length] = 1
    attention_mask.unsqueeze_(1)
    attention_mask = (attention_mask < 0.5).bool()
        
    return attention_mask
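
As a quick sanity check, this is what the mask looks like for an assumed toy case (prefix length 3, total length 5); entries that are True are masked out:

import torch

seq_length, context_length = 5, 3
mask = torch.ones(seq_length, seq_length)
mask.tril_()
mask[:, :context_length] = 1
mask = (mask < 0.5)
print(mask.int())
# tensor([[0, 0, 0, 1, 1],
#         [0, 0, 0, 1, 1],
#         [0, 0, 0, 1, 1],
#         [0, 0, 0, 0, 1],
#         [0, 0, 0, 0, 0]])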

7.2 ChatGLM-6B's Position_ids

def get_position_ids(self, input_ids, mask_positions, device, use_gmasks=None):
    '''
    input_ids: [batch_size, seq_length]
    mask_positions: [batch_size]; the GLM family uses [MASK] / [gMASK] tokens, and mask_positions gives the position of that token in each sample
    '''

    batch_size, seq_length = input_ids.shape
    if use_gmasks is None:
        use_gmasks = [False] * batch_size
    # context_lengths: the length of each sample in the batch before padding (the index of the bos token)
    context_lengths = [seq.tolist().index(self.config.bos_token_id) for seq in input_ids]
    # 2D positional encoding
    if self.position_encoding_2d:
        # [0,1,2,...,seq_length-1]
        position_ids = torch.arange(seq_length, dtype=torch.long, device=device).unsqueeze(0).repeat(batch_size, 1)
        # All positions after the original input are set to the position id of the [MASK]/[gMASK] token
        for i, context_length in enumerate(context_lengths):
            position_ids[i, context_length:] = mask_positions[i]
        # Block position ids: the original-input part is all 0, and the positions to be generated get sequential ids
        # e.g. [0,0,0,0,1,2,3,4,5]
        block_position_ids = [torch.cat((
            torch.zeros(context_length, dtype=torch.long, device=device),
            torch.arange(seq_length - context_length, dtype=torch.long, device=device) + 1
        )) for context_length in context_lengths]
        block_position_ids = torch.stack(block_position_ids, dim=0)
        # Stack position_ids and block_position_ids together so they can be passed on as one tensor;
        # the attention layer later splits this position_ids tensor back into the two parts
        position_ids = torch.stack((position_ids, block_position_ids), dim=1)
    else:
        position_ids = torch.arange(seq_length, dtype=torch.long, device=device).unsqueeze(0).repeat(batch_size, 1)
        for i, context_length in enumerate(context_lengths):
            if not use_gmasks[i]:
                position_ids[i, context_length:] = mask_positions[i]

    return position_ids
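
A toy illustration with assumed values (not taken from the source):

# Assumed sample: seq_length = 7, the [gMASK] token at index 3 (mask_position = 3),
# and the prefix ending at index 4 where <bos> sits (context_length = 4).
# For this sample, get_position_ids produces:
#   position_ids       = [0, 1, 2, 3, 3, 3, 3]   # frozen at the mask position after the prefix
#   block_position_ids = [0, 0, 0, 0, 1, 2, 3]   # 0 over the prefix, counting up over the generated part
# The two are stacked into a tensor of shape [batch_size, 2, seq_length].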

8. ChatGLMModel

This part wires all of the components above together; let's look at the source directly:

class ChatGLMModel(ChatGLMPreTrainedModel):
    def __init__(self, config: ChatGLMConfig, empty_init=True):
        super().__init__(config)
        if empty_init:
            init_method = skip_init
        else:
            init_method = default_init
        # recording parameters
        self.max_sequence_length = config.max_sequence_length
        self.hidden_size = config.hidden_size
        self.params_dtype = torch.half
        self.num_attention_heads = config.num_attention_heads
        self.vocab_size = config.vocab_size
        self.num_layers = config.num_layers
        self.layernorm_epsilon = config.layernorm_epsilon
        self.inner_hidden_size = config.inner_hidden_size
        self.hidden_size_per_attention_head = self.hidden_size // self.num_attention_heads
        self.position_encoding_2d = config.position_encoding_2d
        self.pre_seq_len = config.pre_seq_len
        self.prefix_projection = config.prefix_projection

        self.word_embeddings = init_method(
            torch.nn.Embedding,
            num_embeddings=self.vocab_size, embedding_dim=self.hidden_size,
            dtype=self.params_dtype
        )
        self.gradient_checkpointing = False

        def get_layer(layer_id):
            return GLMBlock(
                self.hidden_size,
                self.num_attention_heads,
                self.layernorm_epsilon,
                layer_id,
                inner_hidden_size=self.inner_hidden_size,
                hidden_size_per_attention_head=self.hidden_size_per_attention_head,
                layernorm=LayerNorm,
                use_bias=True,
                params_dtype=self.params_dtype,
                position_encoding_2d=self.position_encoding_2d,
                empty_init=empty_init
            )

        self.layers = torch.nn.ModuleList(
            [get_layer(layer_id) for layer_id in range(self.num_layers)]
        )

        # Final layer norm before output.
        self.final_layernorm = LayerNorm(self.hidden_size, eps=self.layernorm_epsilon)

        '''
        pre_seq_len is the length of the prompt (prefix) part; it is only encoded,
        with no back-propagation through the frozen base parameters.
        '''

        if self.pre_seq_len is not None:
            for param in self.parameters():
                param.requires_grad = False
            self.prefix_tokens = torch.arange(self.pre_seq_len).long()
            self.prefix_encoder = PrefixEncoder(config)
            self.dropout = torch.nn.Dropout(0.1)


    def get_prompt(self, batch_size, device, dtype=torch.half):
        '''
        Encode the prefix prompt into per-layer past key/values.
        '''

        prefix_tokens = self.prefix_tokens.unsqueeze(0).expand(batch_size, -1).to(device)
        past_key_values = self.prefix_encoder(prefix_tokens).type(dtype)
        past_key_values = past_key_values.view(
            batch_size,
            self.pre_seq_len,
            self.num_layers * 2,
            self.num_attention_heads,
            self.hidden_size // self.num_attention_heads
        )
        # seq_len, b, nh, hidden_size
        past_key_values = self.dropout(past_key_values)
        past_key_values = past_key_values.permute([2, 1, 0, 3, 4]).split(2)
        # past_key_values = [(v[0], v[1]) for v in past_key_values]
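        # Note: after the view, permute, and split, past_key_values is a tuple of num_layers
        # tensors, each of shape [2, pre_seq_len, batch_size, num_attention_heads, head_dim],
        # i.e. the prefix key and value for one layer; attention_fn later concatenates them
        # in front of that layer's keys and values.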
        return past_key_values


 def forward(
            self,
            input_ids: Optional[torch.LongTensor] = None,
            position_ids: Optional[torch.LongTensor] = None,
            attention_mask: Optional[torch.Tensor] = None,
            past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
            inputs_embeds: Optional[torch.LongTensor] = None,
            use_cache: Optional[bool] = None,
            output_attentions: Optional[bool] = None,
            output_hidden_states: Optional[bool] = None,
            return_dict: Optional[bool] = None,
    ) -> Union[Tuple[torch.Tensor, ...], BaseModelOutputWithPast]:

        .....
        '''
        The past_key_values mechanism is important: it keeps the model from recomputing, at each
        generation step, the context values already computed in the previous iteration, which greatly
        improves efficiency in text generation. Note that on the first iteration there is no
        past_key_values returned by a previous step, so it is None at that point.
        Each element of past_key_values has dims:
        num_layers * seq_len * batch_size * nh * hidden_size_per_head
        '''

        if past_key_values is None:
            if self.pre_seq_len is not None:
                past_key_values = self.get_prompt(batch_size=input_ids.shape[0], device=input_ids.device,
                                                  dtype=inputs_embeds.dtype)
            else:
                past_key_values = tuple([None] * len(self.layers))

            if attention_mask is None:
                attention_mask = self.get_masks(
                    input_ids,
                    device=input_ids.device
                )


            if position_ids is None:
                '''
                If the sequence contains only [MASK] and no [gMASK], mask_positions is the position of
                the first [MASK]; if there is a [gMASK], mask_positions is the position of the first [gMASK].
                e.g.
                gMASK = 130001
                MASK = 130000
                seqs = [[11, 22, MASK, 33, MASK]]
                --> mask_positions: [2], use_gmasks: [False]

                gMASK = 130001
                MASK = 130000
                seqs = [[11, 22, MASK, 33, MASK, gMASK, 55, 66, gMASK, 77]]
                --> mask_positions: [5], use_gmasks: [True]

                The position ids are computed from the mask positions by get_position_ids
                (a method of the parent class ChatGLMPreTrainedModel).
                With 2D position encoding, position_ids has shape batch_size * 2 * seq_length;
                the second dimension holds position_id and block_position_id.
                '''

                MASK, gMASK = self.config.mask_token_id, self.config.gmask_token_id
                seqs = input_ids.tolist()

                mask_positions, use_gmasks = [], []
                for seq in seqs:
                    mask_token = gMASK if gMASK in seq else MASK
                    use_gmask = mask_token == gMASK
                    mask_positions.append(seq.index(mask_token))
                    use_gmasks.append(use_gmask)

                position_ids = self.get_position_ids(
                    input_ids,
                    mask_positions=mask_positions,
                    device=input_ids.device,
                    use_gmasks=use_gmasks
                )

        if self.pre_seq_len is not None and attention_mask is not None:
            prefix_attention_mask = torch.ones(batch_size, 1, input_ids.size(-1), self.pre_seq_len).to(
                attention_mask.device)
            prefix_attention_mask = (prefix_attention_mask < 0.5).bool()
            attention_mask = torch.cat((prefix_attention_mask, attention_mask), dim=3)

        '''
        The input embeddings are transposed here:
        batch_size * seq_len * hidden_size -> seq_len * batch_size * hidden_size
        '''

        # [seq_len, batch, hidden_size]
        hidden_states = inputs_embeds.transpose(0, 1)

        presents = () if use_cache else None
        all_self_attentions = () if output_attentions else None
        all_hidden_states = () if output_hidden_states else None

        if attention_mask is None:
            attention_mask = torch.zeros(1, 1, device=input_ids.device).bool()
        else:
            attention_mask = attention_mask.to(hidden_states.device)

            
        for i, layer in enumerate(self.layers):

            if output_hidden_states:
                all_hidden_states = all_hidden_states + (hidden_states,)
            layer_past = past_key_values[i]

            if self.gradient_checkpointing and self.training:
                layer_ret = torch.utils.checkpoint.checkpoint(
                    layer,
                    hidden_states,
                    position_ids,
                    attention_mask,
                    torch.tensor(i),
                    layer_past,
                    use_cache,
                    output_attentions
                )
            else:
                layer_ret = layer(
                    hidden_states,
                    position_ids=position_ids,
                    attention_mask=attention_mask,
                    layer_id=torch.tensor(i),
                    layer_past=layer_past,
                    use_cache=use_cache,
                    output_attentions=output_attentions
                )

            hidden_states = layer_ret[0]

            if use_cache:
                presents = presents + (layer_ret[1],)

            if output_attentions:
                all_self_attentions = all_self_attentions + (layer_ret[2 if use_cache else 1],)

        # Final layer norm.
        hidden_states = self.final_layernorm(hidden_states)

        if output_hidden_states:
            all_hidden_states = all_hidden_states + (hidden_states,)

        if not return_dict:
            return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None)

        # After stacking num_layers GLM blocks and applying the final layernorm, return the outputs.

        return BaseModelOutputWithPast(
            last_hidden_state=hidden_states,
            past_key_values=presents,
            hidden_states=all_hidden_states,
            attentions=all_self_attentions,
        )
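
To see all of these components wired together end to end, the standard usage from the model card looks roughly like this (a sketch; the exact API depends on your transformers version and the remote model code loaded via trust_remote_code):

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)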

If anything here is not explained in enough detail, I'd welcome further discussion and learning together.

References

1. GLM: General Language Model Pretraining with Autoregressive Blank Infilling
2. modeling_chatglm.py · THUDM/chatglm-6b at main (huggingface.co)
3. Transformer升级之路:2、博采众长的旋转式位置编码 - 科学空间|Scientific Spaces (kexue.fm)