--------------------- This article is incomplete and is still being updated as my understanding deepens ---------------------
I may not yet understand the deeper material well enough at my current level, so for now I'll describe things below in fairly plain, accessible language.
As the name suggests, the virtual DOM does not really exist as such; it is an abstraction: a JS object whose structure mirrors that of the real DOM.
When data changes, we no longer manipulate the real DOM directly to update the page. Instead we update the virtual DOM, compare the new virtual DOM against the old one to find out what actually needs to change, and then apply only those changes back to the real DOM to update the view.
This avoids a lot of unnecessary update work and greatly improves the browser's rendering efficiency.
In short, manipulating the DOM directly is very costly in terms of performance.
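To make this concrete, here is a minimal sketch of what such a JS object might look like. The SimpleVNode interface and its field names (type, props, children) are illustrative assumptions made for this article, not Vue's actual VNode shape.

// A hypothetical, simplified virtual node (illustrative only, not Vue's real VNode type)
interface SimpleVNode {
  type: string                           // tag name, e.g. 'div'
  props: Record<string, string> | null   // attributes such as id or class
  children: SimpleVNode[] | string       // child nodes, or text content
}

// Roughly corresponds to: <div id="app"><p>hello</p></div>
const vnode: SimpleVNode = {
  type: 'div',
  props: { id: 'app' },
  children: [{ type: 'p', props: null, children: 'hello' }],
}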
So how do we find the differences between the old virtual DOM and the new one? For that we need to bring in an algorithm: the diff algorithm.
The DOM described above is really a tree structure.
The diff algorithm breaks this tree down level by level and only compares elements that sit on the same level.
If the old and new nodes are the two div boxes sketched below, then once the trees are broken down by level, the two div tags belong to one level and the p and h1 tags belong to the next level, and nodes are only compared within their own level.
(What you run into in real development can be far more complex than this; it's just a simple example for demonstration.)
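Since the original figures are not reproduced here, the two trees from that example can be sketched with the same illustrative SimpleVNode shape as above. Same-level comparison means the two divs are compared with each other and p is compared with h1, so only the child node needs to change.

// Old tree: <div><p>text</p></div>  ->  new tree: <div><h1>text</h1></div>
// The diff compares div with div (level 1) and p with h1 (level 2);
// it never compares a node against one on a different level.
const oldTree: SimpleVNode = {
  type: 'div',
  props: null,
  children: [{ type: 'p', props: null, children: 'text' }],
}
const newTree: SimpleVNode = {
  type: 'div',
  props: null,
  children: [{ type: 'h1', props: null, children: 'text' }],
}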
OK, now for the core part.
A virtual DOM tree is generated to mirror the real DOM. A node on this virtual tree, before anything changes, is called the oldVnode (I take the name to come from "old virtual node"). When the data on a node changes, a new node is generated, called the newVnode. So how are the old and new nodes compared?
In practice a method named patch is called to handle this, and the process falls into the following two cases:
Without a key, you have no way of telling whether a node in the new list is a brand-new node or an old node that changed, because there is no identifier (key) with which to trace it back to its "previous self".
This case is handled in three steps:
Step 1: rendering (patching the part both lists share)
Step 2: removal
Step 3: addition
Sounds simple, right? It really is quite concise; let me walk through the process in a bit more detail.
c1 and c2 are the old and new arrays of child nodes. We first find which of c1 and c2 is shorter; that shorter length is the common length. We then loop over this common length (i.e. the minimum of c1's and c2's lengths) and call the patch method on each pair of vnodes to find the differences between them.
① If the old array is longer than the new array, it means nodes were deleted, so the unmountChildren method is called (I won't unpack that method here, it quickly turns into nesting dolls, one method calling another); it removes the nodes that are no longer needed from the old list.
② If the new array is longer than the old array, it means nodes were added, so the mountChildren method is called to mount the new vnodes into the container; the new nodes are inserted starting at the common length and continuing to the end of c2. A simplified sketch of the whole strategy follows below.
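Here is a framework-free sketch of that strategy, reusing the SimpleVNode shape from earlier; patchNode, mountNode and unmountNode are hypothetical placeholders standing in for what Vue's patch, mountChildren and unmountChildren actually do.

// Hypothetical placeholders: in Vue these roles are played by patch / mountChildren / unmountChildren
declare function patchNode(oldVNode: SimpleVNode, newVNode: SimpleVNode): void
declare function mountNode(vnode: SimpleVNode): void
declare function unmountNode(vnode: SimpleVNode): void

function patchUnkeyedSketch(oldChildren: SimpleVNode[], newChildren: SimpleVNode[]): void {
  const commonLength = Math.min(oldChildren.length, newChildren.length)
  // Step 1: patch the positions both lists share, index by index
  for (let i = 0; i < commonLength; i++) {
    patchNode(oldChildren[i], newChildren[i])
  }
  if (oldChildren.length > newChildren.length) {
    // Step 2: the old list is longer, so the leftover old nodes are removed
    oldChildren.slice(commonLength).forEach(unmountNode)
  } else {
    // Step 3: the new list is longer, so the extra new nodes are mounted
    newChildren.slice(commonLength).forEach(mountNode)
  }
}

For example, with old children [a, b, c] and new children [a, b, d, e], the common length is 3, so a/b/c are patched against a/b/d and e is mounted from index 3 onward; with new children [a, b] the common length would be 2 and c would be unmounted.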
Let me leave a breakpoint here; I'll continue editing this part when I have some free time.
Below is the source code; read it as you see fit.
const patchUnkeyedChildren = (
c1: VNode[],
c2: VNodeArrayChildren,
container: RendererElement,
anchor: RendererNode | null,
parentComponent: ComponentInternalInstance | null,
parentSuspense: SuspenseBoundary | null,
namespace: ElementNamespace,
slotScopeIds: string[] | null,
optimized: boolean,
) => {
c1 = c1 || EMPTY_ARR
c2 = c2 || EMPTY_ARR
const oldLength = c1.length
const newLength = c2.length
const commonLength = Math.min(oldLength, newLength)
let i
for (i = 0; i < commonLength; i++) {
const nextChild = (c2[i] = optimized
? cloneIfMounted(c2[i] as VNode)
: normalizeVNode(c2[i]))
patch(
c1[i],
nextChild,
container,
null,
parentComponent,
parentSuspense,
namespace,
slotScopeIds,
optimized,
)
}
if (oldLength > newLength) {
// remove old
unmountChildren(
c1,
parentComponent,
parentSuspense,
true,
false,
commonLength,
)
} else {
// mount new
mountChildren(
c2,
container,
anchor,
parentComponent,
parentSuspense,
namespace,
slotScopeIds,
optimized,
commonLength,
)
}
}
Having a key means we can use the key each node carries to find its "previous self": based on a new node's key we can tell whether it is a brand-new node or an old node that changed.
We can break this case into five steps:
Step 1: sync from the start (a forward pass), until the loop breaks
Step 2: sync from the end (a backward pass), until the loop breaks
The first two steps handle the parts that are still in order. Once they are done we know where the changes are: if there turns out to be an extra new node we go to step 3; if a node was removed we go to step 4; and there is a trickier case, an out-of-order (shuffled) middle, which goes to step 5.
Step 3: mount the new nodes
Step 4: unmount the removed nodes
Step 5: the out-of-order case, which itself breaks down into three small steps
① Build a map from each new node's key to its index (keyToNewIndexMap in the source). I won't go through exactly how it is built, that would take a while; the point of this mapping is that it lets us work out where every node should end up, so we can go from a shuffled state back to an ordered one.
② Using the map from step ①, we get, for each new position, the index that node occupied in the old list. Any old node that no longer appears in the new list is removed; and if the recorded old positions cross each other (meaning nodes have moved), we go on to step ③.
③ At this point we compute the longest increasing subsequence of those old positions (this is what getSequence does; I won't go into how it is computed here, a sketch is given after the source code below). While iterating, any node that is not part of that subsequence has to be moved; nodes that are in it can stay where they are.
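To see these sub-steps in action, take the example from the comments in the source below: old children a b c d e f g, new children a b e d c h f g. After the forward and backward passes only the middles differ: old [c, d, e] vs new [e, d, c, h]. Here is a small, framework-free sketch of sub-steps ① and ② operating on plain keys (a hypothetical helper, not Vue's code), just to show what the mapping ends up containing.

// ① map each new key to its index; ② record (old index + 1) per new position, 0 = brand-new node
function diffKeyedMiddleSketch(oldKeys: string[], newKeys: string[]): number[] {
  const keyToNewIndex = new Map<string, number>()
  newKeys.forEach((key, i) => keyToNewIndex.set(key, i))

  const newIndexToOldIndex = new Array<number>(newKeys.length).fill(0)
  oldKeys.forEach((key, oldIndex) => {
    const newIndex = keyToNewIndex.get(key)
    // if newIndex is undefined, the old node no longer exists and would be unmounted;
    // otherwise remember where it used to live (offset by +1, since 0 is reserved for "new")
    if (newIndex !== undefined) {
      newIndexToOldIndex[newIndex] = oldIndex + 1
    }
  })
  return newIndexToOldIndex
}

// diffKeyedMiddleSketch(['c', 'd', 'e'], ['e', 'd', 'c', 'h']) -> [3, 2, 1, 0]
// The 0 marks h as a brand-new node to mount. The non-zero values 3, 2, 1 are not
// increasing, so nodes have moved, and the longest increasing subsequence from step ③
// (length 1 here) decides which single node can stay put while the others are moved.

(This sketch indexes relative to the middle slices only; Vue records indices relative to the full lists, so in the real code the values would be 5, 4, 3, 0, but the conclusion is the same.)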
This still isn't detailed enough; I'll have to come back and flesh it out.
Below is the source code; read it as you see fit.
// can be all-keyed or mixed
const patchKeyedChildren = (
c1: VNode[],
c2: VNodeArrayChildren,
container: RendererElement,
parentAnchor: RendererNode | null,
parentComponent: ComponentInternalInstance | null,
parentSuspense: SuspenseBoundary | null,
namespace: ElementNamespace,
slotScopeIds: string[] | null,
optimized: boolean,
) => {
let i = 0
const l2 = c2.length
let e1 = c1.length - 1 // prev ending index
let e2 = l2 - 1 // next ending index
// 1. sync from start
// (a b) c
// (a b) d e
while (i <= e1 && i <= e2) {
const n1 = c1[i]
const n2 = (c2[i] = optimized
? cloneIfMounted(c2[i] as VNode)
: normalizeVNode(c2[i]))
if (isSameVNodeType(n1, n2)) {
patch(
n1,
n2,
container,
null,
parentComponent,
parentSuspense,
namespace,
slotScopeIds,
optimized,
)
} else {
break
}
i++
}
// 2. sync from end
// a (b c)
// d e (b c)
while (i <= e1 && i <= e2) {
const n1 = c1[e1]
const n2 = (c2[e2] = optimized
? cloneIfMounted(c2[e2] as VNode)
: normalizeVNode(c2[e2]))
if (isSameVNodeType(n1, n2)) {
patch(
n1,
n2,
container,
null,
parentComponent,
parentSuspense,
namespace,
slotScopeIds,
optimized,
)
} else {
break
}
e1--
e2--
}
// 3. common sequence + mount
// (a b)
// (a b) c
// i = 2, e1 = 1, e2 = 2
// (a b)
// c (a b)
// i = 0, e1 = -1, e2 = 0
if (i > e1) {
if (i <= e2) {
const nextPos = e2 + 1
const anchor = nextPos < l2 ? (c2[nextPos] as VNode).el : parentAnchor
while (i <= e2) {
patch(
null,
(c2[i] = optimized
? cloneIfMounted(c2[i] as VNode)
: normalizeVNode(c2[i])),
container,
anchor,
parentComponent,
parentSuspense,
namespace,
slotScopeIds,
optimized,
)
i++
}
}
}
// 4. common sequence + unmount
// (a b) c
// (a b)
// i = 2, e1 = 2, e2 = 1
// a (b c)
// (b c)
// i = 0, e1 = 0, e2 = -1
else if (i > e2) {
while (i <= e1) {
unmount(c1[i], parentComponent, parentSuspense, true)
i++
}
}
// 5. unknown sequence
// [i ... e1 + 1]: a b [c d e] f g
// [i ... e2 + 1]: a b [e d c h] f g
// i = 2, e1 = 4, e2 = 5
else {
const s1 = i // prev starting index
const s2 = i // next starting index
// 5.1 build key:index map for newChildren
const keyToNewIndexMap: Map<PropertyKey, number> = new Map()
for (i = s2; i <= e2; i++) {
const nextChild = (c2[i] = optimized
? cloneIfMounted(c2[i] as VNode)
: normalizeVNode(c2[i]))
if (nextChild.key != null) {
if (__DEV__ && keyToNewIndexMap.has(nextChild.key)) {
warn(
`Duplicate keys found during update:`,
JSON.stringify(nextChild.key),
`Make sure keys are unique.`,
)
}
keyToNewIndexMap.set(nextChild.key, i)
}
}
// 5.2 loop through old children left to be patched and try to patch
// matching nodes & remove nodes that are no longer present
let j
let patched = 0
const toBePatched = e2 - s2 + 1
let moved = false
// used to track whether any node has moved
let maxNewIndexSoFar = 0
// works as Map<newIndex, oldIndex>
// Note that oldIndex is offset by +1
// and oldIndex = 0 is a special value indicating the new node has
// no corresponding old node.
// used for determining longest stable subsequence
const newIndexToOldIndexMap = new Array(toBePatched)
for (i = 0; i < toBePatched; i++) newIndexToOldIndexMap[i] = 0
for (i = s1; i <= e1; i++) {
const prevChild = c1[i]
if (patched >= toBePatched) {
// all new children have been patched so this can only be a removal
unmount(prevChild, parentComponent, parentSuspense, true)
continue
}
let newIndex
if (prevChild.key != null) {
newIndex = keyToNewIndexMap.get(prevChild.key)
} else {
// key-less node, try to locate a key-less node of the same type
for (j = s2; j <= e2; j++) {
if (
newIndexToOldIndexMap[j - s2] === 0 &&
isSameVNodeType(prevChild, c2[j] as VNode)
) {
newIndex = j
break
}
}
}
if (newIndex === undefined) {
unmount(prevChild, parentComponent, parentSuspense, true)
} else {
newIndexToOldIndexMap[newIndex - s2] = i + 1
if (newIndex >= maxNewIndexSoFar) {
maxNewIndexSoFar = newIndex
} else {
moved = true
}
patch(
prevChild,
c2[newIndex] as VNode,
container,
null,
parentComponent,
parentSuspense,
namespace,
slotScopeIds,
optimized,
)
patched++
}
}
// 5.3 move and mount
// generate longest stable subsequence only when nodes have moved
const increasingNewIndexSequence = moved
? getSequence(newIndexToOldIndexMap)
: EMPTY_ARR
j = increasingNewIndexSequence.length - 1
// looping backwards so that we can use last patched node as anchor
for (i = toBePatched - 1; i >= 0; i--) {
const nextIndex = s2 + i
const nextChild = c2[nextIndex] as VNode
const anchor =
nextIndex + 1 < l2 ? (c2[nextIndex + 1] as VNode).el : parentAnchor
if (newIndexToOldIndexMap[i] === 0) {
// mount new
patch(
null,
nextChild,
container,
anchor,
parentComponent,
parentSuspense,
namespace,
slotScopeIds,
optimized,
)
} else if (moved) {
// move if:
// There is no stable subsequence (e.g. a reverse)
// OR current node is not among the stable sequence
if (j < 0 || i !== increasingNewIndexSequence[j]) {
move(nextChild, container, anchor, MoveType.REORDER)
} else {
j--
}
}
}
}
}
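The getSequence helper used in step 5.3 above isn't shown in this snippet; its job is to return the indices of a longest increasing subsequence of newIndexToOldIndexMap, so that the nodes on that subsequence can be left in place while everything else is moved or mounted. Below is a deliberately simple O(n²) illustration of the same idea; it is not Vue's actual getSequence, which uses a faster binary-search variant and also skips entries equal to 0.

// Simplified longest-increasing-subsequence (returns indices), for illustration only
function simpleLIS(arr: number[]): number[] {
  const lengths = arr.map(() => 1)   // LIS length ending at each index
  const prev = arr.map(() => -1)     // back-pointers used to rebuild the result
  let best = 0
  for (let i = 1; i < arr.length; i++) {
    for (let j = 0; j < i; j++) {
      if (arr[j] < arr[i] && lengths[j] + 1 > lengths[i]) {
        lengths[i] = lengths[j] + 1
        prev[i] = j
      }
    }
    if (lengths[i] > lengths[best]) best = i
  }
  const result: number[] = []
  for (let i = best; i !== -1; i = prev[i]) result.unshift(i)
  return result
}

// simpleLIS([1, 3, 2, 4]) -> [0, 1, 3]; for [5, 4, 3, 0] the increasing part has length 1,
// which is why, in the a b [c d e] f g example, only one middle node gets to stay put.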