
Lines Matching refs:asid

46  * that have a valid ASID.
49 * then reinitialize the ASID space, and start allocating again at 1. When
50 * allocating from the ASID bitmap, we skip any ASID that has a corresponding
51 * bit set in the ASID bitmap. Eventually this causes the ASID bitmap to fill
52 * and, when completely filled, a reinitialization of the ASID space.
54 * To reinitialize the ASID space, the ASID bitmap is reset and then the ASIDs
55 * of non-kernel TLB entries get recorded in the ASID bitmap. If the entries
56 * in the TLB consume more than half of the ASID space, all ASIDs are invalidated,
57 * the ASID bitmap is recleared, and the list of pmaps is emptied. Otherwise,
58 * (the normal case), any ASID present in the TLB (even those which are no
60 * will be freed. If the size of the TLB is much smaller than the ASID space,
64 * other CPUs, some of which are dealt with by the reinitialization of the ASID
68 * since we can't change the current ASID.
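
The allocate-until-full cycle and the reinitialization decision described above can be illustrated with a small standalone C sketch. Everything here (the bitmap layout, record_live_asids(), the threshold helper) is invented for illustration and only approximates what the pmap TLB code does through its MD hooks:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define ASID_MAX	255u		/* assumed 8-bit ASID space */
#define KERNEL_ASID	0u		/* reserved; never handed to user pmaps */

static bool	asid_bitmap[ASID_MAX + 1];	/* true = ASID in use */
static uint32_t	asids_free;

/*
 * Stand-in for the MD hook that walks the TLB and records every
 * non-kernel ASID it finds; here the "TLB" is just an array of ASIDs.
 */
static uint32_t
record_live_asids(const uint32_t *live, size_t n)
{
	uint32_t found = 0;
	for (size_t i = 0; i < n; i++) {
		if (live[i] != KERNEL_ASID && !asid_bitmap[live[i]]) {
			asid_bitmap[live[i]] = true;
			found++;
		}
	}
	return found;
}

static void
asid_space_reinitialize(const uint32_t *live, size_t n)
{
	/* Reset the bitmap, keeping the kernel ASID permanently marked. */
	memset(asid_bitmap, 0, sizeof(asid_bitmap));
	asid_bitmap[KERNEL_ASID] = true;

	uint32_t found = record_live_asids(live, n);
	if (found > ASID_MAX / 2) {
		/*
		 * The surviving entries would consume more than half of the
		 * ASID space: invalidate every user entry, reclear the
		 * bitmap, and make every pmap reallocate an ASID instead.
		 */
		memset(asid_bitmap, 0, sizeof(asid_bitmap));
		asid_bitmap[KERNEL_ASID] = true;
		found = 0;
	}
	asids_free = ASID_MAX - found;
}

Allocation then simply skips set bits until the bitmap fills, at which point a routine like this runs again.
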
71 * indicates whether that pmap has an allocated ASID for a CPU. Each bit in
72 * pm_onproc indicates that the pmap's ASID is in use, i.e. a CPU has it in its
73 * "current ASID" field, e.g. the ASID field of the COP 0 register EntryHi for
74 * MIPS, or the ASID field of TTBR0 for AA64. The bit number used in these
91 * ASID for that TLB. If it does have a valid ASID but isn't currently "onproc"
92 * we simply reset its ASID for that TLB and then when it goes "onproc" it
93 * will allocate a new ASID and any existing TLB entries will be orphaned.
94 * Only in the case that the pmap has an "onproc" ASID do we actually have to send
99 * that has one of the pmap's ASIDs "onproc". In reality, any CPU sharing that
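
A sketch of the two per-pmap masks described above, using a plain 32-bit word per mask instead of the kernel's CPU-set type; the struct, field and helper names are made up for illustration:

#include <stdbool.h>
#include <stdint.h>

struct toy_pmap {
	uint32_t pm_active;	/* CPUs holding an ASID for this pmap */
	uint32_t pm_onproc;	/* CPUs currently running with that ASID */
};

static void
toy_pmap_activate(struct toy_pmap *pm, unsigned cpu)
{
	pm->pm_active |= 1u << cpu;	/* ASID allocated for this CPU */
	pm->pm_onproc |= 1u << cpu;	/* and loaded as the current ASID */
}

static void
toy_pmap_deactivate(struct toy_pmap *pm, unsigned cpu)
{
	/* Still "active" (the ASID stays allocated) but no longer onproc. */
	pm->pm_onproc &= ~(1u << cpu);
}

/*
 * A shootdown IPI is only needed when some other CPU sharing the TLB has
 * the pmap's ASID loaded; otherwise resetting the ASID is enough.
 */
static bool
toy_pmap_needs_ipi(const struct toy_pmap *pm, uint32_t tlb_cpu_mask,
    unsigned self)
{
	return (pm->pm_onproc & tlb_cpu_mask & ~(1u << self)) != 0;
}
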
105 * 1) one ASID needs to have its TLB entries invalidated
106 * 2) more than one ASID needs to have its TLB entries invalidated
108 * 4) the kernel and one or more ASIDs need their TLB entries invalidated.
112 * 1) if that ASID is still "onproc", we invalidate the TLB entries for
113 * that single ASID. If not, just reset the pmap's ASID to invalidate
114 * and let it allocate a new ASID the next time it goes "onproc",
115 * 2) we reinitialize the ASID space (preserving any "onproc" ASIDs) and
118 * 4) we reinitialize the ASID space (again preserving any "onproc" ASIDs)
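
The four cases and their responses can be sketched as a dispatch function. The enum and helper names are invented; the exact invalidations paired with cases 2) and 4), and case 3)'s action, are truncated in this excerpt, so the choices below are assumptions:

#include <stdbool.h>

/* Invented names mirroring cases 1)-4) above. */
enum shootdown_kind {
	SHOOT_ONE_ASID,		/* 1) a single user ASID */
	SHOOT_MANY_ASIDS,	/* 2) more than one user ASID */
	SHOOT_KERNEL,		/* 3) kernel mappings only */
	SHOOT_KERNEL_AND_ASIDS	/* 4) kernel plus one or more ASIDs */
};

/* Stubs standing in for the MD invalidate hooks and the ASID-space reinit. */
static void invalidate_asid(unsigned asid) { (void)asid; }
static void invalidate_all_user(void) { }
static void invalidate_kernel(void) { }
static void invalidate_everything(void) { }
static void reinit_preserving_onproc_asids(void) { }

static void
shootdown_process(enum shootdown_kind kind, unsigned victim_asid,
    bool victim_still_onproc)
{
	switch (kind) {
	case SHOOT_ONE_ASID:
		if (victim_still_onproc)
			invalidate_asid(victim_asid);
		/* else: its ASID was reset; a new one is allocated later */
		break;
	case SHOOT_MANY_ASIDS:
		reinit_preserving_onproc_asids();
		invalidate_all_user();	/* assumed continuation of case 2) */
		break;
	case SHOOT_KERNEL:
		invalidate_kernel();	/* assumed; not shown in the excerpt */
		break;
	case SHOOT_KERNEL_AND_ASIDS:
		reinit_preserving_onproc_asids();
		invalidate_everything();	/* assumed continuation of case 4) */
		break;
	}
}
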
149 #define TLBINFO_ASID_MARK_UNUSED(ti, asid) \
150 __BITMAP_CLR((asid), &(ti)->ti_asid_bitmap)
151 #define TLBINFO_ASID_MARK_USED(ti, asid) \
152 __BITMAP_SET((asid), &(ti)->ti_asid_bitmap)
153 #define TLBINFO_ASID_INUSE_P(ti, asid) \
154 __BITMAP_ISSET((asid), &(ti)->ti_asid_bitmap)
158 for (tlb_asid_t asid = 0; asid <= KERNEL_PID; asid++) \
159 TLBINFO_ASID_MARK_USED(ti, asid); \
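
A rough userspace equivalent of the bitmap primitives these macros rely on (the kernel's __BITMAP_SET/CLR/ISSET definitions are not shown in this excerpt), plus the reset loop that keeps ASIDs 0 through KERNEL_PID permanently reserved; the type and constant names are illustrative only:

#include <limits.h>
#include <stdbool.h>
#include <string.h>

#define KERNEL_PID	 0u		/* ASID reserved for the kernel */
#define ASID_SPACE_BITS	 256u		/* assumed size of the ASID space */
#define BITS_PER_LONG	 (sizeof(unsigned long) * CHAR_BIT)

typedef struct {
	unsigned long bits[ASID_SPACE_BITS / BITS_PER_LONG];
} asid_bitmap_t;

static inline void
asid_mark_used(asid_bitmap_t *bm, unsigned asid)
{
	bm->bits[asid / BITS_PER_LONG] |= 1ul << (asid % BITS_PER_LONG);
}

static inline void
asid_mark_unused(asid_bitmap_t *bm, unsigned asid)
{
	bm->bits[asid / BITS_PER_LONG] &= ~(1ul << (asid % BITS_PER_LONG));
}

static inline bool
asid_inuse_p(const asid_bitmap_t *bm, unsigned asid)
{
	return (bm->bits[asid / BITS_PER_LONG] &
	    (1ul << (asid % BITS_PER_LONG))) != 0;
}

/* Mirror of the reset loop above: ASIDs 0..KERNEL_PID stay marked used. */
static void
asid_bitmap_reset(asid_bitmap_t *bm)
{
	memset(bm, 0, sizeof(*bm));
	for (unsigned asid = 0; asid <= KERNEL_PID; asid++)
		asid_mark_used(bm, asid);
}
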
228 "pm %p asid %#x (%d)", PAI_PMAP(pai, ti), pai->pai_asid,
231 "pm %p asid %#x", PAI_PMAP(pai, ti), pai->pai_asid);
233 "pm %p asid %u", PAI_PMAP(pai, ti), pai->pai_asid);
249 UVMHIST_CALLARGS(maphist, "(ti=%#jx, pai=%#jx, pm=%#jx): asid %u",
253 * We must have an ASID but it must not be onproc (on a processor).
266 * If the platform has a cheap way to flush ASIDs then free the ASID
268 * ASID from the TLB when it's allocated. That way we know the flush
277 UVMHIST_LOG(maphist, " ... asid %u flushed", pai->pai_asid, 0,
282 UVMHIST_LOG(maphist, " ... asid marked unused",
289 * Note that we don't mark the ASID as not in use in the TLB's ASID
290 * bitmap (thus it can't be allocated until the ASID space is exhausted
292 * entries belonging to this ASID so we will let natural TLB entry
294 * pmap will need a new ASID allocated.
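
A sketch of the two free paths discussed above. Whether the platform can flush a single ASID cheaply is modelled by a compile-time flag; tlb_flush_one_asid() and the field names are stand-ins, not the real hooks:

#include <stdbool.h>

#define CHEAP_ASID_FLUSH 1	/* assumed platform capability */

struct toy_pai {
	unsigned pai_asid;	/* 0 means "no ASID allocated" */
};

static bool asid_bitmap_inuse[256];
static void tlb_flush_one_asid(unsigned asid) { (void)asid; /* MD stub */ }

static void
toy_asid_free(struct toy_pai *pai)
{
	unsigned asid = pai->pai_asid;

	pai->pai_asid = 0;	/* the pmap reallocates on next activation */

#if CHEAP_ASID_FLUSH
	/* Flush the stale entries now so the ASID can be reused at once. */
	tlb_flush_one_asid(asid);
	asid_bitmap_inuse[asid] = false;
#else
	/*
	 * No cheap per-ASID flush: leave the bitmap bit set so the ASID is
	 * not handed out again before the next reinitialization, and let
	 * normal TLB replacement evict the stale entries.
	 */
	(void)asid;
#endif
}
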
339 ti->ti_name, "asid pool reinit");
425 for (tlb_asid_t asid = 1; asid <= ti->ti_asid_max; asid++) {
426 if (TLBINFO_ASID_INUSE_P(ti, asid))
444 * First, clear the ASID bitmap (except for ASID 0 which belongs
467 * and clear the ASID bitmap. That will force everyone to
468 * allocate a new ASID.
511 * Now go through the active ASIDs. If the ASID is on a processor or
513 * that ASID, mark it as in use. Otherwise release the ASID.
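
The walk over the TLB's pmap list during reinitialization could look roughly like the following; the list and fields are simplified stand-ins, and only pmaps that already hold a valid (non-zero) ASID are assumed to be on the list:

#include <stdbool.h>
#include <stddef.h>

struct toy_pm {
	struct toy_pm	*next;		/* link on this TLB's pmap list */
	unsigned	 asid;		/* valid (non-zero) for listed pmaps */
	bool		 onproc;	/* currently loaded on some CPU */
};

static bool asid_live[256];	/* rebuilt from the TLB scan; [0] = kernel */

static struct toy_pm *
reinit_pmap_walk(struct toy_pm *head)
{
	struct toy_pm *kept = NULL, *pm, *next;

	for (pm = head; pm != NULL; pm = next) {
		next = pm->next;
		if (pm->onproc || asid_live[pm->asid]) {
			/* Still live: keep the ASID and keep the pmap listed. */
			asid_live[pm->asid] = true;
			pm->next = kept;
			kept = pm;
		} else {
			/* Release: a fresh ASID is allocated on activation. */
			pm->asid = 0;
			pm->next = NULL;
		}
	}
	return kept;		/* the new, possibly shorter, pmap list */
}
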
567 * We only need to invalidate one user ASID.
573 UVMHIST_LOG(maphist, "... onproc asid %jd", pai->pai_asid, 0, 0, 0);
583 UVMHIST_LOG(maphist, "... not active asid %jd", pai->pai_asid, 0, 0, 0);
586 * So simply clear its ASID and when pmap_activate is
588 * ASID.
702 * ASID so there's nothing to change.
732 * If this pmap has an ASID assigned but it's not
733 * currently running, nuke its ASID. Next time the
734 * pmap is activated, it will allocate a new ASID.
776 "pmap %p (asid %u) va %#"PRIxVADDR" pte %#"PRIxPTE" rv %d",
799 UVMHIST_CALLARGS(maphist, " (pm=%#jx va=%#jx) ti=%#jx asid=%#jx",
805 UVMHIST_LOG(maphist, " invalidating %#jx asid %#jx",
822 * We shouldn't have an ASID assigned, and thus must not be onproc
836 * If the last ASID allocated was the maximum ASID, then the
838 * available ASID.
848 * Let's see if the hinted ASID is free. If not search for
878 * The hint contains our next ASID so take it and advance the hint.
880 * There is also one fewer ASID free in this TLB.
886 * Clean the new ASID from the TLB.
897 * Mark that we now have an active ASID for all CPUs sharing this TLB.
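
Putting the hint bookkeeping above into a compact sketch (the bitmap, the hint variable and tlb_clean_asid() are all illustrative; the real allocator works on the tlbinfo's own bitmap and hint fields):

#include <stdbool.h>

#define ASID_MAX 255u

static bool	asid_inuse[ASID_MAX + 1];	/* [0] reserved for the kernel */
static unsigned	asid_hint = 1;			/* next ASID to try */
static unsigned	asids_free = ASID_MAX;

static void tlb_clean_asid(unsigned asid) { (void)asid; /* MD stub */ }

/* Returns 0 when the space is exhausted and must be reinitialized first. */
static unsigned
toy_asid_alloc(void)
{
	if (asids_free == 0)
		return 0;

	/* If the last allocation used ASID_MAX, restart just above the kernel's. */
	if (asid_hint > ASID_MAX)
		asid_hint = 1;

	/* Try the hinted ASID first, then search upward, then wrap. */
	unsigned asid = asid_hint;
	while (asid <= ASID_MAX && asid_inuse[asid])
		asid++;
	if (asid > ASID_MAX)
		for (asid = 1; asid_inuse[asid]; asid++)
			continue;	/* asids_free > 0, so this terminates */

	asid_inuse[asid] = true;	/* take it ... */
	asid_hint = asid + 1;		/* ... and advance the hint */
	asids_free--;			/* one fewer ASID free in this TLB */

	tlb_clean_asid(asid);		/* purge any stale entries tagged with it */
	return asid;
}

Marking the pmap active for every CPU sharing the TLB, the last step mentioned above, is omitted from this sketch.
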
910 * Acquire a TLB address space tag (called ASID or TLBPID) and return it.
911 * The ASID might already have been acquired.
927 * The kernel uses a fixed ASID and thus doesn't need to acquire one.
954 * If we've run out of ASIDs, reinitialize the ASID space.
958 UVMHIST_LOG(maphist, " asid reinit", 0, 0, 0, 0);
964 * Get an ASID.
967 UVMHIST_LOG(maphist, "allocated asid %#jx", pai->pai_asid,
986 UVMHIST_LOG(maphist, "setting asid to %#jx", pai->pai_asid,
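
The acquire path these lines come from can be summarized as: the kernel pmap is skipped, an exhausted ASID space triggers a reinitialization, and an already-assigned ASID is simply reloaded. A toy version follows, with a trivial sequential allocator standing in for the bitmap one and tlb_set_current_asid() standing in for the MD hook that loads the ASID register:

#include <stdbool.h>

#define TOY_ASID_MAX 255u

struct toy_upmap {
	bool	 is_kernel;
	unsigned asid;		/* 0 = none allocated */
};

static unsigned toy_free = TOY_ASID_MAX;	/* free-ASID counter */
static unsigned toy_next = 1;

static void tlb_set_current_asid(unsigned asid) { (void)asid; /* MD stub */ }

static void
toy_asid_space_reinit(void)
{
	/* The real code rebuilds the bitmap from live TLB entries instead. */
	toy_free = TOY_ASID_MAX;
	toy_next = 1;
}

static unsigned
toy_asid_take(void)
{
	/* Trivial sequential stand-in for the bitmap allocator. */
	toy_free--;
	return toy_next++;
}

static void
toy_asid_acquire(struct toy_upmap *pm)
{
	if (pm->is_kernel)
		return;			/* the kernel's ASID is fixed */

	if (pm->asid == 0) {		/* none allocated yet for this TLB */
		if (toy_free == 0)
			toy_asid_space_reinit();
		pm->asid = toy_asid_take();
	}
	/* Make it the current ASID on this CPU. */
	tlb_set_current_asid(pm->asid);
}
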
1073 "pm %p i %zu asid %u",
1080 * If the pmap has an ASID allocated, free it.
1105 const tlb_asid_t asid __debugused = tlb_get_asid();
1106 UVMHIST_LOG(maphist, " asid %u vs pmap_cur_asid %u", asid,
1108 KDASSERTMSG(asid == curcpu()->ci_pmap_asid_cur,
1109 "%s: asid (%#x) != current asid (%#x)",
1110 __func__, asid, curcpu()->ci_pmap_asid_cur);
1135 pr(" asid %5u\n", pm->pm_pai[0].pai_asid);
1138 pr(" tlb %zu asid %5u\n", i, pm->pm_pai[i].pai_asid);