openSUSE:Leap:15.5:Update / xen.25150 / 60422428-x86-shadow-avoid-fast-fault-path.patch
File 60422428-x86-shadow-avoid-fast-fault-path.patch of Package xen.25150
# Commit 9318fdf757ec234f0ee6c5cd381326b2f581d065
# Date 2021-03-05 13:29:28 +0100
# Author Jan Beulich <jbeulich@suse.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86/shadow: suppress "fast fault path" optimization without reserved bits

When none of the physical address bits in PTEs are reserved, we can't
create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
this case, which is most easily achieved by never creating any magic
entries.

To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
such hardware.

While at it, also avoid using an MMIO magic entry when that would
truncate the incoming GFN.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>

# Commit 60c0444fae2148452f9ed0b7c49af1fa41f8f522
# Date 2021-03-08 10:41:50 +0100
# Author Jan Beulich <jbeulich@suse.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86/shadow: suppress "fast fault path" optimization when running virtualized

We can't make correctness of our own behavior dependent upon a
hypervisor underneath us correctly telling us the true physical address
width the hardware uses. Without knowing this, we can't be certain
reserved bit faults can actually be observed. Therefore, besides
evaluating the number of address bits when deciding whether to use the
optimization, also check whether we're running virtualized ourselves.
(Note that since we may get migrated when running virtualized, the
number of address bits may also change.)

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -569,7 +569,8 @@ _sh_propagate(struct vcpu *v,
     {
         /* Guest l1e maps emulated MMIO space */
         *sp = sh_l1e_mmio(target_gfn, gflags);
-        d->arch.paging.shadow.has_fast_mmio_entries = true;
+        if ( sh_l1e_is_magic(*sp) )
+            d->arch.paging.shadow.has_fast_mmio_entries = true;
         goto done;
     }
 
--- a/xen/arch/x86/mm/shadow/types.h
+++ b/xen/arch/x86/mm/shadow/types.h
@@ -290,24 +290,41 @@ void sh_destroy_monitor_table(struct vcp
  * pagetables.
  *
  * This is only feasible for PAE and 64bit Xen: 32-bit non-PAE PTEs don't
- * have reserved bits that we can use for this.
+ * have reserved bits that we can use for this.  And even there it can only
+ * be used if we can be certain the processor doesn't use all 52 address bits.
  */
 
 #define SH_L1E_MAGIC 0xffffffff00000001ULL
+
+static inline bool sh_have_pte_rsvd_bits(void)
+{
+    return paddr_bits < PADDR_BITS && !cpu_has_hypervisor;
+}
+
 static inline bool sh_l1e_is_magic(shadow_l1e_t sl1e)
 {
     return (sl1e.l1 & SH_L1E_MAGIC) == SH_L1E_MAGIC;
 }
 
 /* Guest not present: a single magic value */
-static inline shadow_l1e_t sh_l1e_gnp(void)
+static inline shadow_l1e_t sh_l1e_gnp_raw(void)
 {
     return (shadow_l1e_t){ -1ULL };
 }
 
+static inline shadow_l1e_t sh_l1e_gnp(void)
+{
+    /*
+     * On systems with no reserved physical address bits we can't engage the
+     * fast fault path.
+     */
+    return sh_have_pte_rsvd_bits() ? sh_l1e_gnp_raw()
+                                   : shadow_l1e_empty();
+}
+
 static inline bool sh_l1e_is_gnp(shadow_l1e_t sl1e)
 {
-    return sl1e.l1 == sh_l1e_gnp().l1;
+    return sl1e.l1 == sh_l1e_gnp_raw().l1;
 }
 
 /*
@@ -322,9 +339,14 @@ static inline bool sh_l1e_is_gnp(shadow_
 
 static inline shadow_l1e_t sh_l1e_mmio(gfn_t gfn, u32 gflags)
 {
-    return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC
-                             | MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK)
-                             | (gflags & (_PAGE_USER|_PAGE_RW))) };
+    unsigned long gfn_val = MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK);
+
+    if ( !sh_have_pte_rsvd_bits() ||
+         gfn_x(gfn) != MASK_EXTR(gfn_val, SH_L1E_MMIO_GFN_MASK) )
+        return shadow_l1e_empty();
+
+    return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC | gfn_val |
+                             (gflags & (_PAGE_USER | _PAGE_RW))) };
 }
 
 static inline bool sh_l1e_is_mmio(shadow_l1e_t sl1e)