File xsa444-1.patch of Package xen (Project SUSE:SLE-15-SP3:Update)
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/svm: Fix asymmetry with AMD DR MASK context switching

The handling of MSR_DR{0..3}_MASK is asymmetric between PV and HVM guests.

HVM guests context switch in based on the guest view of DBEXT, whereas PV
guests switch in based on the host capability.  Both guest types leave the
context dirty for the next vCPU.

This leads to the following issue:

 * PV or HVM vCPU has debugging active (%dr7 + mask)
 * Switch out deactivates %dr7 but leaves other state stale in hardware
 * HVM vCPU with debugging active but which can't see DBEXT is switched in
 * Switch in loads %dr7 but leaves the mask MSRs alone

Now, the HVM vCPU is operating in the context of the prior vCPU's mask MSRs,
and furthermore in a case where it genuinely expects there to be no masking
MSRs.

As a stopgap, adjust the HVM path to switch in/out the masks based on host
capabilities rather than guest visibility (i.e. like the PV path).
Adjustment of the intercepts still needs to be dependent on the guest
visibility of DBEXT.

This is part of XSA-444 / CVE-2023-34327

Fixes: c097f54912d3 ("x86/SVM: support data breakpoint extension registers")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -185,6 +185,10 @@ static void svm_save_dr(struct vcpu *v)
     v->arch.hvm.flag_dr_dirty = 0;
     vmcb_set_dr_intercepts(vmcb, ~0u);
 
+    /*
+     * The guest can only have changed the mask MSRs if we previously dropped
+     * intercepts.  Re-read them from hardware.
+     */
     if ( v->domain->arch.cpuid->extd.dbext )
     {
         svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_RW);
@@ -216,17 +220,25 @@ static void __restore_debug_registers(st
 
     ASSERT(v == current);
 
-    if ( v->domain->arch.cpuid->extd.dbext )
+    /*
+     * Both the PV and HVM paths leave stale DR_MASK values in hardware on
+     * context-switch-out.  If we're activating %dr7 for the guest, we must
+     * sync the DR_MASKs too, whether or not the guest can see them.
+     */
+    if ( boot_cpu_has(X86_FEATURE_DBEXT) )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-
         wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
         wrmsrl(MSR_AMD64_DR2_ADDRESS_MASK, v->arch.msrs->dr_mask[2]);
         wrmsrl(MSR_AMD64_DR3_ADDRESS_MASK, v->arch.msrs->dr_mask[3]);
+
+        if ( v->domain->arch.cpuid->extd.dbext )
+        {
+            svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+            svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+            svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+            svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+        }
     }
 
     write_debugreg(0, v->arch.dr[0]);
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2157,6 +2157,11 @@ void activate_debugregs(const struct vcp
     if ( curr->arch.dr7 & DR7_ACTIVE_MASK )
        write_debugreg(7, curr->arch.dr7);
 
+    /*
+     * Both the PV and HVM paths leave stale DR_MASK values in hardware on
+     * context-switch-out.  If we're activating %dr7 for the guest, we must
+     * sync the DR_MASKs too, whether or not the guest can see them.
+     */
     if ( boot_cpu_has(X86_FEATURE_DBEXT) )
     {
         wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, curr->arch.msrs->dr_mask[0]);
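
Editor's note: the following is an illustrative sketch only and is NOT part of xsa444-1.patch. It is a toy, self-contained C model of the vCPU switch sequence described in the commit message, using hypothetical names (toy_vcpu, hw_dr_mask, switch_in_old/new); it shows why the DR_MASK MSRs must be restored whenever the host supports DBEXT, not only when the incoming guest can see the feature.

/* Illustrative sketch only -- not part of the patch above.  All names here
 * are hypothetical stand-ins for the real Xen structures and MSRs. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct toy_vcpu {
    uint64_t dr_mask[4];        /* guest view of MSR_AMD64_DRn_ADDRESS_MASK */
    bool     guest_sees_dbext;  /* is DBEXT exposed in the guest's CPUID?   */
};

static uint64_t hw_dr_mask[4];            /* stands in for the real MSRs   */
static const bool host_has_dbext = true;  /* assume an AMD host with DBEXT */

/* Old HVM behaviour: masks only synced when the guest can see DBEXT. */
static void switch_in_old(const struct toy_vcpu *v)
{
    if ( v->guest_sees_dbext )
        for ( unsigned int i = 0; i < 4; i++ )
            hw_dr_mask[i] = v->dr_mask[i];
    /* Otherwise the previous vCPU's masks stay live in hardware. */
}

/* Fixed behaviour: masks synced based on host capability (like PV). */
static void switch_in_new(const struct toy_vcpu *v)
{
    if ( host_has_dbext )
        for ( unsigned int i = 0; i < 4; i++ )
            hw_dr_mask[i] = v->dr_mask[i];
}

int main(void)
{
    struct toy_vcpu debugger = { .dr_mask = { 0xfff, 0, 0, 0 },
                                 .guest_sees_dbext = true };
    struct toy_vcpu victim   = { .dr_mask = { 0 },
                                 .guest_sees_dbext = false };

    switch_in_old(&debugger);   /* leaves 0xfff in hw_dr_mask[0]            */
    switch_in_old(&victim);     /* victim can't see DBEXT: mask left stale  */
    printf("old: victim runs with DR0 mask %#llx\n",
           (unsigned long long)hw_dr_mask[0]);

    switch_in_old(&debugger);
    switch_in_new(&victim);     /* masks restored from victim's own state   */
    printf("new: victim runs with DR0 mask %#llx\n",
           (unsigned long long)hw_dr_mask[0]);

    return 0;
}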