SUSE:SLE-12-SP3:GA / xen.7316 / 58a70d94-VMX-fix-VMCS-race-on-cswitch-paths.patch
File 58a70d94-VMX-fix-VMCS-race-on-cswitch-paths.patch of Package xen.7316
# Commit 2f4d2198a9b3ba94c959330b5c94fe95917c364c
# Date 2017-02-17 15:49:56 +0100
# Author Jan Beulich <jbeulich@suse.com>
# Committer Jan Beulich <jbeulich@suse.com>
VMX: fix VMCS race on context-switch paths

When __context_switch() is being bypassed during original context
switch handling, the vCPU "owning" the VMCS partially loses control of
it: It will appear non-running to remote CPUs, and hence their attempt
to pause the owning vCPU will have no effect on it (as it already looks
to be paused). At the same time the "owning" CPU will re-enable
interrupts eventually (at the latest when entering the idle loop) and
hence becomes subject to IPIs from other CPUs requesting access to the
VMCS. As a result, when __context_switch() finally gets run, the CPU
may no longer have the VMCS loaded, and hence any accesses to it would
fail. Hence we may need to re-load the VMCS in vmx_ctxt_switch_from().

For consistency use the new function also in vmx_do_resume(), to avoid
leaving an open-coded incarnation of it around.

Reported-by: Kevin Mayer <Kevin.Mayer@gdata.de>
Reported-by: Anshul Makkar <anshul.makkar@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>

--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -470,6 +470,20 @@ static void vmx_load_vmcs(struct vcpu *v
     local_irq_restore(flags);
 }
 
+void vmx_vmcs_reload(struct vcpu *v)
+{
+    /*
+     * As we may be running with interrupts disabled, we can't acquire
+     * v->arch.hvm_vmx.vmcs_lock here. However, with interrupts disabled
+     * the VMCS can't be taken away from us anymore if we still own it.
+     */
+    ASSERT(v->is_running || !local_irq_is_enabled());
+    if ( v->arch.hvm_vmx.vmcs == this_cpu(current_vmcs) )
+        return;
+
+    vmx_load_vmcs(v);
+}
+
 int vmx_cpu_up_prepare(unsigned int cpu)
 {
     /*
@@ -1334,10 +1348,7 @@ void vmx_do_resume(struct vcpu *v)
     bool_t debug_state;
 
     if ( v->arch.hvm_vmx.active_cpu == smp_processor_id() )
-    {
-        if ( v->arch.hvm_vmx.vmcs != this_cpu(current_vmcs) )
-            vmx_load_vmcs(v);
-    }
+        vmx_vmcs_reload(v);
     else
     {
         /*
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -684,6 +684,18 @@ static void vmx_ctxt_switch_from(struct
     if ( unlikely(!this_cpu(vmxon)) )
         return;
 
+    if ( !v->is_running )
+    {
+        /*
+         * When this vCPU isn't marked as running anymore, a remote pCPU's
+         * attempt to pause us (from vmx_vmcs_enter()) won't have a reason
+         * to spin in vcpu_sleep_sync(), and hence that pCPU might have taken
+         * away the VMCS from us. As we're running with interrupts disabled,
+         * we also can't call vmx_vmcs_enter().
+         */
+        vmx_vmcs_reload(v);
+    }
+
     vmx_fpu_leave(v);
     vmx_save_guest_msrs(v);
     vmx_restore_host_msrs();
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -149,6 +149,7 @@ void vmx_destroy_vmcs(struct vcpu *v);
 void vmx_vmcs_enter(struct vcpu *v);
 bool_t __must_check vmx_vmcs_try_enter(struct vcpu *v);
 void vmx_vmcs_exit(struct vcpu *v);
+void vmx_vmcs_reload(struct vcpu *v);
 
 #define CPU_BASED_VIRTUAL_INTR_PENDING 0x00000004
 #define CPU_BASED_USE_TSC_OFFSETING   0x00000008