SUSE:SLE-12-SP3:GA / xen.5854
File 56fd1d74-x86-HVM-fix-forwarding-of-internally-cached-requests.patch of Package xen.5854
References: bsc#963161

# Commit 96ae556569b8eaedc0bb242932842c3277b515d8
# Date 2016-03-31 14:52:04 +0200
# Author Jan Beulich <jbeulich@suse.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86/HVM: fix forwarding of internally cached requests

Forwarding entire batches to the device model when an individual
iteration of them got rejected by internal device emulation handlers
with X86EMUL_UNHANDLEABLE is wrong: The device model would then handle
all iterations, without the internal handler getting to see any past
the one it returned failure for. This causes misbehavior in at least
the MSI-X and VGA code, which want to see all such requests for
internal tracking/caching purposes. But note that this does not apply
to buffered I/O requests.

This in turn means that the condition in hvm_process_io_intercept() of
when to crash the domain was wrong: Since X86EMUL_UNHANDLEABLE can
validly be returned by the individual device handlers, we mustn't
blindly crash the domain if such occurs on other than the initial
iteration. Instead we need to distinguish hvm_copy_*_guest_phys()
failures from device specific ones, and then the former need to always
be fatal to the domain (i.e. also on the first iteration), since
otherwise we again would end up forwarding a request to qemu which the
internal handler didn't get to see.

The adjustment should be okay even for stdvga's MMIO handling:
- if it is not caching then the accept function would have failed so we
  won't get into hvm_process_io_intercept(),
- if it issued the buffered ioreq then we only get to the p->count
  reduction if hvm_send_ioreq() actually encountered an error (in which
  case we don't care about the request getting split up).

Also commit 4faffc41d ("x86/hvm: limit reps to avoid the need to handle
retry") went too far in removing code from hvm_process_io_intercept():
When there were successfully handled iterations, the function should
continue to return success with a clipped repeat count.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

# Commit 670ee15ac1e3de7c15381fdaab0e531489b48939
# Date 2016-04-28 15:09:26 +0200
# Author Jan Beulich <jbeulich@suse.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86/HVM: fix forwarding of internally cached requests (part 2)

Commit 96ae556569 ("x86/HVM: fix forwarding of internally cached
requests") wasn't quite complete: hvmemul_do_io() also needs to
propagate up the clipped count. (I really should have re-tested the
forward port resulting in the earlier change, instead of relying on
the testing done on the older version of Xen which the fix was first
needed for.)

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -216,11 +216,17 @@ static int hvmemul_do_io(
             rc = hvm_portio_intercept(p);
     }
 
+    /*
+     * p->count may have got reduced (see hvm_mmio_access() and
+     * process_portio_intercept()) - inform our callers.
+     */
+    if ( p->count <= *reps )
+        *reps = p->count;
+
     switch ( rc )
     {
     case X86EMUL_OKAY:
     case X86EMUL_RETRY:
-        *reps = p->count;
         p->state = STATE_IORESP_READY;
         if ( !vio->mmio_retry )
         {
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -140,8 +140,8 @@ static int hvm_mmio_access(struct vcpu *
                 ASSERT(0);
                 /* fall through */
             default:
-                rc = X86EMUL_UNHANDLEABLE;
-                break;
+                domain_crash(v->domain);
+                return X86EMUL_UNHANDLEABLE;
             }
             if ( rc != X86EMUL_OKAY )
                 break;
@@ -159,6 +159,15 @@ static int hvm_mmio_access(struct vcpu *
         p->count = i;
         rc = X86EMUL_OKAY;
     }
+    else if ( rc == X86EMUL_UNHANDLEABLE )
+    {
+        /*
+         * Don't forward entire batches to the device model: This would
+         * prevent the internal handlers to see subsequent iterations of
+         * the request.
+         */
+        p->count = 1;
+    }
 
     return rc;
 }
@@ -302,8 +311,8 @@ static int process_portio_intercept(port
                 ASSERT(0);
                 /* fall through */
             default:
-                rc = X86EMUL_UNHANDLEABLE;
-                break;
+                domain_crash(current->domain);
+                return X86EMUL_UNHANDLEABLE;
             }
             if ( rc != X86EMUL_OKAY )
                 break;
@@ -321,6 +330,15 @@ static int process_portio_intercept(port
         p->count = i;
         rc = X86EMUL_OKAY;
     }
+    else if ( rc == X86EMUL_UNHANDLEABLE )
+    {
+        /*
+         * Don't forward entire batches to the device model: This would
+         * prevent the internal handlers to see subsequent iterations of
+         * the request.
+         */
+        p->count = 1;
+    }
 
     return rc;
 }
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -362,8 +362,8 @@ static int dpci_ioport_read(uint32_t mpo
             ASSERT(0);
             /* fall through */
         default:
-            rc = X86EMUL_UNHANDLEABLE;
-            break;
+            domain_crash(current->domain);
+            return X86EMUL_UNHANDLEABLE;
         }
         if ( rc != X86EMUL_OKAY)
            break;
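
Both commits apply the same pattern: when an internal intercept handler stops part-way through a batch, the request's count is clipped to what was actually handled (or to a single iteration when nothing was) before anything can be forwarded to the device model, and the caller's repeat count is updated to match. The following is a minimal, self-contained sketch of that pattern only; enum rc, struct req, handle_one, intercept() and do_io() are simplified stand-ins for illustration, not the real Xen ioreq/emulator interfaces, and the fatal hvm_copy_*_guest_phys() / domain_crash() handling from the patch is omitted.

/*
 * Sketch of the clipping pattern, assuming simplified stand-in types:
 * 'enum rc', 'struct req' and the 'handle_one' callback are
 * illustrations, not the real Xen structures.
 */
#include <stddef.h>
#include <stdio.h>

enum rc { OKAY, RETRY, UNHANDLEABLE };

struct req {
    size_t count;               /* iterations in the batch */
};

/*
 * Run an internal handler for each iteration of a batch.  On failure,
 * either report the successfully handled prefix (clipping req->count
 * to it), or - when nothing was handled - clip the batch to a single
 * iteration so the device model cannot consume iterations the
 * internal handler still needs to see.
 */
static enum rc intercept(struct req *p, enum rc (*handle_one)(size_t))
{
    enum rc rc = OKAY;
    size_t i;

    for ( i = 0; i < p->count; i++ )
    {
        rc = handle_one(i);
        if ( rc != OKAY )
            break;
    }

    if ( i != 0 && rc != OKAY )
    {
        /* Some iterations succeeded: report them, drop the rest. */
        p->count = i;
        rc = OKAY;
    }
    else if ( rc == UNHANDLEABLE )
    {
        /* Don't forward the entire batch to the device model. */
        p->count = 1;
    }

    return rc;
}

/*
 * Caller side (cf. the emulate.c hunk): the possibly reduced count
 * must be propagated into *reps so the emulator knows how many
 * repetitions were actually consumed.
 */
static enum rc do_io(struct req *p, size_t *reps,
                     enum rc (*handle_one)(size_t))
{
    enum rc rc = intercept(p, handle_one);

    if ( p->count <= *reps )
        *reps = p->count;

    return rc;
}

/* Demo handler: the third iteration is not handled internally. */
static enum rc fail_third(size_t i)
{
    return i == 2 ? UNHANDLEABLE : OKAY;
}

int main(void)
{
    struct req p = { .count = 8 };
    size_t reps = 8;
    enum rc rc = do_io(&p, &reps, fail_third);

    /* Two iterations were handled; count and reps are clipped to 2. */
    printf("rc=%d count=%zu reps=%zu\n", rc, p.count, reps);
    return 0;
}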