Xen/arm: No memory limit for dom0less domUs

The dom0less feature allows an administrator to create multiple unprivileged domains directly from Xen. Unfortunately, the memory limit for them is not set.

Another race in XENMAPSPACE_grant_table handling

Guests are permitted access to certain Xen-owned pages of memory. The majority of such pages remain allocated / associated with a guest for its entire lifetime. Grant table v2 status pages, however, are de-allocated when a guest switches (back) from v2 to v1. Freeing such pages requires that the hypervisor enforce that no parallel request can result in the addition of a mapping of such a page to a guest. That enforcement was missing, allowing guests to retain access to pages that were freed and perhaps re-used for other purposes. Unfortunately, when XSA-379 was being prepared, this similar issue was not noticed.

PCI devices with RMRRs not deassigned correctly

Certain PCI devices in a system might be assigned Reserved Memory Regions (specified via Reserved Memory Region Reporting, "RMRR"). These are typically used for platform tasks such as legacy USB emulation. If such a device is passed through to a guest, then on guest shutdown the device is not properly deassigned. The IOMMU configuration for such a device then ends up pointing to a freed data structure, including the IO page tables. Subsequent DMA or interrupts from the device will have unpredictable behaviour, ranging from IOMMU faults to memory corruption.
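The XENMAPSPACE_grant_table race above can be modeled abstractly. The sketch below is plain Python, not Xen code; `ToyHypervisor`, `switch_to_v1`, and `add_to_physmap_grant_table` are all hypothetical names. It interleaves a v2-to-v1 switch that frees status pages with a racing map request, showing that without mutual exclusion the guest ends up mapping an already-freed page:

```python
# Toy model of the missing enforcement (hypothetical names, not Xen internals).

class ToyHypervisor:
    def __init__(self):
        self.status_pages = {0: "status_pg0"}  # grant table v2 status pages
        self.guest_physmap = {}                # gfn -> page
        self.free_pool = set()                 # pages freed and re-usable

    def switch_to_v1(self):
        """Free the v2 status pages. The flawed version does NOT block
        concurrent add_to_physmap requests targeting these pages."""
        for _idx, page in self.status_pages.items():
            self.free_pool.add(page)           # page is now free / re-usable
        self.status_pages.clear()

    def add_to_physmap_grant_table(self, gfn, page):
        # Racing request: nothing here checks that the page was just freed.
        self.guest_physmap[gfn] = page

hyp = ToyHypervisor()
page = hyp.status_pages[0]

# Interleaving: the free completes first, then the racing map request lands.
hyp.switch_to_v1()
hyp.add_to_physmap_grant_table(gfn=0x1000, page=page)

# The guest now maps a page sitting in the free pool, ready for re-use.
leaked = hyp.guest_physmap[0x1000] in hyp.free_pool
print(leaked)  # True: guest retains access to a freed page
```

The fix implied by the advisory is to make the free and the map request mutually exclusive, so no mapping of a freed status page can be inserted.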
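The RMRR deassignment failure described above is a use-after-free pattern. The toy model below (plain Python, hypothetical names, not Xen code) shows guest teardown freeing the IO page tables while skipping the deassignment of an RMRR device, leaving that device's IOMMU context pointing at the freed state:

```python
# Toy model of the RMRR deassignment bug (hypothetical names).

class IommuContext:
    def __init__(self):
        self.device_context = {}      # device -> its IO pagetable object

class Domain:
    def __init__(self, iommu, devices):
        self.pagetables = {"root": "io-pagetables"}
        for dev in devices:
            iommu.device_context[dev] = self.pagetables

def shutdown_flawed(domain, iommu, devices):
    """Flawed teardown: frees the domain's IO pagetables, but devices
    with RMRRs are never deassigned from the IOMMU context."""
    for dev in devices:
        if not dev.endswith("-rmrr"):          # RMRR devices are skipped
            del iommu.device_context[dev]
    domain.pagetables.clear()                  # stand-in for freeing memory

iommu = IommuContext()
devices = ["nic0", "usb0-rmrr"]
dom = Domain(iommu, devices)
shutdown_flawed(dom, iommu, devices)

# The RMRR device's context still references the freed pagetable object,
# so any later DMA through it hits stale state.
dangling = iommu.device_context.get("usb0-rmrr")
print(dangling is dom.pagetables and dangling == {})  # True: dangling reference
```

In the real bug the consequences are worse than a stale dictionary: DMA through the dangling context ranges from IOMMU faults to memory corruption.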
Certain VT-d IOMMUs may not work in shared page table mode

For efficiency reasons, address translation control structures (page tables) may (and, on suitable hardware, by default will) be shared between CPUs, for second-level translation (EPT), and IOMMUs. These page tables are presently set up to always be 4 levels deep. However, an IOMMU may require the use of just 3 page table levels. In such a configuration the top level table needs to be stripped before inserting the root table's address into the hardware pagetable base register. When sharing page tables, Xen erroneously skipped this stripping. Consequently, the guest is able to write to leaf page table entries.

Grant table v2 status pages may remain accessible after de-allocation (take two)

Guests are permitted access to certain Xen-owned pages of memory. Grant table v2 status pages, however, get de-allocated when a guest switches (back) from v2 to v1. The freeing of such pages requires that the hypervisor know where in the guest these pages were mapped. The hypervisor tracks only one use within guest space, but racing requests from the guest to insert mappings of these pages may result in any of them becoming mapped in multiple locations. Upon switching back from v2 to v1, the guest would then retain access to a page that was freed and perhaps re-used for other purposes. This bug was fortuitously fixed by code cleanup in Xen 4.14, and backported to security-supported Xen branches as a prerequisite of the fix for XSA-378.
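Why tracking only one mapping location fails can be shown in a few lines. The sketch below (plain Python with hypothetical names, not Xen code) has the guest map the same status page at two guest frame numbers; because only the most recent location is recorded, the v2-to-v1 switch unmaps just that one, and the first mapping survives past the free:

```python
# Toy model of single-location mapping tracking (hypothetical names).

class ToyTracker:
    def __init__(self):
        self.guest_physmap = {}     # gfn -> page
        self.tracked_gfn = None     # only ONE use within guest space tracked

    def map_status_page(self, gfn, page):
        self.guest_physmap[gfn] = page
        self.tracked_gfn = gfn      # a later mapping overwrites the record

    def switch_to_v1(self):
        # Unmap only the tracked location, then consider the page freed.
        self.guest_physmap.pop(self.tracked_gfn, None)

t = ToyTracker()
t.map_status_page(0x1000, "status_pg")   # first mapping
t.map_status_page(0x2000, "status_pg")   # racing second mapping of same page
t.switch_to_v1()

# The first mapping was never removed: the guest still reaches the
# now-freed (and perhaps re-used) page.
print(0x1000 in t.guest_physmap)  # True
```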
Xen Orchestra (with xo-web through 5.80.0 and xo-server through 5.84.0) mishandles authorization, as demonstrated by modified WebSocket resourceSet.getAll data in which the attacker changes the permission field from none to admin. The attacker gains access to data sets such as VMs, Backups, Audit, Users, and Groups.

An issue was discovered in Xen 4.12.3 through 4.12.4 and 4.13.1 through 4.14.x. An x86 HVM guest with PCI pass through devices can force the allocation of all IDT vectors on the system by rebooting itself with MSI or MSI-X capabilities enabled and entries set up. Such reboots will leak any vectors used by the MSI(-X) entries that the guest might have enabled, and hence will lead to vector exhaustion on the system, not allowing further PCI pass through devices to work properly. HVM guests with PCI pass through devices can mount a Denial of Service (DoS) attack affecting the pass through of PCI devices to other guests or the hardware domain. In the latter case, this would affect the entire host.
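The vector-exhaustion mechanics above reduce to a simple leak against a finite pool. The sketch below (plain Python, hypothetical names, with a deliberately tiny pool for illustration) allocates fresh vectors for the MSI entries on every guest boot and, like the flawed teardown, never returns the previous boot's vectors:

```python
# Toy model of the MSI(-X) vector leak (hypothetical names, tiny pool).

TOTAL_VECTORS = 8                          # real IDTs have many more vectors

class VectorPool:
    def __init__(self, n):
        self.free = list(range(n))

    def alloc(self):
        if not self.free:
            raise RuntimeError("vector exhaustion")
        return self.free.pop()

def reboot_guest_flawed(pool, entries):
    """Flawed path: allocates vectors for the MSI entries enabled at boot,
    but the vectors from before the reboot are never freed."""
    return [pool.alloc() for _ in range(entries)]

pool = VectorPool(TOTAL_VECTORS)
reboots = 0
try:
    while True:
        reboot_guest_flawed(pool, entries=2)   # 2 MSI entries per boot
        reboots += 1
except RuntimeError as exc:
    print(f"exhausted after {reboots} successful reboots: {exc}")
```

With 8 vectors and 2 leaked per reboot, the pool is empty after 4 reboots; at that point no further pass through device on the host can obtain vectors, which is the DoS the advisory describes.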