WYSE Graphics Desktop Virtualization Solution - NVIDIA GRID
Desktop Virtualization Solution
Table of Contents
1 Overview
1.1 Project Background
1.2 The Customer's Current Problems
1.3 Analysis of Customer Requirements
2 Overall System Design
2.1 Design Principles
2.2 System Design Goals
2.3 The Hongshan (红山) Solution
2.4 Advantages of the Hongshan Solution
3 Proposed Solution
3.1 Solution Design
3.2 Solution Topology
3.3 Solution Description
3.4 Solution Analysis
4 Deployment and Implementation
4.1 TurboGate Installation
4.2 NComputing Installation
5 Product Introduction
5.1 Hongshan TurboGate Overview
5.2 NComputing Product Description
1 Overview
1.1 Project Background
Company XX is a technology-driven high-tech enterprise that integrates R&D, manufacturing, trade, and services.
At present, the main roles on the software R&D side include development, configuration management, build and integration, QA, software representatives, and testing. As the company's R&D operations have grown, managing the office environment has become increasingly complex. A major challenge facing the company is how to use its existing hardware resources to build a simple, easy-to-use, and secure unified access platform that standardizes management of the office environment, supports controlled remote access, and keeps critical data and source code secure.
In the traditional IT architecture, the desktop is a fully featured PC. As IT applications grow more capable and the business depends on IT ever more heavily, providing every user with a secure and efficient desktop environment has become a basic requirement for doing business.
Traditional PC desktops increasingly show their drawbacks and limitations, chiefly in the following areas:
- Difficult to manage: users want to access their desktop environment from anywhere, but PC hardware is widely distributed, making centralized PC management hard to achieve. In addition, because PC hardware varies widely and users customize their desktops in different ways, standardizing the PC desktop is also a challenge.
- Data security cannot be guaranteed: on one hand, it is uncertain whether data can be backed up successfully and restored after a PC failure or file loss; on the other hand, if a PC is lost, all the data on it is lost with it. User data security therefore faces serious challenges.
- Low resource utilization: as hardware computing power has grown rapidly, PC configurations now usually far exceed what business applications need; most PCs run at very low load, with utilization below 5%.
Midea Desktop Virtualization Solution Case Study
I. Introduction
Desktop virtualization is a technology that decouples a user's desktop environment from the physical device and moves it onto virtual servers. As a leading global home appliance manufacturer, Midea Group decided to adopt a desktop virtualization solution to improve employee productivity and data security by centrally managing and deploying employees' work environments. This document describes how Midea's desktop virtualization solution was implemented and the results it achieved.
II. Solution Overview
1. Technology Selection
Midea chose a well-known virtualization vendor as its partner. The solution is built on the VMware Horizon View platform and combines VMware vSphere virtualization with NVIDIA GRID graphics acceleration cards to give users a high-performance virtual desktop experience.
2. System Architecture
Midea's desktop virtualization solution uses a typical three-tier architecture consisting of user endpoint devices, virtual desktop servers, and data center servers. Users access the virtual desktop servers from their endpoint devices, and the virtual desktop servers in turn connect to the data center servers, enabling virtualization and centralized management of the desktop environment.
3. Features
Midea's desktop virtualization solution provides the following capabilities:
- High-performance graphics acceleration: NVIDIA GRID graphics cards accelerate graphics-intensive applications, ensuring a smooth experience in the virtual desktop environment.
- Unified management: administrators can manage and configure all virtual desktops from a central management console, greatly reducing the administrative workload.
- Rapid deployment: virtual desktops can be deployed and adjusted quickly as needed, saving time and resource costs.
- Data security: all user data is stored on data center servers, reducing the risk of data leakage and loss.
III. Implementation Process
1. Environment Preparation
Before implementing the desktop virtualization solution, Midea carried out detailed planning and preparation. A dedicated project team was set up to design, implement, and test the solution, and the existing hardware and network environment was assessed to confirm it could meet the solution's requirements.
2. System Deployment
Midea first deployed the virtual desktop servers and data center servers, using high-performance server hardware with VMware vSphere as the virtualization layer. VMware Horizon View was then installed and configured, and the users' desktop environments were virtualized.
Solution GuideBalancing Graphics Performance, User Density & Concurrency with NVIDIA GRID™vGPU ™ (Virtual GPU Technology) for Autodesk Revit Power UsersV1.0Table of ContentsThe GRID vGPU benefit (3)Understanding GRID vGPU Profiles (3)Benchmarking as a proxy for real world workflows (5)Methodology (6)Fully Engaged Graphics Workloads? (6)Analyzing the Performance Data to Understand How User Density Affects Overall Performance (7)Server Configuration & GRID Resources (11)The GRID vGPU BenefitThe inclusion of GRID vGPU™ support in XenDesktop 7.1 allows businesses to leverage the power of NVI DIA’s GRID™ technology to create a whole new class of virtual machines designed to provide end users with a rich, interactive graphics experience. By allowing multiple virtual machines to access the power of a single GPU within the virtualization server, enterprises can now maximize the number of users with access to true GPU based graphics acceleration in their virtual machines. Because each physical GPU within the server can be configured with a specific vGPU profile organizations have a great deal of flexibility in how to best configure their server to meet the needs of various types of end users.Up to 8 VMs can connect to the physical GRID GPU via vGPU profiles controlled by the NVIDIA vGPU Manager.While the flexibility and power of vGPU system implementations provide improved end user experience and productivity benefits, they also provide server administrators with direct control of GPU resource allocation for multiple users. Administrators can balance user density and performance, maintaining high GPU performance for all users. While user density requirements can vary from installation to installation based on specific application usage, concurrency of usage, vGPU profile characteristics, and hardware variation, it’s possible to run sta ndardized benchmarking procedures to establish user density and performance baselines for new vGPU installations.Understanding GRID vGPU ProfilesWithin any given enterprise the needs of individual users varies widely, a one size fits all approach to graphic s virtualization doesn’t take these differences into account. One of the key benefits of NVIDIA GRID vGPU is the flexibility to utilize various vGPU profiles designed to serve the needs of different classes of end users. While the needs of end users can be quite diverse, for simplicity we can group them into the following categories: Knowledge Workers, Designers and Power Users.For knowledge workers key areas of importance include office productivityapplications, a rich web experience, and fluid video playback. Graphically knowledgeworkers have the least graphics demands, but they expect a similarly smooth, fluidexperience that exists natively on today’s graphic accelerated devices such asdesktop PCs, notebooks, tablets and smart phones.Power Users are those users with the need to run more demanding officeapplications; examples include office productivity software, image editing softwarelike Adobe Photoshop, mainstream CAD software like Autodesk Revit and productlifecycle management (PLM) applications. These applications are more demandingand require additional graphics resources with full support for APIs such as OpenGLand Direct3D.Designers are those users within an organization running demanding professionalapplications such as high end CAD software and professional digital contentcreation (DCC) tools. Examples include Autodesk Inventor, PTC Creo, Autodesk Revitand Adobe Premiere. 
Historically designers have utilized desktop workstations andhave been a difficult group to incorporate into virtual deployments due to the needfor high end graphics, and the certification requirements of professional CAD andDCC software.The various NVIDIA GRID vGPU profiles are designed to serve the needs of these three categories of users:Each GPU within a system must be configured to provide a single vGPU profile, however separate GPU’s on the same GRID board can each be configured separately. For example a single K2 board could be configured to serve eight K200 enabled VM’s on one GPU and two K260Q enabled VM’s on the other GPU.The key to effi cient utilization of a system’s GRID resources requires understanding the correct end user workload to properly configure the installed GRID cards with the ideal vGPU profiles maximizing both end user productivity and vGPU user density.The vGPU profiles with the “Q” suffix (K140Q, K240Qand K260Q), offer additional benefits not available inthe non-Q profiles, the primary of which is that Qbased vGPU profiles will be certified for professionalapplications. These profiles offer additional supportfor professional applications by optimizing thegraphics driver settings for each application usingNVIDIA’s Application Configuration Engine (ACE),ACE offers dedicated profiles for most professionalworkstation applications, once ACE detects thelaunch of a supported application it verifies that thedriver is optimally tuned for the best userexperience in the application. Benchmarking as a Proxy for Real World WorkflowsIn order to provide data that offers a positive correlation to the workloads we can expect to see in actual use, benchmarking test case should serve as a reasonable proxy for the type of work we want to measure. A benchmark test workload will be different based on the end user category we are looking to characterize. For knowledge worker workloads a reasonable benchmark is the Windows Experience Index, and for Power Users we can use the Revit benchmark for Autodesk Revit. The SPEC Viewperf benchmark is a good proxy for Designer use cases.To illustrate how we can use benchmark testing to help determine the correct ratio between total user density and workload performance we’ll look at a Power User workload using t he Revit benchmark, which tests performance within Autodesk Revit 2014. The benchmark tests various aspects of Revit performance by running through a series of common workloads used in the creation of a Revit project. These workloads include viewport rotation and viewport refresh using realistic and hidden line visual styles. These areas have been identified in particular as pain points within the average users Revit workflow. The benchmark creates a detailed model and then automates interacting with this model within the application viewports in real-time.The Revit benchmark is an excellent proxy for end user workloads, it is designed to test the creation of an actual real world model and test performance using various graphic display styles and return a benchmark score which isolates the various performance categories. Because the benchmark runs without user interaction once started it is an ideal candidate for multi-instance testing. 
As an industry standard benchmark, it has the benefit of being a credible test case, and since the benchmark shows positive scaling with higher end GPU’s it allows us to test various vGPU profiles to understand how profile selection affects both performance and density.MethodologyBy utilizing test automation scripting tools, we can automate launching the benchmark on the target VM’s. We can then automate launching the VM’s so that the benchmark is running on the target number of VM’s concurrently. Starting with a single active user per physical GPU, the benchmark is launched by the client VM and the results of the test are recorded. This same procedure is repeated by simultaneously launching the benchmark on additional VM’s and continuing to repeat these steps until the maximum number of vGPU accelerated VMs per GRID card (K1 or K2) is reached for that particular vGPU profile.Fully Engaged Graphics Workloads?When running benchmark tests, we need to determine whether our test nodes should be fully engaged with a graphics load or not. In typical real-world configurations the number o f provisioned VM’s actively engaged in performing graphically intensive tasks will vary based on need within the enterpriseenvironment. While possible, it is highly unlikely that every single provisioned VM is going to be under a high demand workload at any given moment in time.In setting up our benchmarking framework we have elected to utilize a scenario that assumes that every available node is fully engaged. While such heavy loading is unlikely to occur in a real world environment, it allows us to use a “worst case scenario” to plot our density vs. performance data.Analyzing the Performance Data to Understand How User Density Affects Overall PerformanceTo analyze the benchmark result data it’s important to understand that we are less interested in individual performance results than we are in looking for the relationship between overall performance and total user load. By identifying trends within the results where performance shows a rapid falloff we can begin to make an educated determination about the maximum number of Revit users we can support per server. Because we are most interested in maintaining interactivity within the viewport, we’ll focus on the benchmark results from the Rotate View test. To measure scalability we take the sum of the individual result scores from each VM and total them. The total is then divided by the total number of active VM’s to obtain an Average Score Per VM. In determining the impacts of density on overall benchmarking performance we plot the benchmark as seen in the graphs below. For each plot we record the average results for each portion of the benchmark score result, and indicate the percentage drop in performance compared to the same profile with a single active VM. Because Revit is an application which certifies professional graphics for use with the application, we can focus on the professional “Q” profiles, 140Q , 240Q and 260Q which are certified options for Revit.All our testing is done with 2 x GRID boards installed in the server (2x K1 or 2x K2).In Example 1 below we analyze the data for the K240Q vGPU profile, one of the professional profiles available on the K2 GRID board. The K240Q profile provide 1028MB of framebuffer on the virtual GPU. 
The performance trend for the K240Q profile show a performance falloff of 109% between a single fully engaged K240Q VM and the maximum number of K240Q fully engaged VM’s supported on the server (16).We can see the superior performance offered by vGPU in the Revit benchmark when running the maximum number of VMs on a dual K2 boards (16), completes the benchmark rotation test 192% faster than a server running a single VM instance of the benchmark using CPU emulated graphics and is 614% faster than CPU emulated graphics running the same number of active VMs (16). As the number of active VM’s increases on the server, the results show a performance falloff of 109% between a single fully engaged K240Q VM and the maximum number of K240Q fully engaged VM’s supported on the server (16).Example 1 – Dual K2 boards allocated with K240Q vGPU profile (1024MB Framebuffer), each K2 board can support up to 8K240Q vGPU accelerated VMs.In Example 2 below is the Revit performance profile for the K140Q the professional profile for the K1 GRID board. The K140Q profile is configured with 1024MB of framebuffer per accelerated VM, the same as the K240Q. On a single K1 GRID board the performance profile is extremely similar between theK140Q and the K240Q profiles up to 8 active VMs, which is the maximum number of VMs supported on the K240Q. Moving beyond 8 VM’s we see that although the average benchm ark scores continue to decline the decline continues at a gradual pace until we get beyond 16 active VM’s. Beyond 16 active VM’s we see a much more rapid falloff in terms of performance until at around 24 active VM’s we see a performance level that falls below the performance of a single CPU emulated graphics VM for the first time, although performance is still significantly better than a CPU emulated graphics configuration running a matching number of active VM’s.Example 2 Dual K1 boards allocated with K140Q vGPU profile (1024MB Framebuffer), each K1 board can support up to 16 K140Q vGPU accelerated VMs for a total of 32 VMs in the tested configuration.Example 3 below shows the combined performance profiles for both the K2 GRID based K240Q andK260Q profiles and the GRID K1 based K140Q profile compared to CPU emulated graphics showing the results of the Revit benchmark rotate view portion of the test. The performance data for all three GRID profiles are virtually identical. It’s worth noting that the trend of performance falloff is similar between the vGPU results and the CPU graphics results. The similarity in falloff is likely an indication that the falloff represents a lack of enough system resources on the server as the number of fully engaged VMs increases past at certain point (for our hardware configuration that point is seen around 16 VMs). 
The results show that regardless of profile used vGPU offers a significant performance increase over CPU emulated graphics under the same workload.Example 3 – K260Q, K240Q, and K140Q vGPU profiles show very similar performance and falloff curve matches the CPU falloff curve indicating that system resources are likely the limiting factor.Board Profile Maximum VMs per Board Recommended range of VM'sP a g e | 11 Server ConfigurationDell R720Intel® Xeon® CPU E5-2670 2.6GHz, Dual Socket (16 Physical CPU, 32 vCPU with HT)Memory 384GBXenServer 6.2 + SP1Virtual Machine ConfigurationVM Vcpu : 4 Virtual CPUMemory : 5GBXenDesktop 7.1 RTM HDX 3D ProRevit 2014Revit BenchmarkNVIDIA Driver: 332.07Guest Driver: 331.30Additional NVIDIA GRID ResourcesWebsite –/vdiNVIDIA GRID Forums - https://Certified Platform List –/wheretobuyISV Application Certification –/gridcertificationsGRID YouTube Playlist –/gridvideosHave issues or questions? Contact us through the NVIDIA GRID Forums or via Twitter @NVIDIAGRID。
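The per-VM averaging and falloff calculation used in the analysis above is straightforward to reproduce once the individual benchmark scores have been collected from each VM. The following Python sketch is purely illustrative: the score values, VM counts, and the simple falloff metric are hypothetical placeholders, not measured results from the tests described in this guide.

```python
# Illustrative sketch of the density analysis described above: average the
# per-VM benchmark scores at each concurrency level and report the percentage
# drop relative to the single-VM baseline. All numbers are hypothetical.

def average_score(scores):
    """Sum the individual VM scores and divide by the number of active VMs."""
    return sum(scores) / len(scores)

def falloff_pct(baseline, value):
    """Percentage drop versus the single-VM baseline (positive = slower)."""
    return (baseline - value) / baseline * 100.0

# Hypothetical Rotate View scores keyed by number of concurrently active VMs.
results = {
    1:  [100.0],
    8:  [72.0, 70.5, 71.8, 69.9, 73.1, 70.2, 71.0, 72.4],
    16: [48.0, 47.2, 49.1, 46.8, 47.5, 48.3, 46.9, 47.7,
         48.8, 47.0, 46.5, 48.1, 47.9, 46.2, 48.6, 47.4],
}

baseline = average_score(results[1])
for active_vms, scores in sorted(results.items()):
    avg = average_score(scores)
    print(f"{active_vms:2d} active VMs: average score {avg:6.1f} "
          f"({falloff_pct(baseline, avg):5.1f}% below single-VM baseline)")
```

In practice the same calculation would be fed with the Rotate View scores exported from each VM's benchmark run.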
NVIDIA Tesla M10 GRID: 100 Graphics-Accelerated Virtual Desktops per Server
By staff reporter Zhu Huijie (朱辉杰)
On May 23, 2016, NVIDIA officially launched NVIDIA GRID with the new NVIDIA Tesla M10 GPU, opening the way to virtualizing enterprise applications.
The new product delivers the industry's highest user density to date, with each accelerator card supporting 64 desktops and each server supporting 128 desktops. As a result, enterprises can provide virtual desktops to all of their employees at relatively low cost. At the same time, NVIDIA GRID software lets enterprises simplify the deployment of virtual applications, desktops, and workstations to cover every use case and workload. In addition, the product offers a flexible payment model that lets enterprise customers balance capital and operating expenses while achieving the lowest total cost of ownership (TCO), easing the burden of deployment.
GPU-accelerated business applications are already in heavy use across enterprises and are growing explosively. SysTrack Community data from Lakeside Software shows that the share of accelerated applications has more than doubled over the past five years, with half of that growth occurring in the first few months of 2016 alone. From the data center to endpoint devices, these applications increasingly rely on graphics technologies such as the OpenGL and DirectX APIs to deliver the best user experience, and virtual environments are making greater use of GPUs to do the same.
Jim McHugh, Vice President and General Manager of the NVIDIA GRID business unit at NVIDIA, said: "Great knowledge workers need high-performance business applications to maximize their productivity. Achieving that requires GPU acceleration. With the industry's highest user density, NVIDIA GRID with the Tesla M10 lets enterprises easily virtualize every application for knowledge workers at a lower price, without sacrificing performance."
NVIDIA GRID is the industry standard for graphics virtualization, supported by OEM vendors and compatible with a wide range of PC applications. Combined with products from leading virtualization vendors Citrix and VMware, NVIDIA GRID delivers an excellent experience for virtual applications and remote desktop session hosts at a cost of less than USD 2 per user per month.
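Taking the density figures quoted above at face value (64 desktops per Tesla M10 card, 128 per server), rough capacity planning reduces to simple arithmetic. The Python sketch below is illustrative only; the employee count is a hypothetical assumption, and real sizing also depends on CPU, memory, storage, and licensing.

```python
import math

# Illustrative sizing arithmetic based on the density figures quoted above:
# 64 desktops per Tesla M10 card, two cards (128 desktops) per server.
DESKTOPS_PER_CARD = 64
CARDS_PER_SERVER = 2
DESKTOPS_PER_SERVER = DESKTOPS_PER_CARD * CARDS_PER_SERVER  # 128

def servers_needed(knowledge_workers: int) -> int:
    """Number of GRID servers required to host one desktop per worker."""
    return math.ceil(knowledge_workers / DESKTOPS_PER_SERVER)

# Hypothetical example: an organization with 1,500 knowledge workers.
workers = 1500
print(f"{workers} users -> {servers_needed(workers)} servers "
      f"at {DESKTOPS_PER_SERVER} desktops per server")
```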
Wyse thin computing software. Everything you need to know.
Wyse thin computing software uses dynamic application deployment, remote management, rich multimedia interaction, and support for common desktop peripherals to deliver the best possible user experience across a wide range of virtualization environments... and makes everything simpler.
The Wyse software portfolio includes Wyse Device Manager, Wyse Virtual Desktop Accelerator (VDA), and Wyse TCX, our virtualization software suite. Wyse Device Manager (WDM) gives administrators the fine-grained asset information they need about each device, including the software deployed on it.
By providing up-to-date status information in an easy-to-use central console with remote diagnostics, the IT department can cut down on unnecessary desk-side visits. With the flexibility to schedule device updates, including application add-ons, OS patches, or full images, at times convenient to users, IT managers can reduce overall desktop downtime and avoid productivity loss. At the same time, powerful device grouping and the ability to enforce default configurations give network administrators maximum performance and flexibility in configuring and managing networked devices.
Every Wyse thin client is bundled with Wyse Device Manager Workgroup Edition, which provides full management control for small installations. The more powerful Wyse Device Manager Enterprise Edition offers richer device management capabilities and scales to tens of thousands of clients. Both editions are built on web-services technology and are very simple to install and use. In addition, with the Wyse Device Manager installation service, the Wyse Professional Services team works strictly to your specific requirements to ensure the Wyse Device Manager solution is implemented and delivered successfully.
Wyse Device Manager at a glance:
- Rapid deployment.
- Centralized management and asset tracking.
- Secure software imaging and updates over HTTPS.
Efficient, secure, and flexible thin-client management software, Wyse Device Manager allows software images to be deployed remotely across tens of thousands of Wyse thin clients.
DU-07757-001 | September 2015 User GuideDOCUMENT CHANGE HISTORY DU-07757-001Chapter 1.Introduction (1)1.1How licensing works (1)1.2License editions (2)Chapter 2.GRID Virtual GPU (3)2.1vGPU License requirements (3)2.2Licensing on Windows (4)2.3Licensing on Linux (7)Chapter 3.GRID Virtual Workstation (8)3.1GRID Virtual Workstation features (8)3.2Licensing on Windows (8)3.2.1Disabling GRID Virtual Workstation (11)3.3Licensing on Linux (13)Chapter 4.Advanced topics (14)4.1Licenses obtained after boot (14)4.2Operating with intermittent connectivity to the license server (14)4.3Applying Windows license settings via registry (15)Chapter 5.Troubleshooting (17)5.1Known issues (17)5.2Troubleshooting steps (17)LIST OF FIGURESFigure 1 GRID licensing architecture (1)Figure 2 GRID license editions (2)Figure 3 Managing vGPU licensing in NVIDIA Control Panel (5)Figure 4 Successful acquire of vGPU license (6)Figure 5 Sample gridd.conf for GRID vGPU (7)Figure 6 Managing Virtual Workstation Licensing in NVIDIA Control Panel (9)Figure 7 Applying GRID Virtual Workstation license (10)Figure 8 Success acquire of Virtual Workstation license (11)Figure 9 Disabling GRID Virtual Workstation (12)Figure 10 Sample gridd.conf for GRID Virtual Workstation (13)Figure 11 Configuring vGPU licensing via registry settings (16)LIST OF TABLESTable 1 Virtual GPUs licensed on Tesla M6, M60 (4)Table 2 Licensing registry settings (14)Chapter 1. INTRODUCTIONCertain NVIDIA GRID TM products, such as GRID vGPU TM and GRID VirtualWorkstation, are available as licensed features on NVIDIA Tesla TM GPUs. This guide describes the licensed features and how to enable and use them on supported hardware.1.1 HOW LICENSING WORKSFigure 1 provides an overview of GRID licensing:Figure 1 GRID licensing architectureGRID ServerGRID License ServerVMNVIDIA Tesla GPUNVIDIATesla GPUGRID Virtual Workstation GraphicsVMGRID vGPUVMGRID vGPULicensesLicense borrow, returnIntroduction When enabled on Tesla GPUs, licensed features such as Virtual GPU are activated by obtaining a license over the network from an NVIDIA GRID License Server. The licensed is “checked out” or “borrowed” at the time the Virtual Machine (VM) is booted, and returned when the VM is shut down.1.2LICENSE EDITIONSGRID licenses come in three editions that enable different classes of GRID features. The GRID software automatically selects the right license edition based on the features being used:Figure 2 GRID license editionsThe remainder of this guide is organized as follows:④Chapter 2 describes licensing of GRID Virtual GPU.④Chapter 3 describes licensing of GRID Virtual Workstation features with GPUpassthrough.④Chapter 4 discusses advanced licensing settings.④Chapter 5 provides guidance on troubleshooting.Chapter 2.This chapter describes licensing of NVIDIA GRID vGPU.2.1VGPU LICENSE REQUIREMENTSNVIDIA GRID Virtual GPU (vGPU) is offered as a licensable feature on Tesla M6 and M60 GPUs. When a vGPU is booted on these GPUs, a license must be obtained by the Virtual Machine (VM) in order to enable the full features of the vGPU. 
The VM retains the license until it is shut down; it then releases the license back to the license server.Table 1 lists the vGPU types that are supported on Tesla M6 / M60, and the license edition that each vGPU type requires.Table 1 Virtual GPUs licensed on Tesla M6, M60The higher-end GRID license editions are inclusive of lower editions: for example virtual GPUs that require a GRID Virtual PC license are also usable with a GRID Virtual Workstation or GRID Virtual Workstation Extended license.2.2LICENSING ON WINDOWSTo license vGPU, open NVIDIA Control Panel by right-clicking on the Windows desktop and selecting NVIDIA Control Panel from the menu, or by opening Windows Control Panel and double-clicking the NVIDIA Control Panel icon.In NVIDIA Control Panel, select Manage License task in the Licensing section of the navigation pane, as shown in Figure 3.Figure 3 Managing vGPU licensing in NVIDIA Control PanelThe Manage License task pane shows that GRID vGPU is currently unlicensed. Enter the address of your local GRID License Server in the License Server field. The address can be a fully-qualified domain name such as , or an IP address such as 10.31.20.45.The Port Number field can be left unset and will default to 7070, which is the default port number used by NVIDIA GRID License Server.Select Apply to assign the settings. The system will request the appropriate license for the current vGPU from the configured license server and, if successful, vGPU’s full capabilities are enabled (see Figure 4). If the system fails to obtain a license, refer to 4.3 for guidance on troubleshooting.Once configured in NVIDIA Control Panel, licensing settings persist across reboots.Figure 4 Successful acquire of vGPU licenseGRID Virtual GPU2.3LICENSING ON LINUXTo license GRID vGPU, edit /etc/nvidia/gridd.conf:[nvidia@localhost ~]$ sudo vi /etc/nvidia/gridd.confSet ServerURL to the address and port number of your local NVIDIA GRID License Server. The address can be a fully-qualified domain name such as, or an IP address such as 10.31.20.45. The port number is appended to the address with a colon, for example :7070.Set FeatureType to 1, to license vGPU:Figure 5 Sample gridd.conf for GRID vGPURestart the nvidia-gridd service:[nvidia@localhost ~]$ sudo service nvidia-gridd restartThe service should automatically obtain a license. This can be confirmed with log messages written to /var/log/messages, and the vGPU within the VM should now exhibit full framerate, resolution, and display output capabilities:[nvidia@localhost ~]$ sudo grep gridd /var/log/messages…Sep 13 15:40:06 localhost nvidia-gridd: Started (10430)Sep 13 15:40:24 localhost nvidia-gridd: License acquired successfully.Once configured in gridd.conf, licensing settings persist across reboots and need only be modified if the license server address changes, or the VM is switched to running GPU passthrough.Chapter 3.This chapter describes how to enable the GRID Virtual Workstation feature on supported Tesla GPUs.3.1GRID VIRTUAL WORKSTATION FEATURES GRID Virtual Workstation is available on Tesla GPUs running in GPU passthrough mode to Windows and Linux VMs. 
Virtual Workstation requires a GRID Virtual Workstation – Extended license edition, and provides these features:④Up to four virtual display heads at 4k resolution (unlicensed Tesla GPUs support asingle virtual display head with maximum resolution of 2560x1600).④Workstation-specific graphics features and accelerations.④Certified drivers for professional applications3.2LICENSING ON WINDOWSTo enable GRID Virtual Workstation, open NVIDIA Control Panel by right-clicking on the Windows desktop and selecting NVIDIA Control Panel from the menu, or by opening Windows Control Panel and double-clicking the NVIDIA Control Panel icon. In NVIDIA Control Panel, select Manage License task in the Licensing section of the navigation pane, as shown in Figure 6.Figure 6 Managing Virtual Workstation Licensing in NVIDIA Control PanelThe Manage License task pane shows the current License Edition being used, and defaults to unlicensed.Select GRID Virtual Workstation, and enter the address of your local GRID License Server in the License Server field (see Figure 7). The address can be a fully-qualified domain name such as , or an IP address such as10.31.20.45.The Port Number field can be left unset and will default to 7070, which is the default port number used by NVIDIA GRID License Server.Figure 7 Applying GRID Virtual Workstation licenseSelect Apply to assign the settings. The system will request a license from the configured license server and, if successful, GRID Virtual Workstation features are enabled (see Figure 8). If the system fails to obtain a license, refer to 4.3 for guidance on troubleshooting.Once configured in NVIDIA Control Panel, licensing settings persist across reboots.Figure 8 Success acquire of Virtual Workstation license3.2.1Disabling GRID Virtual WorkstationTo disable the GRID Virtual Workstation licensed feature, open NVIDIA Control Panel; in the Manage License task, select Tesla (unlicensed) and select Apply (see Figure 9). The setting does not take effect until the next time the system is shutdown or rebooted; GRID Virtual Workstation features remain available until then.Figure 9 Disabling GRID Virtual Workstation3.3LICENSING ON LINUXTo license GRID Virtual Workstation, edit /etc/nvidia/gridd.conf:[nvidia@localhost ~]$ sudo vi /etc/nvidia/gridd.confSet ServerURL to the address and port number of your local NVIDIA GRID License Server. The address can be a fully-qualified domain name such as, or an IP address such as 10.31.20.45. The port number is appended to the address with a colon, for example :7070.Set FeatureType to 2, to license GRID Virtual Workstation:Figure 10 Sample gridd.conf for GRID Virtual WorkstationRestart the nvidia-gridd service:[nvidia@localhost ~]$ sudo service nvidia-gridd restartThe service should automatically obtain a license. 
This can be confirmed with log messages written to /var/log/messages, and the GPU should now exhibit Virtual Workstation display output and resolution capabilities:[nvidia@localhost ~]$ sudo grep gridd /var/log/messages…Sep 13 15:40:06 localhost nvidia-gridd: Started (10430)Sep 13 15:40:24 localhost nvidia-gridd: License acquired successfully.Once configured in gridd.conf, licensing settings persist across reboots and need only be modified if the license server address changes, or the VM is switched to running GPU passthrough.Chapter 4.This chapter discusses advanced topics and settings for GRID licensing.4.1LICENSES OBTAINED AFTER BOOTUnder normal operation, a GRID license is obtained by a platform during boot, prior to user login and launch of applications. If a license is not available, indicated by a popup on Windows or log messages on Linux, the system will periodically retry its license request to the license server. During this time, GRID vGPU runs at reduced capability described in section 2.1; similarly, GRID Virtual Workstation features described in section 3.1 are not available.When a license is obtained, the licensed features are dynamically enabled and become available for immediate use. However, any application software launched prior to the license becoming available may need to be restarted in order to recognize and utilize the licensed features.4.2OPERATING WITH INTERMITTENTCONNECTIVITY TO THE LICENSE SERVERGRID vGPU and Virtual Workstation clients require connectivity to a license server when booting, in order to check out a license. Once booted, clients may operate without connectivity to the license server for a period of up to 7 days, after which time the client will warn of license expiration.4.3APPLYING WINDOWS LICENSE SETTINGS VIAREGISTRYGRID licensing settings can be controlled via the Windows Registry, removing the need for manual interaction with NVIDIA Control Panel. Settings are stored in this registry key:HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\GridLicensingRegistry values are summarized in Table 2.Table 2 Licensing registry settingsFigure 11 shows an example of configuring virtual GPU licensing settings in the registry. Note it is sufficient to simply configure FeatureType = 1 (GRID vGPU) and set the license server address in ServerAddress.Figure 11 Configuring vGPU licensing via registry settingsChapter 5.This chapter describes basic troubleshooting steps.5.1KNOWN ISSUESBefore troubleshooting or filing a bug report, review the release notes that accompany each driver release, for information about known issues with the current release, and potential workarounds.5.2TROUBLESHOOTING STEPSIf a GRID system fails to obtain a license, investigate the following as potential causes for the failure:④Check that the license server address and port number are correctly configured.④Run a network ping test from the GRID system to the license server address to verifythat the system has network connectivity to the license server.④Verify that the date and time are configured correctly on the GRID system. 
If the timeis set inaccurately or is adjusted backwards by a large amount, the system may fail to obtain a license.④Verify that the license server in use has available licenses of the type required by theGRID feature the GRID system is configured to use.GRID LICENSING DU-07757-001| 17 NoticeALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FORA PARTICULAR PURPOSE.Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication of otherwise under any patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all other information previously supplied. NVIDIA Corporation products are not authorized as critical components in life support devices or systems without express written approval of NVIDIA Corporation.HDMIHDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.OpenCLOpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.TrademarksNVIDIA and the NVIDIA logo are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.Copyright© 2015 NVIDIA Corporation. All rights reserved.。
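The Linux licensing workflow described in the guide above (set ServerURL and FeatureType in /etc/nvidia/gridd.conf, restart the nvidia-gridd service, then confirm acquisition in /var/log/messages) can be scripted. The Python sketch below is a minimal illustration of those documented steps under stated assumptions: the license server address is a placeholder, and the guest is assumed to match the guide's paths and service name.

```python
import subprocess
import time

# Minimal sketch of the Linux licensing steps documented above:
#   1) write ServerURL and FeatureType into /etc/nvidia/gridd.conf,
#   2) restart the nvidia-gridd service,
#   3) check /var/log/messages for the "License acquired" message.
# The license server address is a placeholder; run as root on a guest that
# matches the guide (gridd.conf path, service name, and syslog location).

GRIDD_CONF = "/etc/nvidia/gridd.conf"
LICENSE_SERVER = "gridlicense.example.com:7070"  # placeholder address:port
FEATURE_TYPE = 1  # 1 = GRID vGPU, 2 = GRID Virtual Workstation (per the guide)

def write_gridd_conf() -> None:
    # Overwrites the file with only the two settings the guide discusses.
    with open(GRIDD_CONF, "w") as conf:
        conf.write(f"ServerURL={LICENSE_SERVER}\n")
        conf.write(f"FeatureType={FEATURE_TYPE}\n")

def restart_and_check() -> None:
    subprocess.run(["service", "nvidia-gridd", "restart"], check=True)
    time.sleep(30)  # give the service time to contact the license server
    log = subprocess.run(["grep", "gridd", "/var/log/messages"],
                         capture_output=True, text=True)
    if "License acquired successfully" in log.stdout:
        print("License acquired successfully.")
    else:
        print("No acquisition message yet; check the server address, "
              "port number, and network connectivity.")

if __name__ == "__main__":
    write_gridd_conf()
    restart_and_check()
```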
TABLE OF CONTENTS Users Per Server (UPS) (1)Technology Overview (3)Autodesk Revit 2016 Application (3)NVIDIA GRID Platform (3)Software and Hardware Used in the Tests (4)Hardware Encoding with NVENC and VMware Blast Extreme (5)Testing Methodology (6)The Performance Engineering Lab (6)Recommended Revit Physical System Configuration (7)Typical Revit Workstation Builds (7)Autodesk Revit Benchmark Metrics (RFO) (8)Test Workloads (9)How the Tests Were Run (10)Test Limitations (10)Test Results (11)Test Threshold Times (11)GRID K2 Test Execution Times (12)Tesla M60 with NVENC Test Execution Times (12)Test Execution Times With and Without Hardware Acceleration (13)CPU Utilization Against GPU Utilization (14)Conclusion (16)The purpose of this guide is to give a detailed analysis of how many users organizations can expect to get per servers based on performance testing with the Autodesk Revit 2016. The NVIDIA Performance Lab worked in cooperation with the Autodesk team to determine the maximum recommended number of users for the reference server configuration. Testing for this guide is based on the industry-standard RFO benchmark to determine the maximum number of Autodesk Revit users per server (UPS) that NVIDIA GRID® can support. To provide customers with a reference point, we have included testing for the latest generation of NVIDIA GRID solution, NVIDIA GRID Virtual Workstation software on NVIDIA Tesla™ M60, as compared to the previous generation, GRID K2. Based on extensive testing, NVIDIA GRID provides the following performance and scalability recommendation.Figure 1 Autodesk Revit 2016 Users per Server for GRID K2 and Tesla M60Users Per Server (UPS) The maximum number of users per server is based on performance and scalability metrics for Autodesk Revit 2016 configured to perform high workloads concurrently while maintaining reasonable usability.AUTODESK REVIT 2016 APPLICATIONAutodesk Revit is Building Information Modeling (BIM) software with features for the following elements of building design and construction:④Architectural design④Mechanical, engineering, plumbing (MEP) design④Structural engineering④ConstructionWhen architecting your NVIDIA GRID environment for Revit, you must consider both the GPU and the CPU.④Revit requires a GPU as users rotate, zoom, and interact with drawings.④Revit creates a heavy CPU load as it manages all the elements of a drawing througha database.Because it uses a database, Revit needs high performance storage to functionproperly. The heaviest Revit CPU usage occurs during data-rich operations such as opening and saving files, and updating models.NVIDIA GRID PLATFORMNVIDIA re-defined visual computing by giving designers, engineers, scientists, and graphics artists the power to take on the biggest visualization challenges with immersive, interactive, photorealistic environments.NVIDIA GRID exploits the power of NVIDIA Tesla GPUs to deliver virtual workstations from the data center or the cloud. Architects, engineers, and designers arenow liberated from their desks and can access their graphics-intensive applications and data anywhere.The NVIDIA Tesla M60 GPU accelerator works with NVIDIA GRID software to provide the industry’s highest user performance for virtualized workstati ons, desktops, and applications. 
This solution allows enterprises to virtualize any application—including professional graphics applications—and deliver them to any device anywhere.Since its first release in 2013, NVIDIA GRID has supported GPU cards based on two generations of GPU hardware architecture:④GRID K1 and K2 GPU cards based on the NVIDIA Kepler™ architecture④Tesla M6, M10, and M60 GPU cards based on the NVIDIA Maxwell™ architecture NVIDIA GRID has seen considerable software innovation to continue to drive the best performance and density on the market.Software and Hardware Used in the TestsThe tests described in this guide are based on the following combinations of software and hardware:④VMware Horizon running the first-generation NVIDIA GRID K2 GPU④VMware Horizon and NVIDIA GRID Virtual Workstation software running on thesecond-generation Tesla M60 GPUAs shown in Table 1, using the latest generation provides better performance and scalability, and the ability to take advantage of new features and functionality of the software.Table 1 Comparison of GRID K2 and Tesla M60Hardware Encoding with NVENC and VMware Blast ExtremeNVIDIA and VMware have been working together for several years to improve the virtualized computing user experience and enable a completely new class of virtual use cases. NVIDIA was the first vendor to enable hardware-accelerated graphics rendering in VMware Horizon View. NVIDIA then enabled the first virtualized graphics acceleration in VMware Horizon View with GRID.The VMware Blast Extreme protocol, which was released in VMware Horizon 7, enables NVIDIA GRID to offload the H.264 processing from the CPU to the GPU. This offloading frees resources for use by internal applications, increasing user density and application responsiveness. The H.264 codec lowers the demand on network infrastructure, enabling organizations to reach more users over greater network lengths. VMware offers multiple protocols that are designed for different workloads. The choice of protocol may impact performance, density, image quality, and other factors. Therefore, you must select the best protocol for the needs of your organization. For more information about Horizon with Blast Extreme, refer to Blast Extreme Display Protocol in Horizon 7.This section describes the tests performed and the method of testing used to determine sizing and server loads.THE PERFORMANCE ENGINEERING LABThe mandate of the NVIDIA GRID Performance Engineering Team is to measure and validate the performance and scalability delivered by the NVIDIA GRID platform, namely GRID software running on Tesla GPU’s, on all enter prise virtualization platforms. It is the goal of the Performance Engineering Team to provide proven testing that gives NVIDIA’s customers the ability to deliver a successful deployment.The NVIDIA Performance Engineering Lab holds a wide variety of different OEM servers, with varying CPU specifications, storage options, client devices, and network configurations. This lab of enterprise virtualization technology provides the Performance Engineering team with the capacity needed to run a wide variety of tests ranging from standard benchmarks to reproducing customer scenarios on a wide range of hardware.None of this work is possible without the cooperation of ISVs, OEMs, vendors, partners, and their user communities to determine the best methods of benchmarking in ways that are both accurate and reproducible. 
These methods will ultimately assist mutual customers of NVIDIA and other vendors and OEMs to build and test their own successful deployments. In this way, the Performance Engineering Team works closely with its counterparts in the enterprise virtualization community.RECOMMENDED REVIT PHYSICAL SYSTEM CONFIGURATIONPhysical system requirements for Autodesk Revit 2016 are listed on the Revit product page. Testing focuses on recommended specifications when feasible. The goal is to test both performance and scalability, maintaining the flexibility and manageability advantages of virtualization without sacrificing the performance end users expect from NVIDIA powered graphics.It has been well documented that storage performance is key to providing high performance graphics workloads, especially with many users and ever-growing file or model sizes. In the NVIDIA Performance Engineering Lab, a 10G iSCSI-connected all flash SAN from Dell EMC XtremeIO was used. At no time in these tests were IOPS an issue, but note that as you scale to multiple servers hosting many guests, IOPS needs to be monitored.TYPICAL REVIT WORKSTATION BUILDSAutodesk delivers a recommended hardware specification to help choose a physical workstation. These recommendations provide a good starting point from which to start architecting your virtual desktops. Your own tests with your own models will determine if these recommendations meet your specific needs.Table 2 lists server configuration used for the benchmark tests described in this guide. This configuration is based on testing results from the RFO benchmarks, input from VMware, and feedback from mutual customers.Table 2 Server Configuration for Benchmark Testing of Revit 2016AUTODESK REVIT BENCHMARK METRICS (RFO) Autodesk provides a tool called AUBench which, when combined with the scripts provided by the Revit Forums community, creates a benchmark called RFO. RFO interacts with the application and an accompanying model to run several tests, then checks the journal for time stamps, and reports the results. The benchmark is available from the RFOBenchmark thread on the Revit Forum.These tests are designed to represent user activities and are broken down as follows: ④Model Creation and View and Export of Benchmarks●Opening and Loading the Custom Template●Creating the Floors Levels and Grids●Creating a Group of Walls and Doors●Modifying the Group by Adding a Curtain Wall●Creating the Exterior Curtain Wall●Creating the Sections●Changing the Curtain Wall Panel Type●Exporting all Views as PNGs●Exporting Some Views as DWGs④Render Benchmark●Render④GPU Benchmark1 with Hardware Acceleration●Refresh Hidden Line View ×12 - with Hardware Acceleration●Refresh Consistent Colors View ×12 - with Hardware Acceleration●Refresh Realistic View ×12 - with Hardware Acceleration●Rotate View ×1 - with Hardware Acceleration④GPU Benchmark1 Without Hardware Acceleration●Refresh Hidden Line View ×12 - Without Hardware Acceleration●Refresh Consistent Colors View ×12 - Without Hardware Acceleration●Refresh Realistic View ×12 - Without Hardware Acceleration●Rotate View ×1 - Without Hardware AccelerationTEST WORKLOADSTo ensure you will be able to reproduce the results described in “Test Results,” onpage 11, the Revit Forums RFO Benchmark workload was deliberately chosen and simultaneous tests were executed. As a result, all virtual desktops in the test perform the same activities at the same time. 
A “peak workload” should be unrepresentative of real user interaction but shows the number of users per host when the load on the shared resources is highest and, therefore, represents the most extreme case of user demand.If you plan to run these tests, consider these aspects of the test workloads:④Sample workload. RFO provides their workload, which is a set of models, fortesting with.④Scripting. As RFO is historically designed for single physical workstation testing,there is no built-in automation for multi-desktop scalability testing.④Think time. Pausing between tests simulates human behavior.④Staggered start. Adding a delay to the beginning of each test offsets the impact ofrunning the tests in unison, again, simulating human behavior.④Scalability. In general, the tests are run with 1 virtual desktop, then 8 virtualdesktops, and finally 16 virtual desktops. These test runs get a baseline of results and accompanying logs (CPU, GPU, RAM, networking, storage IOPS, and so forth.).1 For the hardware acceleration comparison only.HOW THE TESTS WERE RUN1.Virtual machines were created with a standard configuration to determine thethreshold of acceptable performance.2.The RFO benchmark was run with an individual VM on each GRID K2 and TeslaM60 system to determine the peak performance when there was no resourcecontention.3.To determine acceptable performance at scale, a 25% increase in the test timerequired for the total of the tests below was added to the value obtained in theprevious step.4.The threshold of performance in virtual environment testing was done successfullyacross multiple applications.5.After the performance threshold for Revit 2016 from the single VM tests wasdetermined, it was used to indicate peak user density for all systems in this guide.TEST LIMITATIONSBy design, this type of peak performance testing leads to conservative estimates of scalability. Automated scalability testing is typically more aggressive than a typical user workflow. In most cases, rendering requests are unlikely to be executed by 10 users simultaneously or even to the degree that was replicated in multiple test iterations. Therefore, the results from these automated scalability tests can be considered a worst-case scenario. They indicate likely minimums for rare conditions to serve as safe guidelines. In most cases, a host should be able to support more than the number of VMs indicated by the test results.The scalability in typical or even heavily loaded data centers is likely to be higher than the test results suggest. The degree to which higher scalability would be achieved depends on the typical day-to-day activities of users, such as the amount of time spent in meetings, the length of lunch breaks or other breaks, the frequency of multitasking, and so forth.The test results show execution times for performing various operations under varying conditions and compare CPU utilization against GPU utilization. The recommendations for the number of users per system suggested by these results is a balance between the need for greatest scalability and the performance expectations of users. Note that your users, your data, and your hardware will impact these results and you may decide that a different level of performance or scalability is required to meet your individual business needs.TEST THRESHOLD TIMESThe RFO Benchmark does not currently exercise some of Revit’s newest GPU capabilities and was built to push the limits of dedicated hardware as opposed to the shared resources of VDI. 
Therefore, the decision was made to stop testing when the test times had increased to 125% of values measured on a single virtual workstation.Table 3 shows the test threshold times for the NVIDIA GRID K2 and Tesla M60 cards. Table 3 Test Threshold TimesGRID K2 TEST EXECUTION TIMESTable 4 shows test execution times for the GRID K2 system with 8, 12, and 16 VMs. Note that at the maximum possible number of users (16 by design), the total test time (248.90) is still less than the threshold value of 261.84. Therefore, 16 users can comfortably reside on the same GRID K2 server.Table 4 GRID K2 Test Execution TimesTESLA M60 WITH NVENC TEST EXECUTION TIMES Table 5shows the test execution times when the Tesla M60 is accessed through the VMware Blast Extreme protocol. Note that the number of users per host is increased to 28.This improvement is attributable to the improved Tesla M60 GPU and the use by Blast Extreme of the NVIDIA Video Encoder (NVENC) technology. The use of Blast Extreme allows Horizon to offload encoding of the H.264 video stream from the CPU to dedicated encoder engines on the Tesla GPUs, freeing up this much-needed resource for general computing purposes.The video codec is a very important piece in delivering a remarkable user experience because it impacts many factors, such as like latency, bandwidth, frames per second (FPS), and other factors. Using H.264 as the primary video codec also allows VMware to use millions of H.264 enabled access devices to offload the encode-decode process from the CPU to dedicated H.264 engines on NVIDIA GPUs.Table 5 Tesla M60 with NVENC Test Execution TimesTEST EXECUTION TIMES WITH AND WITHOUT HARDWARE ACCELERATIONThe RFO Benchmark also provides results for testing with only CPU, and not using GPU at all. For these particular tests, the GPU is ignored by the software if present, and only CPU is used to graphics processing. This test has been included in the RFO benchmark for illustrative purposes, and we present the results here for the same reasons. It is, in our opinion, highly unlikely that anyone would seriously consider using a graphics intensive application such as Revit without the benefit of a GPU. In the event that someone does, the following tables illustrate the potential pitfall of doing so.Table 6 and Figure 2 show the value of using a GPU in the VM with Revit 2016. CPU only response times (“without hardware acceleration”) are increased by a factor of ten or more.For all cases, CPU-only tests (without hardware acceleration) for a single VM take more time than 16- and 24-VM tests with a GPU (with hardware acceleration).2 Exceeds the threshold of 261.84.Table 6 Test Execution Times with and Without Hardware Acceleration (with GPU)Figure 2 Test Execution Times with and Without Hardware AccelerationCPU UTILIZATION AGAINST GPU UTILIZATIONFigure 3 shows for the M60-1Q GPU profile that the CPU becomes a limiting factor long before the GPU resources are exhausted.Revit requires significant CPU resources so investing in more cores can yield greater performance and scalability. For medium-to-large models, M60-1Q performance will be better for a real-use scenario than a M60-0Q profile, which doesn’t provide enough frame buffer for satisfactory performance for Revit workloads. However, your results will vary. 
Therefore, you must test with your own models to ensure the most accurate results.Figure 3 CPU and M60-1Q GPU UtilizationThe test results show that with NVIDIA GRID K2 GPUs, one reference server can support up to 16 users without exceeding the 125% threshold set by our testing team. With NVIDIA Tesla M60 GPU accelerators with VMware Horizon and Blast Extreme, 28 users per server is achievable.As testing has shown, Autodesk Revit is CPU intensive because it uses database software to manage the elements of a drawing. Therefore, the more complex the model, the more CPU intensive the software will become. When calculating scalability, be aware that the CPU will become the bottleneck before the GPU.In practice, consolidating similar loads onto servers containing GPUs is preferable to running mixed workloads. Plan for these systems to run Revit or other GPU intensive loads, leaving servers that lack GPUs for loads that are not GPU intensive.NoticeALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication of otherwise under any patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all other information previously supplied. NVIDIA Corporation products are not authorized as critical components in life support devices or systems without express written approval of NVIDIA Corporation.HDMIHDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.OpenCLOpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.TrademarksNVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.Copyright© 2017 NVIDIA Corporation. All rights reserved.BPG-08489-001。