We’re expecting the channel inventory to work itself out. We are masters at managing our channel, and we understand the channel very well. As you know, the way that we go to market is through the channels around the world. We’re not concerned about the channel inventory. As we ramp Turing -- whenever we ramp a new architecture, we ramp it from the top down -- we have plenty of opportunities as we go back to the back-to-school and gaming cycle to manage the inventory. So, we feel pretty good about that.

As for comparing Volta and Turing: CUDA is compatible, and that’s one of the benefits of CUDA. All of the applications that take advantage of CUDA are written on top of cuDNN, which is our neural network platform, through TensorRT, which takes the output of the frameworks and optimizes it for runtime. All of those tools and libraries run on top of Volta, run on top of Turing, and run on top of Pascal. What Turing adds over Pascal is the same Tensor Core that is inside Volta.

Of course, Volta is designed for large-scale training. Eight GPUs can be connected together. They have the fastest HBM2 memories. It’s designed for data center applications, and has 64-bit double precision, ECC, high-resilience computing, and all of the software, system software capability, and tools that make Volta the perfect high-performance computing accelerator.

In the case of Turing, it’s really designed for three major applications. The first application is to open up Pro Visualization, which is a really large market that has historically used render farms and was really unable to use GPUs until now -- we now have the ability to do full path-traced global illumination with very, very large data sets. So, that’s one market that’s brand new as a result of Turing. The second market is to reinvent computer graphics -- real-time computer graphics for video games and other real-time visualization applications.
When you see the images created by Turing, you’re going to have a really hard time wanting to see the images of the past. It just looks amazing. And then the third: Turing has a really supercharged Tensor Core, and this Tensor Core is used for image generation. It’s also used for high-throughput deep learning inferencing for data centers. And so, these applications for Turing would suggest that there are multiple SKUs of Turing, which is one of the reasons why we have such a great engineering team: we can scale one architecture across a whole lot of platforms at one time. And so, I hope that answers your question -- the Tensor Core inference capability of Turing is going to be off the charts.