Well, the AI field itself is a field that has its own specialties and many, many companies that are all making different bets. And it's useful to take a small step backward, which is, fundamentally, AI has one need: much, much, much faster computation. No matter how hard we push on Moore's Law and so on, that's hard to achieve. And so the answer, by definition, has to be that architectures will become optimized for different applications. Now, architectures can mean chips, it can be groups of chips, it can be FPGAs, it can be a combination of things. And in fact, a number of the very large suppliers provide combinations of processors together with graphics processors together with FPGA pieces, all with one objective: to optimize for an architecture that's appropriate for a specific situation.

And so, in many ways, we ourselves have been a user of these capabilities, because of course we use a lot of computation for our software tools. But the hardware tools that we provide are our own way of saying that for certain tasks, such as simulation, we can do much better on an emulator or an FPGA board. So I expect all of these technologies to continue to evolve, to continue to be available in various combinations, and the race will continually be on to build machines that can do really well.

My last comment is that we ourselves are a provider of tools to a vast, vast group of emerging AI companies. And this is encouraging, because I think those people will not give up for quite a while in trying new architectures, and some will ultimately go into very large volume. And so, in many ways, we see a number of those companies as hard technology drivers for Synopsys. And businesswise, we're doing very well with them.