Yes. So, the analogy we're using, and I think I can credit one of our internal data scientists with this (I don't know if he got it from somewhere else, but I find it very useful), is to picture the totality of human knowledge, as expressed or potentially expressible as data, as being the size of a football. If we hold that image in mind, the data that's been used to train today's LLMs is, by comparison, the size of a dime. Now, what does that mean? It means there's a whole lot of additional data that the models of the future will need to learn from in order to function at something that, over time, begins to resemble AGI, the ability to closely mimic the capabilities of a human.

When we peel that back a bit, we see lots of different things: expert data, reasoning data, multilingual data, multimodal data, meta-learning. That is, learning and expressing as data how human beings think when they take apart a problem, when they assign components of that problem out, when they order their operations around solving a particular problem. And the problem doesn't even have to be a particularly sophisticated one, although it can be.

So there's a ton of data that needs to be captured and made addressable in order for the models to learn from it. That, we believe, is our opportunity, or one of our opportunities. It's not even our only opportunity, but it's a very exciting one. And given what you were all, and we were all, reading in the recent earnings reports about the uptick in capital spending, principally for these technologies and capabilities, we believe we're still in the early innings.