Clint Stinchcomb
Management
I think it is a great question, and I think that if you look at the evolution of what our technology partners are looking for and working toward, it starts in 2020 with large language models, where there was a lot of text. That text was designed to help teach the models to read and to create document summarizers, knowledge Q&A, support bots, and coding copilots. We transitioned up the scale to multimodal AI, which is text, images, audio, and video, and that led to video summarization, camera assistance, text-to-image, and text-to-video. Then there is agentic AI, which is obviously part of the spectrum now, and that is where systems plan, use tools, and act autonomously. The use cases there are research agents, travel booking assistants, code agents, data ops agents, and CRM bots. There is almost an infinite number of use cases. And then certainly an exciting stage that we are in the early stages of right now is physical AI, where the content is being used to embed AI into robots, cars, drones, and devices. Similarly, I think there is an almost infinite number of use cases here, with warehouse robots, self-driving cars, home robots, delivery drones, factory arms, all kinds of things. So it is extraordinarily exciting, and difficult to keep up with all of the use cases, but the good news is that we have such a variety and such a strong scope of video and data that we are able to fulfill a large scope and scale of requirements.