Earnings Labs

GSI Technology, Inc. (GSIT)

Q4 2023 Earnings Call · Tue, May 16, 2023

$7.25 (+0.97%)


Same-Day: +0.69% · 1 Week: +30.95% · 1 Month: +47.81% · vs S&P: +41.02%

Transcript

Operator

Operator

Greetings, and thank you for standing by. Welcome to the GSI Technology's Fourth Quarter and Fiscal 2023 Results Conference Call. [Operator Instructions] Before we begin today's call, the Company has requested that I read the following Safe Harbor Statement. The matters discussed in this conference call may include forward-looking statements regarding future events and the future performance of GSI Technology that involve risks and uncertainties that could cause actual results to differ materially from those anticipated. These risks and uncertainties are described in the Company's Form 10-K filed with the Securities & Exchange Commission. Additionally, I have also been asked to advise you that this conference call is being recorded today, May 16, 2023, at the request of GSI Technology. Hosting the call today is Lee-Lean Shu, the Company's Chairman, President, and Chief Executive Officer. With him are Douglas Schirle, Chief Financial Officer; and Didier Lasserre, Vice President of Sales. I would now like to turn the conference over to Mr. Shu. Please go ahead, sir.

Lee-Lean Shu

Chairman, President & Chief Executive Officer

Good day, everyone, and welcome to our fiscal fourth quarter and full year 2023 financial results earnings call. The 2023 fiscal year was filled with many positive developments, new partnerships, and progress toward achieving our goals. We also experienced setbacks and unforeseen delays on several fronts with the APU. We learned a lot during the year about the addressable market Gemini-I can reasonably pursue with our team, given our limited resources. However, we have recently made significant strides in leveraging third-party resources to help identify users, resellers, and OEMs. These resources are proving valuable in helping us identify opportunities for capturing revenue and increasing awareness of the APU's tremendous capabilities.

We have also sharpened our focus for Gemini-I to leverage our resources and prioritize near-term opportunities, such as synthetic aperture radar, or SAR, and satellites, where we have a superior solution. We understand these markets and know whom we can support and help with our offering. Another focus application for Gemini-I is vector search engines, where our APU plug-in has demonstrated enhanced performance. To this end, we have dedicated more resources and prioritized the target customers that have expressed interest in leveraging our solution. Our data science team has been busy working on a SaaS search project with one leading provider, and we plan to pivot to other players in the space once we have met our deliverables with the first partner.

Looking ahead on our roadmap, we will build upon the work we are doing today in future APU versions to address large language models, or LLMs, for natural language processing. Vector search engines are a fundamental part of the ChatGPT architecture and essentially function as the memory for ChatGPT. Large language models use deep neural networks, such as transformers, to learn from billions or trillions of words and produce text.…
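The "memory" role described above can be sketched as a minimal similarity search over an in-memory index of embeddings. This is purely an illustrative sketch of the general technique (the function names and toy vectors are hypothetical, not GSI code or its APU API):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(index, query, k=2):
    # Rank stored (id, embedding) pairs by similarity to the query
    # and return the top-k matches -- the retrieval step an LLM
    # application uses to pull relevant context from its "memory".
    scored = [(doc_id, cosine(vec, query)) for doc_id, vec in index]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Toy "memory": document embeddings a language model could retrieve from.
index = [
    ("doc-a", [1.0, 0.0, 0.0]),
    ("doc-b", [0.9, 0.1, 0.0]),
    ("doc-c", [0.0, 1.0, 0.0]),
]
top = search(index, [1.0, 0.05, 0.0])
```

In production this brute-force scan is replaced by an approximate nearest-neighbor index, but the retrieve-by-similarity step is the same.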

Didier Lasserre

Vice President of Sales

Thank you, Lee-Lean. As Lee-Lean stated, we have sharpened our focus on a few near-term APU revenue opportunities. In addition, we strengthened our team with a top data science contractor whose primary job is to accelerate the development of our plugin solution for the high-performance search engine platforms that Lee-Lean mentioned. We have also begun working with a company that offers custom, embedded AI solutions for high-speed computing using Gemini-I and Gemini-II. Another critical development to improve our market access for the APU has been adding distributors. We are pleased to announce that we have added a new distributor for our Radiation Hard and Tolerant SRAM, and our hardened APU, for the European market. In addition to our partnerships and focus on near-term opportunities, we plan to build a platform to enable us to pursue licensing opportunities. This is in the very early stages, and we have work to do before we formally approach potential strategic partners. That said, we have had a few preliminary conversations on determining what is required to integrate Gemini into another platform. This would allow us to identify the specific performance benefits for a partner’s applications to ensure effective communication of the problem we solve in their system or solution. We recently demoed Gemini-I for a private company specializing in SAR satellite technology. They provide high-resolution Earth observation imagery to government and commercial customers for disaster response, infrastructure monitoring, and national security applications. The satellites are designed to provide flexible, on-demand imaging capabilities that customers can access worldwide. They recently provided the datasets to conduct comparison benchmarks on Gemini-I, and we are commencing the process of running those benchmarks. SAR is one market we anticipate that we can generate modest revenue with Gemini-I this fiscal year. GSI was recently awarded a Phase 1 Small Business…

Douglas Schirle

Chief Financial Officer

Thank you, Didier. I will start with the fourth quarter results summary, followed by a review of the full-year fiscal 2023 results. GSI reported a net loss of $4 million, or $0.16 per diluted share, on net revenues of $5.4 million for the fourth quarter of fiscal 2023, compared to a net loss of $3 million, or $0.12 per diluted share, on net revenues of $8.7 million for the fourth quarter of fiscal 2022 and a net loss of $4.8 million, or $0.20 per diluted share, on net revenues of $6.4 million for the third quarter of fiscal 2023. Gross margin was 55.9% in the fourth quarter of fiscal 2023 compared to 58.6% in the prior-year period and 57.5% in the preceding third quarter. The decrease in gross margin in the fourth quarter of 2023 was primarily due to the effect of lower revenue on the fixed costs in our cost of goods. Total operating expenses in the fourth quarter of fiscal 2023 were $6.9 million, compared to $8.1 million in the fourth quarter of fiscal 2022 and $8.5 million in the prior quarter. Research and development expenses were $5 million, compared to $6.5 million in the prior-year period and $5.5 million in the prior quarter. Selling, general and administrative expenses were $1.9 million in the quarter ended March 31, 2023, compared to $1.5 million in the prior-year quarter and $3 million in the previous quarter. Fourth quarter fiscal 2023 operating loss was $3.9 million compared to an operating loss of $2.9 million in the prior-year period and an operating loss of $4.8 million in the prior quarter. Fourth quarter fiscal 2023 net loss included interest and other income of $101,000 and a tax provision of $191,000, compared to $47,000 in interest and other expense and a tax provision of…

Operator

Operator

[Operator Instructions] And the first question comes from the line of Raji Gill with Needham. Please proceed with your question.

Nick Doyle

Analyst

This is Nick Doyle on for Raji Gill. Two questions on Gemini-II. Are all the costs related to the tape out and then the testing volume production contemplated in your current outlook? And then could you expand on what kind of applications you're seeing traction in with that Gemini-II, specifically anything in ADAS and then using the large language models? Thanks.

Douglas Schirle

Chief Financial Officer

Yes. In terms of R&D spending, yes, most of what we're spending today is on Gemini-II. We have the hardware team here in Sunnyvale and the software team in Israel. And there will be a tape-out in the first half of fiscal 2024 for Gemini-II. It will run probably about $2.5 million. Other than that, the R&D expenses should be similar to what we've seen in the most recent quarter.

Didier Lasserre

Vice President of Sales

And regarding the applications, you cut out. Were you talking Gemini-I or Gemini-II?

Nick Doyle

Analyst

Gemini-II, please.

Didier Lasserre

Vice President of Sales

Yes. So Gemini-II. ADAS, as we discussed in the conversation before, is something we want to address, but most likely we'll use a partner to do that. And as far as the large language models, as we discussed, we certainly feel that the Gemini technology, the advantage in the technology, will certainly be applicable there. And so whether it starts with Gemini-II, or whether it's also customized with Gemini-III, is to be determined.

Nick Doyle

Analyst

Okay. That makes sense. And then just a quick one. Did you guys -- did you say if there is a timeline? Is there a timeline for the rad hard road map for the product you mentioned in the EU?

Didier Lasserre

Vice President of Sales

The rad-hard and rad-tolerant SRAMs are available today. We have done some testing. It's -- boy, it's been at least a year and a half since we did the testing on the APU, and Gemini-I specifically came back very favorable. But the beam was a little bit off that day, so the tests we could do were limited. So we are actually going to do the full complement of radiation testing in the second half of this year, so we have all the data requirements for the folks that will be sending it into space. So officially, the APU will be rad-tolerant sometime by the end of this year.

Nick Doyle

Analyst

Excellent. Thank you.

Operator

Operator

And the next question comes from the line of Jeff Bernstein with TD Cowen. Please proceed with your question.

Jeff Bernstein

Analyst · TD Cowen

Hi, guys. Just a couple of questions for me. Just wanted to make sure I heard right. You brought on a consultant that's helping target applications for Gemini-I. Is that right?

Didier Lasserre

Vice President of Sales

They're specifically helping us write the interfaces for some of the fast vector search platforms that are out there.

Jeff Bernstein

Analyst · TD Cowen

Got you. Okay. And then you said there's a custom embedded AI solutions supplier, and that guy is going to now integrate Gemini-I into some high-performance compute solutions for clients. Is that -- am I getting that right?

Didier Lasserre

Vice President of Sales

Partially. So it's not limited to Gemini-I; it's Gemini-I and Gemini-II. And they have a multitude of different potential applications, ranging from SAR to satellite applications to marine search and rescue. There are a lot of different applications that they're looking at it for. In some of the cases, they'll be able to use essentially our Leda boards. But in many cases, they will be developing their own ultra-small boards, because our boards are considered a little too big for some of these applications. So it's a multitude of different applications, and it will be for both Gemini-I and Gemini-II.

Jeff Bernstein

Analyst · TD Cowen

Got you. Okay. And then as far as the large language model kind of applications, I think there are two potentially. Correct me if I'm wrong. One is just to run queries, as opposed to train -- just run queries of these large matrices quickly and at low power. And I guess the other one has to do with making training more efficient, by being able to not redo matrices over and over again as you do new learning. Is that right? And which are we talking about here today seeing some light of day?

Lee-Lean Shu

Chairman, President & Chief Executive Officer

Yes. Our primary target will be the search, which is the inference path, okay? We are not on the training path, okay? But if you can do search efficiently, you can help the training. Like we can do few-shot training or single-shot training, which means you don't even need to train on the data set. If a first query comes in that we don't recognize, we can store it into our memory chip. And the second time, when a similar item comes in, you can recognize it right away. That's very different from traditional training, okay? In traditional training, you have to run the whole model, the whole data set, over again. That's very, very time consuming. So if you have the capability to do few-shot training, then you can save tremendously on the training part.
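The store-then-recognize idea described above -- recognizing a repeated item by similarity to a stored embedding rather than by retraining a model -- can be sketched as a tiny associative memory. This is an illustrative sketch only; the class, threshold, and toy vectors are hypothetical, not a description of the APU:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

class OneShotMemory:
    # Associative memory: an unseen item is stored once, then later
    # items are recognized by nearest-neighbor similarity -- no model
    # retraining over the whole data set.
    def __init__(self, threshold=0.9):
        self.items = []          # list of (label, embedding)
        self.threshold = threshold

    def recognize(self, label_if_new, emb):
        for label, stored in self.items:
            if cosine(stored, emb) >= self.threshold:
                return label     # seen something similar before
        self.items.append((label_if_new, emb))
        return None              # unknown: stored for next time

mem = OneShotMemory()
first = mem.recognize("cat", [1.0, 0.0])    # unknown, so it is stored
second = mem.recognize("??", [0.99, 0.05])  # similar item is recognized
```

The contrast with traditional training is the point: updating this "memory" is one insert, not a pass over the entire data set.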

Jeff Bernstein

Analyst · TD Cowen

That's great, thank you.

Lee-Lean Shu

Chairman, President & Chief Executive Officer

Yes.

Operator

Operator

And the next question comes from the line of Orin Hirschman with AIGH Investment Partners.

Orin Hirschman

Analyst · AIGH Investment Partners.

Hi how are you?

Lee-Lean Shu

Chairman, President & Chief Executive Officer

Good.

Orin Hirschman

Analyst · AIGH Investment Partners.

So one of the things that the Gemini in-memory processing architecture is very good at, which really wasn't of tremendous interest when you first introduced Gemini, is this natural language processing. And in all this time, the whole world has changed, and you've got things like ChatGPT and other similar types of NLP situations where it actually fits exactly into what you do best. So I guess it sounded like, from one of the prior comments on the last question, that you're actually having code and drivers written to be able to optimize the use of Gemini-I, and certainly Gemini-II, for this application? So I would think that one of the simple applications where you could sell a lot of boards is simply the acceleration, where everybody is having difficulty using GPUs, because this is not what a GPU is designed for on the AI side in terms of NLP, in order to accelerate something like ChatGPT.

Lee-Lean Shu

Chairman, President & Chief Executive Officer

What's the question again?

Didier Lasserre

Vice President of Sales

Yes. What question?

Orin Hirschman

Analyst · AIGH Investment Partners.

So the question is -- isn't that, in fact, a priority in terms of what you're working on, to be able to introduce your own acceleration boards to do it, whether alone or with partners? Is it, in fact, a great application? It sounds like, certainly so far on the call, that it's a great application for the Gemini APU?

Lee-Lean Shu

Chairman, President & Chief Executive Officer

Okay. I think I discussed this in my statement. The biggest challenges for the large language model are two-pronged. First one, you need a very large memory. The second one, you need very high-bandwidth memory. Those are two very difficult things to achieve, okay? I think today in the market, nobody has a good solution, okay? So just as I mentioned, we do have a very exciting discussion with -- we call it a large cloud service provider, to see how we can help from our Gemini foundational architecture to move this thing forward, okay? We already have very, very good memory bandwidth, okay? That's why I mentioned in my statement that we are 15 times the memory bandwidth of today's state-of-the-art GPUs, okay, or parallel processors, okay? And that's our inherent architecture, okay? So if we can add to this the high memory capacity, then we will have something which nobody in the market can provide, okay? So we're very excited. We will try to explore this advantage we have and see where we can go from here.
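A rough back-of-envelope shows why memory bandwidth bounds LLM token generation: producing one token requires streaming roughly every model weight once, so throughput scales with bandwidth over model size. The numbers below are illustrative assumptions for the arithmetic only, not GSI or GPU specifications:

```python
def tokens_per_second(model_params, bytes_per_param, bandwidth_gb_s):
    # Generating one token streams (roughly) every weight once, so
    # throughput is bounded by bandwidth / model size in bytes.
    model_bytes = model_params * bytes_per_param
    return (bandwidth_gb_s * 1e9) / model_bytes

# Hypothetical: a 7B-parameter model in 16-bit weights (14 GB of weights),
# on roughly HBM-class bandwidth, versus a 15x-bandwidth device (echoing
# the multiple cited above, purely for illustration).
base = tokens_per_second(7e9, 2, 900)       # ~64 tokens/s upper bound
wide = tokens_per_second(7e9, 2, 900 * 15)  # scales linearly with bandwidth
```

The model is simplistic (it ignores compute, caching, and batching), but it captures why both large capacity and high bandwidth matter for inference.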

Orin Hirschman

Analyst · AIGH Investment Partners.

Any idea when you will have the coding of the interface done, to be able to demo the type of acceleration gains that we're talking about with something like ChatGPT? So customers can actually see some type of benchmarking, even with Gemini-I, and maybe a simulation until Gemini-II is ready. When will that code be ready that you mentioned you're working on?

Lee-Lean Shu

Chairman, President & Chief Executive Officer

Yes, yes, yes. OpenAI, they have plug-ins, okay? So basically, you can put your software as a plug-in to the main machine, and then you can utilize the existing model with the plug-in. So right now, we are working on it, okay -- on Gemini-I presently, with Gemini-II to follow on. And at least from that, you can extrapolate from how well those are working to the future.

Orin Hirschman

Analyst · AIGH Investment Partners.

Any idea when we might see some benchmarks in the coming months?

Lee-Lean Shu

Chairman, President & Chief Executive Officer

Maybe a quarter or two, and we will have something to tell you guys.

Orin Hirschman

Analyst · AIGH Investment Partners.

Okay. And just a related question, but even more futuristic. There's talk of doing something similar to, let's say -- and there are a number of projects -- and in fact, you even had an early project with movie, muve, to be able to show off what you can do in terms of visuals as well. And I guess my question is, taking that same natural language processing and doing it on a visual level is beyond belief in terms of how computationally intensive it is, but also well suited for what you guys do. Is anybody talking about doing anything like that? Obviously, you did that early demo, which impressed a lot of people. But obviously, that's even a step beyond almost what people have dreamed of today. But you can't do that using current architecture. So any thoughts on that from a futuristic perspective -- will that need a Gemini-III, or can that be done on Gemini-II? And then one last follow-up question.

Lee-Lean Shu

Chairman, President & Chief Executive Officer

So you're asking whether -- how we want to do it in the future generation, is that it?

Orin Hirschman

Analyst · AIGH Investment Partners.

No. Specifically, the more important part to me is just in terms of incredible visual search capabilities -- almost like NLP search capabilities on visuals. You did that impressive early demo with movie and then some other experimental projects. And people all over the world are starting to do experimental projects on massive amounts of visual data. Any more thoughts as to -- that's obviously very suitable, or uniquely suitable, for what you do, versus just GPUs, for that matter. Any other interesting projects like that movie project? And I know it's a bit futuristic, but has anybody done more in terms of that massive type of visual search -- comparative visual search, using NLP for visual search -- using Gemini?

Lee-Lean Shu

Chairman, President & Chief Executive Officer

Yes. As I just mentioned, with our partner, we looked at extending the architectural advantage of the Gemini architecture. We looked at one workload where, if we have enough memory, we would be 10x faster than any solution that exists today, okay? So that's why we are saying, hey, we have this inherent advantage there. But the thing is, we don't have enough built-in memory for that, okay? So if, for the future roadmap, we can put enough memory into it, then you are looking at order-of-magnitude performance better than existing solutions.

Orin Hirschman

Analyst · AIGH Investment Partners.

On that note, a closing question. Just in terms of what nanometer geometry is being used for Gemini-I and Gemini-II, and what you're thinking for Gemini-III. Obviously, that will affect what you just discussed in terms of the ability to pack in memory, et cetera. If you can tell us more about that? And then just one follow-up, and that's it from me. Thank you so much.

Lee-Lean Shu

Chairman, President & Chief Executive Officer

Yes. Today, Gemini-I is 28-nanometer, and Gemini-II is 16-nanometer. And if we look at the future, okay, today's state-of-the-art GPU is 4-nanometer. If we look at the future and we do 5-nanometer and then we build in the 3D memory there -- because the only way you can get high-capacity memory with a reasonable footprint is 3D memory. So if we put in the 3D memory with 5-nanometer, we will be an order of magnitude better.

Orin Hirschman

Analyst · AIGH Investment Partners.

So this is the follow-up question -- with that understanding in terms of Gemini-III, but knowing that Gemini-II is going to be the platform coming up here shortly, I mean, the key platform. In terms of your ability to accelerate NLP -- again, non-visual, forget about that futuristic question -- here today, in terms of accelerating NLP applications and ChatGPT, et cetera, does Gemini-II have enough in it so that you're competitive, or even superior, on that type of application to a leading-edge, optimized GPU, like a Hopper-style GPU? Have you passed that with Gemini-II? And the question only is, can you leapfrog it even further? That's my last question. Thank you so much.

Lee-Lean Shu

Chairman, President & Chief Executive Officer

As I mentioned, there are two things: big memory capacity and big memory bandwidth, okay? We have one of them. So for any workload that can fit into our chip, we will be the best solution out there. There are many, many cases like that, okay? Even with ChatGPT, it doesn't have to be a humongous dataset, okay? It can be a smaller dataset. And if the dataset can fit into our chip, we will be number one in the market.

Orin Hirschman

Analyst · AIGH Investment Partners.

Okay great, thank you so much.

Operator

Operator

[Operator Instructions] The next question is from the line of [Luke Bohn], Private Investor. Please proceed with your question.

Unidentified Analyst

Analyst

Hi, good to be back; hope you are all well. Very exciting announcements and developments -- great to hear the comprehensive layout there. Just a few really minor clarifications, and going a little bit broader with the near-term potential. Wondering if your Amazon Web Services server offering is capable of fielding, say, a broader range of companies and potential end-use cases that could more or less play around with your service without having to go through more complex processes of embedding or other integration -- just plug and play, to see what you can do for their applications. Especially thinking about vector search, but also rich data, like was mentioned -- maybe for metaverse, maybe for advanced registration, things like that. Yes, wondering how you're seeing the potential to expand the Amazon Web Services or a similar offering on, say, Azure or other clouds, and especially how that would relate to an earlier rollout of Gemini-II from your own facility, your own servers on the cloud?

Didier Lasserre

Vice President of Sales

So, we've started -- as we've discussed in the past, we've started the integration with OpenSearch, and that's ongoing. And we have already set up our own servers for that. We have some here in our Sunnyvale facility, some in our Israeli facility, and then we also have some at an off-site facility that's directly across the street from AWS West, and it's directly connected. So we've had that in place with Gemini-I. Over time, obviously, we would migrate those to Gemini-II. So those are in place. We do have some SAR demos that people can run off of those remotely. It's not set up yet to let you load your own data; it's the datasets that are already in there that you can run. And so we're not at the point yet where you can enter your own data, at least not larger datasets. But that is certainly the direction we're going. We're just not quite there yet.

Unidentified Analyst

Analyst

Do you have a timeline on when you would be able to roll out those interactive features and capacities?

Didier Lasserre

Vice President of Sales

We're shooting for this year. For some of the examples you brought up, we're going to get some help from this data science contractor that we have on board now. So it's something we're trying to roll out in the second half of this year.

Unidentified Analyst

Analyst

Thanks so much. All right, that's all I have [indiscernible] yes appreciate it, yes.

Didier Lasserre

Vice President of Sales

Thanks Luke.

Operator

Operator

And the next question is a follow-up from the line of Jeff Bernstein with TD Cowen.

Jeff Bernstein

Analyst

I just wanted to see if you can give us an update on the Elta SAR application and what's going on there?

Didier Lasserre

Vice President of Sales

Yes. So as you recall, we did the POC with them, and it was a very broad POC. It could be used for different vehicles or vessels. It could be used at a multitude of heights, from 100 meters to much, much higher -- obviously, into space. And so, the initial program they were looking at us for was just a single laptop, I guess you could call it that. And they had already been using a GPU, so they're still using the GPU for that program. There's a follow-on program that they're looking at us for now, and we're going through that process with them. It won't be another POC, because we've already done one, but it will be a bit of a different project than what we were working on with them before. But it will still be under SAR, and it will still be the same algorithm, so it should be a simple integration.

Jeff Bernstein

Analyst

Okay. And then just wondering about waiting to hopefully get some space provenance on the Rad-Hard SRAM, and wondering if you guys have any visibility now on when that launch might happen, or is it permanently scrubbed?

Didier Lasserre

Vice President of Sales

No, it's not permanently scrubbed. We follow up -- I get your frustration, because I'm with you on this one. So it's not scrubbed. There were multiple programs that they -- when I say they, there were a few defense contractors, as we've talked about, using it. There have been a couple of the programs that have been scrubbed, but the larger ones we're looking at have not been scrubbed. They're certainly still out there. It's just that they've been pushing out the launch dates, and we're just not getting a good feel for exactly when the next launch is going to be. Originally, we know they were delayed because they couldn't get some critical components. And now it's just a matter of getting them to actually do it. So the answer is, we're still optimistic about it. It's just the timing that is elusive for us -- when it's actually going to happen.

Jeff Bernstein

Analyst

And so can the European distributor do anything on the Rad-Hard piece, or are they stuck with just doing Rad-tolerant until you get that space provenance? Or is it a different approach in Europe?

Didier Lasserre

Vice President of Sales

No, they're definitely going to be going after everything. So for the folks that we've already sent parts to that were looking to get heritage, it's really just a heritage part. And the heritage is just the signal to the world that says your parts have been launched into space, and they work. And so it's really an additional check mark in a box for a lot of these folks. But it doesn't change the fact that our parts are already internally qualified to work up there. So we know they will work, based off of the testing that we have done. So this European distributor is going to be finding additional opportunities for us. I mean, the folks that we were looking to do the heritage for on the short-term launches, those were U.S.-based companies. We have shipped some Rad-tolerant parts and at least one Rad-Hard part to a European customer, but they were not the ones we anticipated to get us the initial heritage.

Jeff Bernstein

Analyst

Okay, all right, great. And then any update on some of the scientific applications -- has the Weizmann Institute come back for more boards, or any analogous type customers in pharma, med-tech, biotech, universities, et cetera?

Didier Lasserre

Vice President of Sales

Universities, yes. So we're candidly not spending a lot of time on that market. The revenue opportunities for the other markets we've discussed today are larger. We do have two universities that -- let me think -- yes, two. There are two different applications at two different universities that are looking at them for genomics. And so they will essentially be running the algorithms and doing the write-up. But we personally are not spending much effort ourselves. We've already done a plug-in specifically for the Biovia Tanimoto search. And so it just doesn't make sense for us, based off of our limited resources, to spend more time developing more algorithms for more platforms. The revenue volumes there just aren't as great as they are in the other markets we're addressing.

Jeff Bernstein

Analyst

Makes sense, makes sense. Thanks.

Operator

Operator

There are no further questions at this time. I will now turn the presentation back to the hosts.

Lee-Lean Shu

Chairman, President & Chief Executive Officer

Thank you all for joining us. We look forward to speaking with you again when we report our first quarter fiscal 2024 results. Thank you.

Operator

Operator

That does conclude today's conference. We thank you for your participation and ask that you please disconnect your line.