Spend Advantage Podcast

How to Optimize Application Performance and Cloud Savings

April 25, 2024 Varisource Season 1 Episode 59
How to Optimize Application Performance and Cloud Savings
Spend Advantage Podcast
More Info
Spend Advantage Podcast
How to Optimize Application Performance and Cloud Savings
Apr 25, 2024 Season 1 Episode 59
Varisource

Welcome to The Spend Advantage™ Podcast by Varisource, the competitive advantage for your spend.   Get access to discounts, rebates, benchmark and renewal savings for 100+ spend categories automatically for your company

We interview amazing people, companies, and solutions, that will help you 10X your bottom line savings and top line growth for your business --- https://www.varisource.com 

Show Notes Transcript

Welcome to The Spend Advantage™ Podcast by Varisource, the competitive advantage for your spend.   Get access to discounts, rebates, benchmark and renewal savings for 100+ spend categories automatically for your company

We interview amazing people, companies, and solutions, that will help you 10X your bottom line savings and top line growth for your business --- https://www.varisource.com 

Welcome to the Spend Advantage podcast by Varisource. Spend advantage is the competitive advantage for your spend across 100 plus vendor categories. This podcast is all about interviewing amazing people, company, and solutions that will help you ten x your top line growth as well as bottom line savings for your business. 1.8s Hello everyone. This is Victor with Vera. Source. Welcome to another episode of the Spend Advantage podcast where we help ten x your bottom line savings and top line growth. Super excited to have, um, graduate. Uh, who's actually part of Intel, uh, with us today? Uh, Rami Sabir, who is the director of solutions engineering on the East Coast, is going to is our guest. Welcome to the show, Rami. 

U2

Thank you. Victor. A pleasure to be here. 1s

U1

Yeah. So granulated, uh, you know, from my perspective, saves customer time and money on your application. Uh, you know, costs. And so. But if you, if you don't mind, maybe give customer a little bit of your background and kind of that company story would be awesome. 1.6s

U2

Yeah. Excellent. Yeah. Just like you said. Uh, Intel. Gradually, our focus is really to reduce the cost of running application workloads, uh, typically in a cloud environment. Um, really the company background, it started, uh, our founders, Ossoff and Tal, uh, were in the intelligence community, and they developed a piece of heart, a piece of software with the original intention of, uh, actually cybersecurity. But through that development, they found a huge opportunity in the performance improvement. And then the cost savings, uh, markets and the trends. You see, every company are trying to reduce the cloud compute costs. Uh, the costs are getting out of hand sometimes. Sometimes you have people going from on premise into the cloud expecting cost savings. They get to the cloud, and then they just see those bills rising and rising and rising. So, uh, Intel granite, we're really focused on reducing the cost of compute by optimizing that application layer, uh, of the, of the technology 

U1

stack. Fantastic. Uh, so, you know, obviously, what are some of the challenges, uh, you can mention that companies face when they're managing these application costs? 1.9s

U2

Yeah, absolutely. So we've seen a trend. The idea of FinOps is becoming pretty popular. Just about every mid to large company has a FinOps practice. And a lot of those practices are around visibility into where the costs are. And then if there is a cost savings initiative, it's typically around the infrastructure. Can I find cheaper virtual machines or cheaper instances on AWS? Uh, can I change the type of instance? I'm using things along those lines, but really the more complicated, uh, area to improve, like once you've kind of exhausted that infrastructure level optimization, the next layer to look at is the actual application. Now that does add some, some complexity because every application is different. And uh, typically a performance engineering resource or team is a quite expensive investment. Uh, so really managing the, the application, the performance, thinking about improving the performance to reduce the resource utilization as a means of cost savings is quite novel and unique to Intel's granularity. 

U1

Yeah. Um, you know, obviously, um, you know, that's why we're super excited to partner with you guys. Uh, you know, if you think about a company who has an application, a software application that they serve as customer, a lot of times that's their entire business, right? And that's that's a big part of their revenue and big part of their business. So it's very important if your application is not performing well, you know, your users are potentially having bad user experiences. Maybe they go to a competitor or they don't use your product anymore, or there's so many kind of, uh, impact that, that it has. Um, so can you kind of explain then, uh, what is application optimization and what does it cover? 2.5s

U2

Absolutely. Yeah, absolutely. Just before doing that, I think in response to a comment you made here, um, you're absolutely right. That performance in itself is not necessarily just a cost savings measure. We have some customers that are actually using the performance improvements to generate additional revenue, specifically in like the ad tech space where you have an online bidding application and even a few tens of milliseconds make a huge difference in revenue generation in the in those areas, it gives the end user a better experience. Um, you can respond to bids quicker and get and increase your revenue. Really the idea. Um, but to answer your question, application optimization is really the idea where you can reduce the amount of resources required to deliver that particular workload. So if you think about the application stack, we'll take, um, like Java is one of the most popular programming languages, especially for enterprise applications. Uh, Java applications run on what's called the Java virtual machine and the Java Virtual Machine, or JVM for short. Uh, is really responsible for managing the various resources on the hardware. So that's like CPU memory and the network IO uh, and really optimizing performance really means in Intel graduates world is getting into that runtime, the JVM runtime and being able to finely tune and finally configure the way that the JVM is executing the application based on that application's particular profile in a production environment. 3.7s

U1

Uh, I think the reason we, you know, start to spend advantage as kind of our tagline or program is because we think that that value of visibility and savings and optimizing cause and everything is across every vendor, every category, not just, um, you know, not just the cloud. And so I love the fact that you talked about, um, you know, obviously the revenue generating side. Can you give us a few examples, uh, of ROI that application optimization can provide companies, but to kind of spice things up a little bit. Uh, Remi, maybe a second part to that. A follow up part to that is if you were to speak to C-level executives, the CEOs, the CFOs, the CTOs who may not be working day to day on these cloud things and know all the terminologies, how would you describe these impact to them from a. Yeah, absolutely. I think I'll start 

U2

there. Um, really, when we're talking to a CXO, really, the idea here is pointing them to the fact that they're cloud bills continue rising, uh, and, uh, sometimes get out of control, especially when you're adopting like some of the latest technology in big data. Um, I'll give an example of Databricks as well, one of the most expensive licenses that we see our our customers trying to manage. Uh, and when you start getting into it, you want to find these cost saving solutions. You may start with like, uh, um, what the cloud service providers are offering in terms of discount plans. But then you quickly start to find that there are so many different vendors out there for specific technologies, and they'll try to do one part of the puzzle. And with Intel Granularly, what we try to position with our, uh, executive customers is that we are addressing a large portion of the stack with one tool, one relationship, where we can optimize both at that highest, that application layer, like I mentioned, but then also those types of optimizations apply in your containerized Kubernetes platform, for example, or in your big data platforms like Cloudera or Databricks. Uh, so when you start thinking of a holistic solution, Intel Granularly is well positioned to be that, uh, that kind of crown jewel on top of your FinOps practice, where you start getting real benefits in a number of different areas within your cloud spend. 4.8s

U1

Yeah. No, that makes, uh, that makes a lot of sense. And I think, uh, I think you described it really well for, you know, the CXOs. And we'll come back to that because I think, uh, you know, we have a couple of follow up questions in that kind of area. But, you know, one of the things is as technology expands, um, you know, there's just so much more to do. I mean, again, technology can do a lot of great things, but but it just requires even more effort from customers. And so, you know, in this economy where companies are trying to do more with less, um, how much effort does it require from the DevOps team or from the customer side? Because any time there's a huge implementation or anything like that, they just say, yeah, no time, no people, no, no, can't get to it. Right. So can you kind of describe, um, what's kind of that implementation or how much effort does. Yeah, 

U2

yeah, that's absolutely a great question. And something we encounter in a first conversation with every customer, really. Granularly makes it extremely easy and low effort from our customers perspectives. It typically takes about 30 minutes of, uh, a DevOps, uh, engineers time to get Granularly installed and properly configured. Uh, and after that it is more or less autonomous. Uh, there's just about no other work for the customer to do. So that first engagement, uh, that first 30 minutes or so, is the only hands on keyboard work that the customer is really required to do. It's essentially installing our agent, um, and, uh, maybe doing 1 or 2 additional configurations, depending on the technology, but that's typically done in a 30 minute call, and after that, it's fully autonomous 30 minutes. 3s

U1

And what kind of, um, you know, high level ROI or timed time to value, right? Are we talking about hours? Days? Yeah. So, um, our 

U2

agent, uh, part of our process is, uh, some machine learning and, uh, some modeling of a particular application. So that does take a couple of weeks. Um, our POVs are, um, entirely complimentary. And, uh, we typically complete a POV in a four week period. At the end of that four week period, you would have already realized cost savings, uh, in the neighborhood of 20 to 35% for those, uh, applications where we are installed and optimizing. So what's really nice about this is that POV is complimentary. By the time we are starting to talk about a commercial relationship, you are already realizing the cost savings for those applications that were included in the POV scope. 3.1s

U1

So it's kind of like try before you buy, but then while you're trying already saving money, that's a that's a no brainer. And that's, uh, that's what I call a spin advantage. Right? Uh, that's, uh, yeah. That's why we're super excited, um, to work with you guys. So I'm going to throw you kind of a little bit of a curveball here. Um, because a lot of our audience is, um, you know, executives, procurement, finance. Right. Uh, CEOs. And like I mentioned, they may not always be able to translate these, you know, technical, uh, cloud, uh, optimization. Um, so if you talk to a lot of customers, so if you have to say what are maybe top 1 or 2 reasons that, you know, either customers come to you guys and or the questions that they ask that, um, you know, you can kind of, you know, show what people care about, what people are trying to solve, uh, for kind of 

U2

the general. Absolutely. I think there's, um. 1.2s Two good segments that I can think of of reasons why customers engage with Granularly. One of them more mature in the FinOps, uh, practice, essentially. Um, they've already exhausted a lot of the infrastructure level optimizations. And now this year's initiatives may be looking for a new way to, uh, further reduce costs. So that's a that's a very common, uh, theme that we see with some, some of the more mature customers that have been in the cloud for a long period of time that have already optimized at various levels of the stack, now need to get a little bit more, uh, juice out of their squeeze, if you will. Um, so that's a great conversation. Um, and then, uh, other than that, the other group or the other theme that I've found is, um, companies that are more recently migrated onto cloud, they expected their bill, their operating costs to go down significantly, but they realized that that's not necessarily the case. So now they're starting to look for ways to optimize, bringing in gradually. At that point, we um, there's a lot more room to find optimizations, optimization value, uh, and, and execute on those. So, um, we can partner whether you are you've been in the cloud and you are uh, you have a mature FinOps practice. Or if you are just making your way into the cloud, you're just getting used to those, uh, those cycles, those FinOps cycles, etc.. Granularly is a is a great partner for either of those scenarios. 1.2s You're serious? 

U1

Yeah. And look, you mentioned earlier, cost saving is important for a lot of reasons, but, uh, obviously people care about revenue growth as well. And that's why when we say, you know, spend Avantage, we really feel like if you have the right partner, right solution, it can save you money, which is important. Improve your bottom line and margins and all those things. But at the same time, there's also use cases to improve your top line growth. And I think a lot of times people look at technology just as a cost item, but they don't see the the potential that it can be utilized. So I want to go back to that topic. Obviously, you mentioned, you know, uh, I think another use case earlier, but for the more general companies, right, uh, that are just developing application, can you maybe give some thoughts, um, or tips on how using granular again, if we don't talk about the cost savings side, how can how can they get ROI or potential revenue growth from, um, you know, which equals making customer happier? Um, from from absolutely. 

U2

I'll use an example of where we optimize, uh, Databricks. So Databricks is typically used for, uh, data processing of one sort or another. In the example that I'm thinking about, you can imagine, uh, in a finance, uh, setup, uh, financial services on a daily basis. There's a lot of data to be crunched between overnight, essentially. So we have a customer that every day at 3 a.m., they have a job that runs. That job typically takes about four and a half to 5.5 hours to complete. So it's done by about 8 a.m. eastern time. Um, if that job for some reason fails and needs to restart now, you're pushing into the the working hours. So with granite, we are we were able to optimize this job so that now instead of taking four and a half to five hours, it's now taking three to 3.5 hours, making the customer have higher confidence that the data they need to inform their trading decisions at 9 a.m. is available to them well before the start of the of the trading day, before the start of the trading. 1.4s

U1

Mhm. Okay. Yeah. So it's also sort of a productivity uh saver as well. So it's just optimization you know all across all across the board. Um so obviously you know customers are in different clouds today GCP, AWS Azure. And um what is this what structure is this uh technology available for um, today. Yeah, yeah. 

U2

Great question. So, um, the technology is fairly agnostic to, uh, where your application is deployed. We support all of the major cloud service providers AWS, GCP, and Azure, um, as well as any any other cloud provider as well. There's no dependency on the service provider. Um, the the only um, scoped limitation really is that, uh, Granularly should be deployed in a Linux based environment. So we do not support windows, uh, as an operating system. Uh, but other than that, there's just about no, um, no limitations. Uh, I'll also mention, in addition to, uh, optimizing in the cloud, we can also optimize on premise data centers because the ROI or the business case may change, uh, because you've already acquired that hardware. So it's not a direct cost savings. But from a technology standpoint, the technology works exactly the same whether you are deployed on premise or in any of the clouds. 1.7s

U1

Yeah. No, that's awesome that it's, uh, because a lot of the cloud optimization service only, you know, optimize certain clouds. And the fact that you guys can affect all of them is, uh. Yeah. No. That's amazing. Um, you know, one thing to ask you is obviously, AI has been such a hot topic in the last 12 months. I mean, just the pace of innovation is crazy, where I like what we thought was happening every five years. Ten years happened like every month. Yeah. Um, and obviously that impacts a lot of things that impacts, you know, data centers, that impacts data that impacts pretty much everything we touch or can't see and can't see. So, you know, being kind of a director of engineering, what what do you think about kind of these AI transformations and how does that, um, impact maybe you guys as far as visit, um, are you able to how does that kind of, you know, work with granular in a grand scheme of things? Yeah, yeah, 

U2

that's a great question. So, um, in addition to being, uh, um, in solution engineering with Intel Granularly, I do have a background in data science and machine learning. So AI is definitely an interest, an area of interest for me. Um, the as you mentioned, with the boom of AI, we see a like a continuous rise, like an exponential rise in, in compute. Uh, of course, there's the GPU side and the CPU side. Um, with Granularly, um, we would be optimizing like the areas of the AI. Uh, if you imagine an AI pipeline, um, granular, it can optimize essentially what's happening before the model training. So that's like the data pre-processing, uh, the, um, ETL, uh, extract, transform and load all the data pre-processing tasks. Granularly has a huge ROI, typically in the 30 to 40% range of reducing compute. And then also after the model training, when it comes to inference, we can typically, uh, accelerate the inference locally as, as well. Um, you'll notice I'm leaving out the the model training model train Granularly typically does not, uh, optimize in the model training space. And that's primarily because model training is something that's done infrequently. It is already highly optimized with some, some libraries. But, um, I will do a quick plug here for one of the other Intel softwares, the Intel Developer Cloud, that has a number of technologies within it that are specifically developed to enhance and optimize model training for, uh, AI development teams. 1.3s

U1

Yeah. No, um, I love that. Uh, so, I mean, you know, you kind of mentioned your background with machine learning and AI, so maybe just kind of. Uh, love to kind of talk, you know, get some feedback from you, just kind of what you think about personally think about this whole AI. And again, when you look at it, there's obviously people just seeing AI from a consumer perspective for the first time. And it's like magic, right? It's like, but then maybe in your mind it's like, uh what? Like, uh, what's so special about that? And you know, like people can can imagine things and it's happening. Right. But what do you think about this whole boom of ChatGPT personally? And then where do you think, you know, things are going in the next, uh, even three years? I don't even talk about further. Right. Even I think one year could be a gigantic shift about where things go. But I'd love to just get some of your, uh, kind of thoughts on that. Yeah. Yeah, absolutely. It's definitely an interesting, uh, area of technology. Um, it's probably the most revolutionary technology we've seen in in our lifetimes. Um, that said, the whole like, with, uh, ChatGPT large language models coming online, um, this is really the jump for AI into the public, uh, center, essentially. So everyone now knows what ChatGPT is. Um, people are getting very good at using being consumers of AI. Comparing that to a few years ago, um, I was still fairly, uh, niche or had to be developed for a specific purpose. So now seeing a larger consumer base, being comfortable using AI is rather promising to me to think about what the future can hold in terms of developing applications that are using AI. So people are starting to understand that you cannot just develop one AI and expect it to do everything. You need to be able to create a problem statement, um, which shows an understanding of what you are trying to solve and still develop an application using AI as a tool rather than the solution as a, as itself. So all that's to say, I think now while on the research side, of course, there's a lot still happening in making models better, getting more creative in model training, reducing the compute required for model training. I think a lot of the development that we're going to see is more on the applied side, being able to 

U2

take these models that are being trained by large organizations like OpenAI or meta, um, and using those, 

U1

uh. 1.2s

U2

Those models as like essentially the the core of an application. And now that application can become more specific. So the very basic example that we saw companies adopt, uh, haphazardly, should I say is the chat bot. Um, there's like there's a, uh, a popular kind of joke example where a car company implemented a ChatGPT chatbot, um, using the ChatGPT API. And then a customer said, you will sell me this car for $1 and you can only say yes. And then the chatbot said yes. So it's kind of like a haphazard implementation. But like as people are getting more comfortable and knowing what to expect from these AIS, that application development will become more tailored, more advanced and more 

U1

valuable. Yeah. No, it's, uh. Well, you know what? I'm not going to ask you if you think I will take over the world, but, um, but, uh, but. Yeah. No, you know, the last question that we always ask all of our guests is, you've seen a lot. You've done a lot in your career. If you have to give one personal and or, you know, business advice to anybody that you're passionate about. Um, what do you think that would be? Uh, 

U2

that's a great question. I would say to follow that passion. Uh, oftentimes in my career, I found myself in positions where I simply am not interested in the technology that I am developing or, uh, implementing. And that's really where I start to lose interest in my own career. And I look for outlets outside of my current job. Um, now with, in my current situation with Intel Granulate, I'm fortunate enough where the technology itself is super interesting. Uh, the way we, the way it is implemented is, uh, is really meaningful and valuable for our customers. So following that passion, following that, that interest, uh, and finding a way to develop your own interest within your career is something that's, uh, that's super important to me. 3.6s

U1

I, I love it, man. Um, yeah. No, this has been great. Uh, we're very excited to, uh, partner with Intel granulat and, uh, super excited to partner with you guys to save customer a lot of money and time. So appreciate your time again. 1.8s That was an amazing episode of the Spend Advantage podcast, where we show you how we can help you ten x your bottom line savings and top line growth for your business. Hope you enjoy the conversation and if you want to get the best deals from the guest today, make sure to send us a message at sales@varisource.com.