Telemetry Now  |  Season 2 - Episode 28  |  January 23, 2025

The Role of AI in IT Operations

In this third episode in our customer series, Avi Freedman, Jezzibell Gilmore, and Joe DePalo from Netskope join host Philip Gervasi to explore the transformative role of artificial intelligence in IT operations. From NetOps and security operations to executive-level strategies, the group talks real-world AI use cases, potential risks, and best practices for safe and effective implementation.

Transcript

Artificial intelligence is very top of mind across most industries and now especially in the world of tech and IT operations.

Now with us again today in this Telemetry Now customer series are Avi Freedman, Joe DePalo, and Jezzibell Gilmore to talk about AI, real use cases in IT operations, some security concerns and considerations, and how we can look at AI from a high-level executive perspective as well. Now I love talking about AI, and so I'm very excited about its role in tech and this episode. My name is Philip Gervasi, and this is Telemetry Now.

Avi, Joe, and Jezzibell, thank you so much for joining me again. It's been really great to record this series of podcasts with you, and I am looking forward to today's episode, focusing specifically on AI, artificial intelligence.

And, Avi, you especially, I know have no opinions about AI. Is that right?

None at all.

Yeah. Right?

But I would like to stay focused today, if you would, on the application of artificial intelligence to network operations, security operations, and IT operations more broadly.

Certainly, there's a lot to discuss on the grand scale of artificial intelligence in society, in the world, and for humanity. But I would like to discuss with you and hear your thoughts about its application, its uses in IT ops. And I think to do that, it would be helpful if we started off with hearing your definitions, kinda like your opening thoughts, of what artificial intelligence actually is. I know that folks define it in different ways and different terms and have different perceptions about what it is, from both the technical perspective and even kind of an operational workflow perspective. So why don't we start with that?

I was fortunate enough to be involved, not in doing my own AI research, but my uncle was doing AI research in medicine back in the eighties, which was before one of the several AI winters that happened before recent times.

Back then it was symbolic AI, Lisp machines, fuzzy logic, expert systems, and neural nets, which we still use. And, well, at the time, people said, oh, what's AI? I said, AI is the stuff that we don't yet know how to do and teach as a separate class in computer science.

And then once we know how to do it, we just do it. And the stuff that we don't know how to do, we keep pounding our heads against. And so it's been interesting to see the evolutions, and some techniques become so useful that they can be, you know, applied more generally. But nowadays, I just think of it as a set of techniques, the net result of which is generally helping to augment humans and make them more productive and powerful. And underneath, you know, is a bunch of technology that I think will include, and already does include, even more than the LLMs that have been so hot for the last couple of years. Mhmm.

Right. Joe, what's your take?

Yeah. To me, I think, much like Y2K or IPv6, AI is something that has now been grabbed by the mainstream media in a way that doesn't really have a correlation with reality. And what I mean is AI has existed in some form for a very long time. You think of things like Grammarly, or services where a computer or a technology is interfacing with the user and then interacting in an independent way.

And so AI has existed. I think when you look at it from an enterprise or mainstream media perspective, that's where Avi's points come in, where it's stuff we don't really know yet, and there's evolution there. And so the one thing I do know is it's not going away, unlike IPv6 or Y2K. It's here to stay.

It is a big part of businesses.

It's used in every aspect of people's lives, whatever the application is. And so being able to monitor it, to transport it, and to secure it is going to be a challenge for network and security operators for a long time.

I know everyone's looking for the equivalent of what Rob Systrom came up with as a business idea: Y2KY Jelly, which you could spread on your network and it would Y2K-proof it.

You know, everyone wants the AI magic they can buy and apply.

Right. Right. Well, that's the concern. Right? That there's both AI washing and marketing hype, the "let's change our branding on our website" kind of thing, and also the FUD, the fear, uncertainty, and doubt, as far as AI eventually developing into some sort of superhuman intelligence and taking over the world, which I personally think is laughable, at least at this stage.

And so I do believe, Avi, that you make a point that we're sort of rehashing and rebranding as AI things that we've done for a very long time. So when we're applying some ML models to do some, I'll say sophisticated, data analysis, though it doesn't even have to be that sophisticated.

Is that now AI? And if so, doesn't that mean we've been doing AI since we were sophomores in college, when you apply a linear regression model and all of a sudden you can make a simple prediction?

Now, certainly, that's different than, like, large language models and the sophistication we have with those. But I do wonder if you're correct that we're half rehashing some of the things that we've always been doing, both in sort of data analysis and even in security operations. I mean, I remember learning about heuristics and how we can identify patterns and then, you know, make some sort of intelligent decision to reduce the false positives and things like that that my SOC is coming up with all the time.
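To make that "simple prediction" point concrete, here's a minimal sketch, with made-up utilization numbers and scikit-learn assumed (nothing here is from the episode): a linear regression fit on past values, predicting the next one.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

hours = np.array([[1], [2], [3], [4], [5]])      # time as the single feature
util = np.array([22.0, 24.5, 27.1, 29.8, 32.2])  # made-up interface utilization %

model = LinearRegression().fit(hours, util)
print(model.predict(np.array([[6]])))            # "AI": predict utilization at hour 6
```

That's the whole trick: fit a line, extrapolate. Whether that counts as AI is exactly the question being asked here.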

I think the truth is somewhere in the middle. Mhmm. So, yes, there's a lot that is not LLMs, that people have been using for some time in most mature systems to do machine learning.

And, you know, the very interesting thing about LLMs is how fast they've been moving, and they're probably likely to continue to. Now, like with the people that, you know, engage in navel contemplation about whether we live in a simulation, I don't personally find it that rewarding to think about when we'll have AGI.

And I think it's also, you know, the thing that people get confused by is also the wonderful thing about LLMs, large language models. They look like they're thinking, but they're not. Mhmm. They give you things that you would not have expected as output, which is really intriguing, and which we have to think about how to harness.

But it seems very easy, and, you know, you go to say, oh, awesome. I will just ask it how to configure my network.

And, you know, you could have really big problems that way, because at least your own typos and thinkos are your own. But, like, now you're gonna debug something that's subtly wrong from something that isn't even thinking. It's just predicting with some randomness.

We're not there yet. So how do you tame these things? I think it's by a combination of these techniques and some good old human, you know, guidance.

But, you know, it's awesome that you can search your knowledge bases, and it's awesome that you can have code explained and help humans learn. But I think it all comes back to how we use these techniques to accelerate humans, not replace humans. And there's been a lot of progress on that in the last few years.

Yeah. Yeah. I just gave a talk about this, called "Large Language Models in Network Operations: The Human Factor," because it really is all about augmenting a human being, an engineer.

So, you know, you're talking specifically about large language models, and I'm glad you are, because that's, like, the thing that folks are usually referring to when they say AI. They're not usually talking about the more advanced, sophisticated AI and ML workflows, or even AGI. Although I do see some videos and articles from time to time on Medium and things like that. But generally speaking, we're talking about folks, you know, using ChatGPT or perhaps another publicly available foundational model.

Or if they're, you know, really nerdy like me, they downloaded Llama 3.2 on their laptop, and they're messing around with it. But, ultimately, that's what they're doing. They're using a model that is probabilistic under the hood. Meaning, like you said, it's just predicting the next item in a sequence, the next word in a sentence, so there's no inherent intelligence.

But, I mean, I say that while also seeing some of the results as very, very powerful. I mean, really neat. And I know that some experts are still trying to figure out where this emergent behavior is coming from within the hidden layers of some of these neural networks. They don't understand it.

There's something magical going on.
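As a toy illustration of "just predicting the next item in a sequence with some randomness," here's a tiny bigram model: it counts which word follows which, then samples the next word proportionally. Real LLMs use transformers over subword tokens, but the sampling idea is the same. The corpus is invented.

```python
import random
from collections import Counter, defaultdict

corpus = "the link is up the link is down the router is up".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1                        # count which word follows which

def next_word(word):
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights)[0]  # sample; no thinking involved

seq = ["the"]
for _ in range(5):
    seq.append(next_word(seq[-1]))
print(" ".join(seq))
```

Run it twice and you'll get different, plausible-looking sequences, which is the "predicting with some randomness" point in miniature.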

I would use the word, you know, there's the effect of intelligence in some ways. What there isn't is understanding, which is why I get sad when people say hallucination, because that implies cognition or thinking. Mhmm. Mhmm. And it isn't thinking. That was what some of the earlier techniques were trying to model.

The idea was, oh, you have enough neurons and you do enough stuff... but we're not really training LLMs to be exactly like the human brain. We don't really think that the totality of what we do is just pattern matching. We have thought and cognition and explanation.

And most of what people are talking about in terms of advancing LLMs is not being able to explain, I'll use a big word, the semantics of what this stuff is, but just what data input changed their predictions. That's the XAI version, you know, for LLMs. But because there are these really cool things that they can do at a baseline, if you're willing to accept maybe correct, maybe not, I think the real question is: how do companies that have datasets and models and humans harness the larger set of things that you might be surprised they can almost answer, filter out the, sorry, bullshit, and accelerate humans with confidence? That's the challenge for companies and vendors, and that's the fun part.
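The "what data input changed the prediction" flavor of XAI can be sketched in a much simpler, non-LLM setting with permutation importance: shuffle one feature at a time and measure how much the model's score degrades. Synthetic data, illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three synthetic input features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 0 drives the label

clf = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)                 # feature 0 should dominate
```

That tells you which input mattered, not what the model "meant," which is exactly the distinction being drawn here.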

I think that's my cue. And so, again, shameless plug. But what we found at Netskope is that AI consumption and use in the enterprise followed the same path as social media. So in the early days, five, eight, ten years ago, enterprises just blocked all social media.

Right? And then it became a useful tool that companies wanted to use. So then they had to use filters. And that's where Netskope allows enterprises to filter applications, but also the content within the applications.

A bank can use Twitter, but they can't actually talk about stock symbols and things like that. What we found in AI was that the enterprises are very unaware of the data sharing and what's going up into the cloud. We have a customer that was sending all of their Zoom calls into the cloud for notes, and we had them sending proprietary code base up there for editing. And so there definitely is, like, I keep using this word, but there's an evolution of this type of technology, where the enterprises are unaware of how AI is being used in their environment, and of the data loss, the IP loss.

You know, if you're gonna have FUD over anything around AI, it's the fact that you have a leak in your environment. And if you block AI altogether, then you're limiting capability. We use AI and machine learning in our product. We use it in our operations, and the value we get, the lift in feature functionality but also the man-hours replaced...

It's incredible. And so you're gonna have to figure out how to live with it, but also protect yourself, because, you know, these enterprises are unaware of the weapon that they have, you know, given their employees.

I have to assume it's also an attack vector, or rather a new attack surface, that some folks can use. You know, I was just talking to somebody recently who said, you know how, like, if you're not really that great with Python, you can just ask ChatGPT to create a Python script? Well, what if you are, like, an entry-level bad guy?

Now you can ask it to, like, write the Python script to create the ransomware, to do the thing that you might not otherwise be able to do. So it lowers the barrier to entry into, like, the criminal sphere. That just boggles my mind, because I don't think that way. You know what I mean, Joe? Right.

Yeah. Hundred percent. And, again, it's one of those things where if you're using a public, maybe less reputable, AI service and you're uploading data, your data is gonna be shared in that learning process, and people have access. And so the amount of phishing and spoofing and stuff we're seeing is incredible, not to mention, you know, the fraud that happens externally.

But like you said, it won't take much now for your script kiddie to go to an AI and ask it, hey. What data is available? What do you see? Where would you attack this?

Well, write me a tool, or, you know, a simulated email. I had an employer a few jobs ago basically give a bunch of approvals for paying money that wasn't real. And so it's gonna be rampant as a tool to be used for crime, and it's gonna be rampant for enterprises to manage their business. It's an incredible vector in the sense that nothing has ever been created that is so beneficial yet so vulnerable at the same time.

It's gonna be a hell of a show for the next so many years.

It sounds like AI is something that you can't live without, but you're also going to have a really difficult time figuring out how to leverage it to both drive your success and protect yourself from the challenges, the security challenges, that you may have. How should the executives think about this? Right? If you're an engineer, you may have one way of looking at it. If you are an executive, you have responsibilities to the company to both grow the company and protect it.

So how would one look at AI as a tech executive?

That's a great question, and I think the first step there is to be aware. Right? So you can't turn a blind eye to it and assume it'll figure itself out. You have to have a policy.

You have to have visibility into that policy, and then you have to have an understanding of the impact. And so our favorite thing to do at Netskope is to ask the CISO or the CTO or the CIO how many cloud applications they think they're using, and they say three or four, and we show them it's a hundred and fifty. Right? And so the same thing happens in AI.

If you ask a CTO or a CIO or a CISO how many AI interfaces they're using, they're gonna say, we use ChatGPT. And the reality is it's probably dozens, if not fifty. And so you need that visibility into the AI applications and how they're integrated into the workforce. My example earlier about the company that was using Zoom to record all of their internal calls: you're gonna have to have tracking and visibility into that, because when you get exploited or have a problem and your board or your shareholders are asking how you let it happen, you can't say, well, I didn't know it was happening.

And so visibility is the key, you know, being able to see what's happening in your shop.

Yeah. The good news is that there's a framework, so let's all hail and thank GDPR for the subprocessor reviews that security and privacy groups have led for years now, which the AI tools fit into.

The "what are those tools, and can they be captured" question, like social media, has probably run ahead of people realizing it. I can say I'm pretty sure that most executives are aware that they need to track these things down, the SaaS applications and all of these things, and have this. And, you know, even the SaaS vendors that were subprocessors got into some trouble. Some people definitely got their hands slapped for saying, oh, and by the way, we changed our policies, and we can use all your content, you know, to train our AI. That didn't last very long. I won't name any names.

So I think the good news is some of the frameworks that we've had in place for privacy protection have become frameworks that force us to review these things. And I know, you know, it takes work at a company large enough. We have all these things too that we have to, you know, track and review. And, no, you can't have that tool until we go and talk to them.

But then, yes, how do you actually get that estate and that understanding, and how do you apply policy to it? People have caught up and done a lot of work on that in the last year. So it's definitely top of mind for folks, and people are looking for services that can help them tame the complexity and impose policy.

But certainly, in spite of these security concerns, and we need to discuss those and understand those and look at those from a technical perspective, a policy perspective, and, Jezzibell, as you said, from an executive decision perspective, there must be some use cases that we can point to for using specifically large language models in network operations, security operations, and IT operations more broadly. Avi, would you agree with that?

Yeah. Absolutely.

Okay. Well, I mean, please elaborate. What do you believe are the specific use cases for LLMs? Wait.

There's more. Yeah. But wait. There's more. Right?

Yeah.

So, you know, we see a few really good use cases for LLMs. One of them is the copilot, the helping... I mean, one of the biggest things they've been great for is parsing lots of content and making it accessible in English, so natural language processing. And not just English, but any language. So if you build a copilot and use some of these models that are trained, you can also use them as translators and get, not full accessibility, but a lot. We're seeing a lot of our customers actually try to build this themselves, where they're trying to tie all their different tools in. So, for example, we have APIs we can expose so that when someone's trying to build the uber-copilot, they can actually ask us in language instead of API. All we're doing is turning it into API calls internally, but at least they don't have to understand the network widgets and words and all those things that are not in the generalized model.

Right.
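A hedged sketch of that copilot pattern: natural language in, a structured API call out. The endpoint and intent names below are hypothetical, not Kentik's actual API, and in a real copilot the classification step would be an LLM with constrained (e.g., JSON-schema) output rather than keyword matching.

```python
import json

def nl_to_api(question: str) -> dict:
    # An LLM would do this mapping in production; a keyword stub stands in.
    q = question.lower()
    if "top talkers" in q:
        return {"endpoint": "/query/topTalkers", "params": {"limit": 10}}
    if "latency" in q:
        return {"endpoint": "/query/latency", "params": {"window": "1h"}}
    return {"endpoint": "/query/freeform", "params": {"text": question}}

call = nl_to_api("Who are my top talkers right now?")
print(json.dumps(call, indent=2))  # the copilot backend then executes this call
```

The point is the division of labor: the model only translates intent; the deterministic API does the actual querying.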

You know, a second thing is explaining. So, if you're trying to learn something, I'll use, you know, maybe not a Kentik example, but we see this a lot for code, and it could be the same thing. You dump your configs in.

You can ask a question. Now, you have to realize it may not be completely accurate. But one of the things that it has proven pretty good for is, hey, I want to understand a config without going and reading all the manuals.

So this is an example of accelerating humans. Please explain to me, you know, what does this config line mean in this context? And, you know, it has been pretty effective at taking the knowledge base type stuff and the pretrained type stuff and the actual config and helping people. So that really can help. Now, would I say, LLM, please translate this config to Juniper config and just dump it into the router?

No. I would not. And it's really important, just like with GPS, you know, fifteen years ago, to view it as a precocious fourteen-year-old. If your spidey sense tingles that maybe something's off, you know, you should review it. Because, well, as I said, there's no understanding in an LLM, so it doesn't actually understand you.

But that still is a huge accelerator. I mean, I did a lot of work in the nineties to just try to explain all this BGP stuff, you know, prefixes and NLRI and all this, because, you know, it can be really hard to understand. And anything that accelerates humans can be really great. And then the third thing, and this is the one that hits us a lot, is it can be useful to try to pull together insights and potential root causes.

The only thing is, you really better have a system that's rooted in the data and the meaning and understanding to check whether it's spewing bullshit or not. Right. Or you could just waste a lot of time. But because there's emergent behavior, because it can effectively parse language and translate, again, I use the word understand, but I don't really think it does understand in that sense, it can be very useful for generating insights and analytics if you can tame it properly. An example of the way we would do it is to say, hey.

Show me the things that you think are interesting, and then we'd look at our actual topology database and say, is this stuff even related? Because sometimes, again, the LLMs will think there's something going on when there isn't. But it can give you some things to look at that you can use other techniques to winnow down, which turns out to be very, you know, interesting in helping people with analytics.
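A minimal sketch of that winnowing step, with an invented topology and invented LLM suggestions: keep a suggested correlation only if the two devices are actually adjacent in the topology database.

```python
import networkx as nx

# Invented topology: edge1 -- core1 -- core2 -- edge2
topology = nx.Graph()
topology.add_edges_from([("edge1", "core1"), ("core1", "core2"),
                         ("core2", "edge2")])

llm_suggestions = [("edge1", "core1"),  # plausible: directly adjacent
                   ("edge1", "edge2")]  # suspicious: three hops apart

for a, b in llm_suggestions:
    hops = nx.shortest_path_length(topology, a, b)
    verdict = "keep" if hops <= 1 else "needs more evidence"
    print(f"{a} <-> {b}: {hops} hop(s) -> {verdict}")
```

The LLM proposes; a system grounded in real data disposes. That's the "rooted in the data" check in its simplest form.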

Right. Yeah. There are certainly ways that we can mitigate hallucinations and ensure a greater level of accuracy. I say mitigate, not solve, because of the very underpinnings of how a large language model works. You know, you're talking about the quality of your training data.

And then after your training data, you're talking about the quality of the data that you're pointing to. You're talking about your choice of which vector database to use. Perhaps there's a problem of incorrect semantic proximity when you use your own RAG system. There are so many things involved. But like you said, it's iterative.

We're not, you know, relying on it to run our networks just yet, but it's certainly a helpful tool for a human being, an engineer, who's running a network and running a SOC. You know?
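For the retrieval step behind that "semantic proximity" worry, here's a minimal RAG sketch with fake three-dimensional embeddings (a real system uses a learned embedding model and a vector database): rank stored chunks by cosine similarity to the query, then hand the best ones to the LLM as context.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Fake embeddings; an embedding model would produce these in practice.
chunks = {
    "BGP flap runbook":      np.array([0.9, 0.1, 0.0]),
    "DNS outage postmortem": np.array([0.1, 0.9, 0.1]),
    "QoS design doc":        np.array([0.2, 0.2, 0.9]),
}
query_vec = np.array([0.8, 0.2, 0.1])  # pretend embedding of "why is BGP flapping?"

ranked = sorted(chunks, key=lambda k: cosine(query_vec, chunks[k]), reverse=True)
print(ranked)  # a bad embedding here is the "incorrect semantic proximity" problem
```

If the embeddings put the wrong chunk closest to the query, the LLM gets the wrong context, and no amount of prompting fixes that downstream.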

When are we gonna get to the, you know, "I need a pilot program for a B-212 helicopter. Hurry." moment, so, you know, I can get information? You know, you talk about having to be able to absorb information faster. And, you know, can I have it implanted? Can I have it downloaded? Like, at what point am I part of the Internet, and you're part of the Internet, such that it is no longer a thing we define as something you have to have some sort of connection to, a device to connect to?

And how does that affect us?

Yeah. It's interesting. Your question actually made me think of something, which was, I talked earlier about how the edge is blurred. Right?

How, between increased edge bandwidth and device power, the edge blurs. But now AI also blurs the line between capabilities. I think Phil talked about just some entry-level bad guy creating whatever. And so now you have bandwidth, you have a blurred edge, and then you have all the compute power you can think of, you know, for nineteen ninety-nine.

And so it is going to be very interesting from that perspective. Your question was more about how it integrates and how it becomes part of our life, and, you know, the chip gets embedded, I guess, if you will. But I think, definitely during our lifetimes, we're gonna see the merging of that, where AI will be just part of everything, and there won't be as much interaction. And then the capability and the access to things is gonna be unlimited.

It's inevitable, I think. You know?

Yeah. I just read a paper over the summer, so not just now, but fairly recently, called "Situational Awareness" that was making the rounds in tech circles, talking about the exponential curve and the orders of magnitude of improvement, just as an example, between, like, GPT-3.5 and then 4, in its ability. So there's no rule that says that we're gonna continue to follow that same curve.

But if we do, I don't know, Joe. That's a little bit concerning. That's an understatement. Yeah.

And I can tell you, just pulling it down to the physical layer, we're running out of power. Right? And so there's probably more money now being pumped into AI than at any time since the dot-com days, whether it's the technology, the interfaces, or the infrastructure.

I know a stealth public cloud company that's gonna build these massive GPU centers and power plants. And so it's gonna be affecting all aspects of our life, from physical infrastructure to interfaces. And so if you're an enterprise person and you don't have a strategy on how to protect yourselves from AI attacks, how to consume AI to advance your business, or, you know, how to make sure you're controlling your employees' use cases... I can see why it's a scary thing, why there's a lot of FUD around it. So, you know, you better start paying attention.

Yeah. I think the human factor is gonna be increasingly important on the security side. You know, I don't know that I agree with the assertion that you just need to assume that you've been breached, you know, as a baseline of security. I wouldn't say that, but there's definitely people that have been saying that, you know, that you can't prevent everything, at least.

And there was always the question of the insider, you know, how important is that versus the outside. But now, with the humans who are at the business, you know, it can be even harder to stay ahead of the bad gals and guys and, you know, understand what's real and what isn't: training, awareness, you know, just being out there and understanding. It's just another area of technology to track and advance. On the other hand, it makes the business more effective, and I think the services are coming about that are gonna help with this, and net, it will be a benefit.

I don't think a lot about AGI, but I think, you know, Christophe, who runs product for us, has been pretty firm: we've got at least another, you know, two, three, four cycles where, every six months, the net effectiveness of the LLM-based techniques to at least surface possibilities and, you know, mimic human behavior is gonna double. So we've got our own Moore's-law-like thing. I don't know if someone's named it.

And if you aren't taking advantage of those things, your competitors will be. And if you're not thinking about what those potential impacts will be, again, your competitors and the bad folks are. So I still think it's net exciting.

I think it can really, and I don't wanna talk poetically about bringing humanity into a new era, but it can help humans focus on what they're best at. It just requires a little bit more bullshit detection, you know, from all of us, and training around it.

So Yeah.

Yeah. I have to mention that when digital calculators first came out, it didn't mean that, therefore, no one knows math anymore and all math was solved. It was a tool in our hands, a human being's, like you said, Avi. And the same thing with, like, early spreadsheets. Like, what was the precursor to Excel? Like, Lotus... Lotus Notes.

Now we can do that... What was it called?

Lotus 1-2-3. Yeah.

Okay. Fair enough. Whichever one you wanna choose. But, certainly, that was yet another tool, like the calculator, that allowed us to do those things, and at scale. Yet we still have a lot of accountants, and I just got a letter for 2024 tax preparation. He charges a lot of money. So there is still that perspective that these technologies are tools in our hands to help augment us as, in our case, engineers, security engineers, network engineers.

But like with any new technology, any new tool, there is a concern about using it properly and safely, which, of course, Joe, I'm sure you would agree with.

I'll give you an example that I got from two angles, first from my father and then from my physics teacher, which was basically: you should know what the answer is before you ask the device.

And, again, LLMs can sometimes surprise us with this. But my chemistry, I'm sorry, my physics teacher explained, you know, if you get all the numbers right but the exponent wrong, that's the difference between a chemical reaction and a nuclear reaction.

And the same thing, you know, you should understand, within a factor of ten, the order of magnitude of what the tip will be. Even if you're slow to multiply, you should be able to figure out whether it's dollars or cents or whatever. And it's always been the same thing with the spreadsheets and with models: if the answer is surprising, you should think about it. But that also means we shouldn't be, like, you know, in Idiocracy and just depend on the computers to do everything.
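That rule, written as a toy function: sanity-check a computed answer against a rough mental estimate and raise an alarm if it's off by more than a factor of ten. The numbers are invented.

```python
def sanity_check(computed: float, estimate: float) -> str:
    # Flag any answer more than a factor of ten away from the rough estimate.
    ratio = computed / estimate
    if 0.1 <= ratio <= 10:
        return "plausible"
    return "order-of-magnitude alarm: recheck the exponent"

print(sanity_check(computed=18.0, estimate=20.0))    # a $20-ish tip: plausible
print(sanity_check(computed=1800.0, estimate=20.0))  # slipped a decimal: alarm
```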

Automation. I don't think we're gonna get into automation and intent in this podcast, but that is a big use of LLMs, as, you know, Phil, you did a presentation on. And that's really one of the big burning questions: when are we gonna have the "make it so" command? The opposite of RANCID.

The, you know, push the button, you know, "make it so, Number One." But that requires, again, knowing that you're not gonna subtly confuse things and make things worse. So I think we're getting there. I think these techniques are gonna be really helpful.

But, you know, I wasn't at AutoCon, but just looking at the presentations, it's better than a bunch of people banging on routers with wrenches, which was the nineties, but it's still a bunch of toolkits that you have to use and figure out for yourself. So it'll be really interesting to see how that evolves, you know, for the enterprise, beyond just, you know, lifecycle automation interfaces, systems of record, and things like that.

And so it's a great time to be alive and watching if you're nerdy and find this stuff fun.

Yeah. Yeah. And the thing is that it's not even the large language model that does that analysis and does that button pushing. That's just the human interface, like you were saying. It just makes it easier. The barrier to entry is lower.

But GPT and Claude and name-your-model can't actually do the sophisticated data analysis. You know? They're the semantic framework.

So I think in 2025, we may be having another discussion, Avi, Joe, Jezzibell, on, you know, agentic AI and how we can kick off a very sophisticated workflow without knowing anything about it, just using natural language and some kind of agentic system.

So, Joe, as we kinda wrap up here, what are some steps or ways of thinking that folks in technical leadership, and maybe even executive leadership, can take to approach this entire matter around using AI, which we've established can be a very useful tool for enterprises. Right? We kinda settled that. But how can we do it safely, securely, and mitigate risk as much as possible? I know you can't, like, completely eliminate it. What can we do to mitigate it and use it safely?

Yeah. That's a great question, and a lot of our customers and peers in the industry are trying to solve this. And I break it down into pretty much two simple steps. Right? The first one is visibility: visibility into what is being used, and where and how and all the things. So you need a tool with that interface to be able to understand.

And then the second is control, whether it's filter or block. Right? So it's not unlike your network. Right?

You have to know what's going through it, and you have to decide if you want it to go through it. And so, if you're an executive or a leader or somebody responsible for the consumption of AI within your enterprise, you need to make sure you have very comprehensive visibility into how it's being used, and then ultimately some controls: whether you're going to filter it, block it, limit it, and understand it. Because just like the Internet, just like social media, like anything, it can be a very useful tool, but also very dangerous. And we've talked a lot on this podcast about the scary things, the potential, the amazing things that'll be productive. But visibility and then ultimately control is, in the short term, the only way to really understand and protect yourselves from AI.

All the other use cases about taking over the world and all the bad guys, we'll leave that to Hollywood for now. But from an enterprise perspective, just focus on that visibility and on ways to control and steer users to the right services.
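Those two steps, sketched with invented log data (the service names and policy are hypothetical): first visibility, counting which AI services actually show up in egress logs; then control, applying an allow/block decision per service with default-deny for anything unreviewed. Real products do far more; this is just the shape of it.

```python
from collections import Counter

# Invented egress log: which AI services are employees actually reaching?
egress_log = ["chat.openai.com", "api.translate-ai.example", "chat.openai.com",
              "notes-ai.example", "api.translate-ai.example", "chat.openai.com"]

policy = {"chat.openai.com": "allow",   # sanctioned, behind DLP filtering
          "notes-ai.example": "block"}  # unsanctioned meeting-notes uploader

usage = Counter(egress_log)                                  # step 1: visibility
for service, hits in usage.most_common():
    action = policy.get(service, "block (pending review)")   # step 2: control
    print(f"{service}: {hits} hits -> {action}")
```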

Yeah. I definitely agree. Having an inventory, understanding, tracking, so that visibility and control. And training. Decide: is this gonna be something that you expect everyone to figure out on their own, which could get into people maybe playing with, you know, models and some corporate data that they shouldn't? So policy training, technology training, you know, is also important in this. And then the last part is, as I think Joe mentioned also, just make sure your vendors are part of this, you know, system.

So, both directly, what models and other things are you using, but also auditing your vendors. And I'm sure, we see this with most of our larger customers, enterprises and service providers who are enterprises in this regard, that they already have that built into their privacy and security review frameworks.

Yeah. Absolutely. And I think it's important to remember that a lot of the best practices and security workflows and data hygiene that we've been doing for years and years in IT in general, with regard to data in motion, the policies and workflows that we have in place in our organizations, the oversight, and how we deal with storage and encryption, all of those things are foundational and applicable to any new technology that comes down the road, including AI right now.

Of course, AI has certain new challenges that we have to solve, but I think having that foundation is still where we need to start. Now, with that, Avi, Joe, Jezzibell, we're gonna close out. It's been a pleasure to have you on again in this customer series of Telemetry Now, and I look forward to the next one soon. Now, to our audience: if you have an idea for a show, or you'd like to be a guest on Telemetry Now, I would love to hear from you.

You can reach out to us at telemetrynow@kentik.com. So for now, thanks so much for listening. Bye-bye.

About Telemetry Now

Do you dread forgetting to use the “add” command on a trunk port? Do you grit your teeth when the coffee maker isn't working, and everyone says, “It’s the network’s fault?” Do you like to blame DNS for everything because you know deep down, in the bottom of your heart, it probably is DNS? Well, you're in the right place! Telemetry Now is the podcast for you! Tune in and let the packets wash over you as host Phil Gervasi and his expert guests talk networking, network engineering and related careers, emerging technologies, and more.