Telemetry news now.
Welcome to Telemetry News Now. We have some really exciting headlines for you today, starting miles and miles beneath the waves, deep in the ocean, and going all the way into outer space. Yes. That's right, folks.
Here at Telemetry News Now, we bring you the entire spectrum, the entire gamut of tech news crossing the entire atmosphere of our very Earth, or something like that. And now today, it is just me and Justin, my cohost. Leon is not with us today, so we wish him the best until he returns. And so, hopefully, Justin and I can muddle through our headlines today and bring you some semblance of decent commentary and reporting.
So let's dive into our first headline of the day.
First up this week, we have an article from Network Computing talking about Intelsat and SoftBank partnering to create a hybrid network that integrates satellite and 5G terrestrial services.
The gist here is that this allows a customer to roam from 5G, when they have a good signal coming from a tower using the terrestrial network, to the satellite communications that Intelsat provides using both low and medium Earth orbit satellites. So I find this really interesting, to be able to combine those two types of network communications to bridge the gap when there's loss of cell phone signal. I know that's something that I struggle with on a daily basis myself. I live in a subdivision where my 5G cell phone carrier doesn't have the best signal, and so I have to roam onto my Wi-Fi when I'm in my house to be able to get cell phone signal. And I'm sure I'm not the only one in America, or even globally, dealing with that.
So Yeah.
Absolutely. You know? And to me, it seems like there was a shift in the headlines in the past year or two from, you know, we need to get access to high speed Internet to everyone's home. And so there was a focus on rural communities.
There was a focus on underserved communities, and understandably so. And I'm sure that's still going on. But as far as what I see in the news, as far as headlines are concerned, there is more focus on access to high speed Internet from anywhere for anyone. And so there's a focus on mobile devices, on 5G.
And now this combination of 5G and satellite. We saw Starlink being deployed down to North Carolina in the wake of Hurricane Helene, and presumably to Florida as well after Milton.
And so this convergence of 5G, of satellite, of whatever is necessary to get people access to high speed Internet anywhere in the world at any time, whether you're in the middle of the ocean or in a rural area, and then having it be high quality, high speed. That's really neat, and I think this is part and parcel of that initiative that's been happening, partly from a commercial perspective. Obviously, there's a lot of money to be made there, but also from getting information out to individuals. So there's certainly a societal benefit and cultural benefit to moving in this direction and to seeing this convergence of technologies for the same ultimate goal.
Next headline is Aryaka bringing CASBs into its unified SASE fold, brought to you from NetworkWorld just a few days ago on October eighth. The story here is that Aryaka has enhanced its SASE platform by integrating a cloud access security broker, a CASB, for the purpose of making cloud application control and visibility better. Now the update simplifies management by integrating security policies at the network layer, and the platform is also gonna include Aryaka's interactive product experience, IPX, if you're familiar with that, and that's for testing SASE in nonproduction environments.
And, of course, what kind of update would we have if there wasn't an AI involved? There is a new AI Perform feature, AI Perform is the actual name of the feature, to optimize AI workloads. I'm not sure what that means as far as AI workloads going over your SD WAN network. I really don't know, but that is the update from Aryaka.
And, of course, that's a big part of everybody's plans for the future. So they also plan further improvements with AI Secure, another product name, to enhance access control, threat protection, and data loss prevention for AI traffic. Justin, I don't know exactly what that means. I'm a big fan of Aryaka.
I've used them in the past, but I don't know what this means specifically for AI traffic, AI workloads. I don't know.
Yeah. It does seem a little bit like AI washing potentially in this case. Right? Like, they did talk about how they can help with transmitting the large volumes of data that AI is gonna process, which, obviously is gonna be important the more AI models are being trained and so forth.
But how they're actually providing additional value there was a little bit unclear from the article. But, you know, I think it's interesting to be able to combine the security stuff that comes with the CASB, the cloud access security broker, with the secure access service edge, the SASE. Right? So collapsing networking and security down into one discipline for an enterprise IT shop, being able to have your networking and security all built into one, not having to buy and manage firewalls, not having to buy and manage CPE for each one of your branch locations, getting all that as a service delivered from, you know, Aryaka in this case, I think does provide a lot of ease of use and a lot of streamlining of the way the network architecture is built.
So I think this is definitely the way of the future for sure. The other thing I thought was interesting in the article that Aryaka had called out was that they do run their own network. They have their own points of presence, so they can deliver good network performance for this service.
So it's one of the advantages that they bring to the marketplace.
It's an advantage. I mean, I agree with you there, but I don't think that's their main advantage, personally, because other vendors, you know, consolidate service chaining. This is not, like, a new thing, in my opinion. This is not a groundbreaking thing.
But I will say that I am a fan of Aryaka. What I think their real benefit is, and you might not want Aryaka for this reason, is their offering that, sure, they still have an SD WAN box on-site, and they load balance across links, and that's very SD WAN like, or it is SD WAN. But what they do is then bring your long haul traffic over their backbone so they can guarantee whatever SLA you work out with them. And that's great.
And they have the infrastructure globally to do that and offer an awesome service. Other SD WAN vendors are gonna just have some kind of overlay over the public Internet, and then they're gonna load balance across links by looking at the quality of that link and whatever kind of testing they do with probes. And so you get your, quote, unquote, QoS over the public Internet. So Aryaka is different in that way.
As far as this service chaining thing, I think this is just something they needed to do because that's how you stay competitive. I mean, others are doing the same thing. Where you have your branch offices, I want the same stack of vendors, and I want my service chaining to be as easy as possible. I want one box on-site.
I'm gonna point to a CASB, do my scrubbing there, and then go out to the Internet. So this makes a lot of sense for Aryaka to do, in conjunction with the backbone that they offer as part of their SD WAN solution. Overall, I think, it's a very compelling solution for folks.
What I don't understand exactly is what they mean by this whole AI thing because, I mean, the nature of how AI workloads work is that they operate in whatever bespoke data center, and you have your clusters of GPUs all talking to each other within the data center. Where is the traffic going over an SD WAN in this case? I understand the concept of a reverse CDN and all of the telemetry going from all your IT devices, like your smartwatch and your Tesla and all those kinds of things, going back up to data centers so that these models can be trained on that. But I don't think that's what we're talking about here, and I don't think that's the impetus behind this. So I'm not quite sure what Aryaka is referring to with their AI solution.
Well, we also are thinking about AI workloads being the training data, right, which does require large volumes of data. And typically, like you said, you keep that within a single data center because those GPUs need the data from a previous run to do the training and so forth. But there's also the query against that. Like, if you go to your web browser, you go to ChatGPT, and you ask a question of ChatGPT, the interaction between your web browser and the front end of ChatGPT, that's not nearly as large a volume of data.
Right? So it could be that it's that too. I don't know. We need more details to really understand what they're talking about here.
Yeah. Yeah. Like, what is AI traffic? What do they even mean by that?
So moving on.
Mhmm.
Alright. So next up is an article from SiliconANGLE titled, Arista Networks doubles down on CloudVision. So Arista Networks has updated its CloudVision platform to unify network management across three different network environments, those being data centers, campuses, and cloud environments.
So, you know, some of the key features that they're bringing to the market with CloudVision are real time insights, automation, and support for AI driven analytics. So keeping with the AI theme here, they are applying some AI models to analyze the traffic running across the various Arista networks. I think most of us, when we think about Arista, think of the equipment, the hardware, the network switches and so forth that they provide. But, you know, CloudVision has been around.
I think it was originally launched, like, maybe twenty fifteen. I'd have to double check that fact, but, you know, it's been around for a long time. But they're continuing to invest in it, doubling down on bringing more features to the market. So, Phil, I'm curious to see what your take was from this article, what you pulled out of it.
Yeah. I've always been very impressed with Arista and how they've embraced, you know, the open concept, not the open concept of your house like I have, but the open concept of your network operating system, and being able to integrate and do that. Really, I feel like, among that group of the big networking vendors, they always seem to be at the forefront of that.
And, you know, I think there's also an understanding that they get, and other vendors do as well, that the network, or, you know, the system of application delivery, is a lot of different components.
And to stay focused on just data center switching is really missing a lot of the holistic perspective of what a network is and what application delivery is all about. So there's a lot that you can do there to expand into these other areas, like CloudVision purports to do, and have a view into security, a view into automation services, perhaps some sort of AI driven analytics, like you said.
And, yeah, and then become more of a single tool, or at least like a consolidation of multiple tools, so you have fewer tools to manage your entire network. I mean, they talk about specifically reducing operational complexity. Right? Mhmm.
And I think that is really important.
However, I think there's a lot of operational complexity inherent in a single tool to do that. Like, you're talking about decreasing the operational complexity of my entire end to end with this tool. That means that you have to buy this beast of a tool that has, you know, access and view and management oversight into my public cloud, into my data center, into my WAN and SD WAN, into all my security tools that I'm using.
I think every single vendor has tried something like that, and then the solution, whatever it happens to be, sort of goes away after a few years. Yeah. That was my experience when I was working for VARs. I've employed many of these types of solutions, and then they, you know, get played around with a little bit, and then they go away.
Well, you know, first of all, when I talk to customers, I hear really good things about CloudVision. I think they may be closer to the goal here than a lot of the other vendors.
And part of it is that it's a SaaS solution. Right? So they're delivering it as SaaS as opposed to an on prem, you know, install it in your data center and have to manage and maintain it and patch it and all that kind of stuff. So that helps with some of it.
You know, the other challenge I always see, and maybe this is what you ran into, you know, at the VARs, is, like, how many companies do you work with that are single vendor? Like, the only vendor in their data center is Arista. I'm not saying they're not out there, but I think a lot of the environments are, you know, multi vendor. Right?
So they may have some Cisco stuff. They may have some Arista stuff. They have an older architecture, a newer architecture. So they're always in this kind of, like, refresh cycle where, you know, one particular vendor's orchestration system doesn't cover their entire data center footprint.
And that's a really difficult problem for a hardware vendor, Arista or anyone else, to solve: how do I deploy config, and how do I do all of the observability across a hybrid Cisco and Arista environment, just as an example?
You know, one thing I was thinking, though, is I've worked in a lot of networks over the years, SMBs, large businesses, government, you know, municipal government networks, that kind of thing, all the way up to gigantic global networks. And I know that it's the thing to say that, you know, everybody's going multi vendor. I don't really see it that much.
Maybe in the service provider space more, but, I mean, even in one particular organization that I worked in, I was a consultant to them, a global network. Let's just say that the organization has over three hundred thousand employees. You'd know the name. I'm not gonna say it, though. But, yeah, they were, like, ninety nine point nine nine percent Cisco. There were literally just a few HP chassis switches, like, out and about. But their security, their routing and switching, was all Cisco.
Mhmm.
And then I've been in organizations where it was all HP or something like that. Now I will say that I have seen organizations be multi vendor in the sense that each kind of block of their network was a vendor, but not necessarily the same vendor as another block. Like, for example, all my firewalls are Palos. All my switching is Cisco.
Right? And my data center is all Arista. So my closets are Cisco, my data center is Arista. So I have seen that in a multivendor sense. And if CloudVision is going in that direction where they're gonna literally manage that entire broad, sweeping scope of networking, that's awesome. I mean, that's awesome, but good luck.
I think that's gonna be a very difficult thing to do, though. Right? Like, I mean, to kinda answer your question, when I worked in a service provider environment, we would potentially have a different vendor for different areas of the network. Like, our BRAS edge would be one vendor.
Our core would be another. And then as it came time for a refresh cycle, we'd go to RFP and let the vendors compete. But we didn't have, like, one market where the edge was one vendor and another market where it was a different vendor, because it was just too complex to manage and maintain configs and all that kind of stuff. And I think that's pretty common across both service provider and enterprise.
You know, it helps you avoid vendor lock in a little bit, but yet still keeps the simplicity of a single vendor. But, you know, when we had orchestration systems, like, you know, I won't pick on any one vendor, but it's really difficult for them to keep up with the config changes of their competitors. Right? Like, you know, let's just talk about CloudVision for a second, since they're the one we're talking about.
Like, is Cisco gonna call them up and say, hey, by the way, we changed the syntax in our Nexus product for the, you know, VLAN configurations in this version of code? No, they're gonna have to figure that out on their own, right? So it just makes it really complex for the hardware vendor to manage an automation system that's gonna work in a multi vendor environment. You almost gotta turn to a third party for that to really do it well, one that can partner with each one of those vendors.
It's just, you know, the way it is.
Yep. Yep. Makes sense. Okay. Moving on.
Getting into one of the hotter topics of the day. OpenAI releases o1. That's the first model that they say has reasoning abilities. Now, this is not, like, yesterday's news. This did happen a few weeks ago now.
But being a biweekly show, we only got to it today. So I read through some articles from The Verge, the OpenAI website. I have an entire feed of AI news coming to me, so I read this all over the place in recent weeks, and then I have personally experimented with o1 quite a bit over the last couple weeks. Interesting stuff. So to summarize it for you, OpenAI has released a new model called o1.
And if you have a subscription, you can start to use it. I think it'll show up in the menu as o1-preview. There's another version, a cheaper version, called o1-mini, and they're focusing on reasoning and solving complex problems faster than human beings can. So the models are aiming to improve things like coding, math, and multistep problem solving through what looks to me like reinforcement learning, and then what they call a step by step chain of thought approach.
And so what you do is you put into the prompt your thing, your question, whatever input, and it takes longer to respond than you might be used to if you're using, like, you know, GPT-4 or GPT-4o. Right? So it's gonna take much longer, and the idea is that it's really thinking through what's going on. I use the word thinking with air quotes because there's no thinking going on, but it is approaching your prompt from many different angles mathematically.
Right? And then taking that much extra time is, you know, supposed to give you a better answer, a more accurate answer. And the results show that that is what's happening. It is far more accurate in those kinds of contexts, like math and generating code, than its predecessors and a lot of the other models out there, even the other foundational models, the big ones like Claude and Llama.
So it's interesting stuff. One thing I noticed, though, is that you can actually expand, in the window, all of its chain of thought steps.
And so you can actually see how it worked through it.
Yeah. It's pretty neat. It's pretty neat. I will say that it's kinda weird because sometimes, you know, it's taking, let's say, thirty two seconds.
Right? Something that long. Right? Maybe not quite that long, to answer a very, very simple question.
And then you expand it to look at its chain of thought, and you see, wow, it really wasted an incredible amount of time and effort going through this entire complex chain of thought to come up with a very basic answer to a very basic question.
So, you know, it's still interesting, though. And the idea is that this whole process is iterative, and they're moving forward to, number one, being more accurate, Mhmm, but also being more sophisticated in understanding nuance and then responding to that. So, yeah. You know, though there are advancements in accuracy and reasoning, o1 is slower and more expensive than its predecessors. So, you know, I don't know what the ultimate goal here is for OpenAI, to develop a human like AI, whatever that means.
But probably, you know, I think in the more immediate future, it's, I think it's safe to say, more autonomous systems that are capable of more advanced decision making and more nuanced interpretation of language and things like that. So I personally don't use o1-preview that much for the day to day stuff. GPT-4o is what I use, and I don't use it every day. But, you know, when I use ChatGPT specifically, I'll be using GPT-4o.
But I will say, and then I'll throw it to you, Justin, I will say in comparing it to other models that I use, like Claude or Llama, and I use Llama quite a bit lately, it still does outperform those other models. So in spite of these, you know, inaccuracies or things that it's still working through, it's pretty cool.
But I I do think it's important to reiterate one thing.
I've been using terms like thinking and reasoning, and I may have said intelligence. I don't know. But large language models aren't intelligent. They don't think. They're built on something called probabilistic mathematical models, and that means that basically they look at your prompt, your input, whatever it happens to be, and then, based on the body of training data that they were trained on, they can best predict what the most likely sequence of words should be in response to your prompt.
So there's no actual, like, intelligence, inasmuch as we know what intelligence is. And I'm not trying to downplay what large language models are and what generative AI is. This is happening at great scale and at great complexity, where it's generating very accurate results now with o1, and it's really cool, really impressive, like I said. And I think there's a lot of great usefulness, especially for our industry, the tech world. And, you know, I follow this industry very, very closely because I think there's a lot of value here. But I think it is important for us to understand exactly what the technology is.
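To make that "most likely sequence of words" idea concrete, here's a toy sketch. Real models score every token in a huge vocabulary with a neural network; the two-word contexts and the hard-coded probability table below are entirely made up for illustration of the principle.

```python
# Hypothetical probability table: given the last two tokens, how likely
# is each candidate next token? A real LLM computes this with a network.
probs = {
    ("the", "capital"): {"of": 0.90, "city": 0.08, "gains": 0.02},
    ("capital", "of"): {"Italy": 0.55, "France": 0.40, "Oz": 0.05},
}

def next_token(context):
    """Return the most probable next token given the last two tokens."""
    candidates = probs[tuple(context[-2:])]
    return max(candidates, key=candidates.get)

tokens = ["the", "capital"]
tokens.append(next_token(tokens))   # picks "of"
tokens.append(next_token(tokens))   # picks "Italy"
print(" ".join(tokens))             # the capital of Italy
```

That's the whole trick, repeated token by token, just at enormous scale: no understanding, only conditional probabilities learned from training data.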
Yeah. I mean, I figured we were gonna start to see a lot more announcements from OpenAI on new models. If folks haven't listened to the last episode, one of the things we covered there was that they've recently changed from a nonprofit to a for-profit organization, yeah, presumably to get more funding to be able to do more R and D, so we should start to see more iteration on things like this.
So, you know, I figured we'd see stuff like this. I found it really interesting, one of the videos that OpenAI has on their website where the engineers that developed this were actually talking about what it is and why they developed it. They had the analogy that reasoning in a ChatGPT, GPT-4 style model is really similar to, like, if you ask someone, what's the capital of Italy? They don't have to really think too hard to say it's Rome. They either know the answer or they don't.
Thinking about it longer and harder is not likely gonna change their answer. Right? But if you ask them to solve a complex mathematical, let's say algebraic, equation, they're going to have to spend a little time thinking about that. They're going to have to break it down into smaller steps, figure out, you know, how to solve for the variable, and then come back with the ultimate answer.
And so that's the analogy they use for reasoning. I really like that. You know, I think that sounds like, from the way you were describing your interaction with it, what's really going on under the hood: it's breaking a bigger, more complex problem down into smaller pieces, analyzing each piece, and then aggregating the answer back together. So, yeah, I think it's just really fascinating to see the various different approaches.
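The decompose-then-aggregate idea can be sketched in a few lines. The equation and the helper below are hypothetical, just to show "smaller steps, then the ultimate answer" for the algebra example, solving 3x + 6 = 21 as explicit intermediate steps rather than one opaque jump.

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c step by step, recording each intermediate step."""
    steps = []
    rhs = c - b                      # step 1: subtract b from both sides
    steps.append(f"{a}x = {rhs}")
    x = rhs / a                      # step 2: divide both sides by a
    steps.append(f"x = {x}")
    return x, steps

x, steps = solve_linear(3, 6, 21)
print(steps)   # ['3x = 15', 'x = 5.0']
```

The recorded steps play the role of the expandable chain of thought Phil described: each small, checkable piece is produced before the final answer is assembled.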
I think a lot of times when we hear about AI, people just assume AI equals large language models, and it's just responding in, you know, a question and answer kind of interaction. But there's actually a lot more to AI than just LLMs.
Mhmm.
Oh, yeah. For sure. And I think that's probably where the real value is, especially for network operations, our business, or IT operations, whatever. The real value isn't gonna be in just LLMs. Right? It's gonna be using LLMs as the interface, the natural language interface, to do other cool stuff. Like, an LLM can't do, like, advanced statistical analysis on your data.
It can look at the results of the statistical analysis that you did with, like, NumPy or whatever, you know, Python library, and then give you, like, a summary and insight. And you can interact with data really, really well because it's a semantics framework, a language framework. It, air quotes now, understands what you ask and then gives you back, you know, hopefully, a correct answer. And I think that's why folks are really excited about o1, because, you know, I'm willing to wait those extra ten, twenty, thirty, forty seconds to get a really good, correct answer.
Now, the thing is here, though, we are talking about GPT, which is a foundational model trained on the Internet. Right? Mhmm. And so it can't answer questions about domain specific knowledge, like your EMR system and your patient records, or, for us, like flow data from my network and, you know, whatever streaming telemetry and syslog and all that stuff. So regardless of how cool o1 is, I do believe that a lot of the value is integrating this large language model front end, perhaps even a small language model because we don't need that kind of training data. Right?
And then integrating it with, you know, like a RAG system or whatever other kind of system to interrogate that relevant dataset, perhaps getting into using agents to then say, you know, if the correlation coefficient comes out at point eight, suggesting a strong correlation, do this action. You know, so you can start incorporating agents where it feels like there's all this cool decision making. But, ultimately, the large language model is just our interface with the data to sort of get that going and generate the Python script to do the thing. You know? So really cool stuff, though. I love talking about it, and I could very easily keep going for the rest of our show. So we should move on to the next headline.
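As a rough sketch of the agent pattern Phil describes, here the statistics come from NumPy rather than from the LLM, and a simple rule acts on the correlation coefficient. The metric values and the alert function are made up for illustration; this is the kind of Python script the LLM front end might generate.

```python
import numpy as np

# Hypothetical network metrics: link utilization (%) and packet loss (%).
link_utilization = np.array([20, 35, 50, 65, 80, 95], dtype=float)
packet_loss = np.array([0.1, 0.3, 0.9, 1.4, 2.2, 3.1])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(link_utilization, packet_loss)[0, 1]

def maybe_alert(r, threshold=0.8):
    # The "agent" decision: act only on a strong correlation.
    if r >= threshold:
        return f"strong correlation ({r:.2f}): open a ticket"
    return f"weak correlation ({r:.2f}): no action"

print(maybe_alert(r))
```

The math lives in NumPy, the decision lives in a plain threshold rule, and the language model's job would only be to assemble and narrate pieces like these.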
Alright. And with that, the next article is from Network Computing as well.
Gonna dive into the undersea cables. From outer space to undersea cables.
Now we're talking. I see what you did there.
Yeah.
I see what you did.
Smarter subsea cables to provide early warning systems. Scientists are proposing equipping subsea telecommunications cables with sensors to monitor ocean currents, temperatures, and seafloor motion.
This would help, enhance understanding of climate change, earthquakes, and tsunamis, especially in vulnerable regions like the South Pacific.
This was really interesting. I was at a conference not too long ago in Australia, talking to one of the attendees, and he works for a company that does a lot of undersea cable work connecting the various cities in Australia, as well as connecting Australia back to other parts of the globe. And he mentioned this to me. That was the first time I heard about them embedding these sensors into the cables themselves. Mhmm. They're doing it a little bit selfishly, because it does help them get early warning signs when they have damage to their cables, which is a big problem, as we've talked about on the Telemetry Now podcast in the past. Whether it's undersea earthquakes or damage from anchors from boats, there are a lot of, you know, potential risks to undersea cables.
And it's expensive and time consuming to repair them. So the sooner you can find out you've got a problem, the sooner you can dispatch a ship out there to fix it, because there are only so many of those on the planet. So, you know, they're selfishly investing in these sensors to help solve their own problem, but it does provide benefits to science, where they can sense changes in the temperatures under the ocean, which can affect the coral. They can detect, like I said, the earthquakes and changes that would affect the marine life. So there are a lot of broader scientific research use cases for this type of data, which I find really fascinating.
Yeah. And, you know, doesn't NOAA already have, like, buoys all over the world, floating around gathering, like, water temperature, air temperature, barometric pressure? I don't know what kind of telemetry they gather, but I know they're gathering a bunch of telemetry and then, you know, feeding it into whatever models they're using to study climate, to study weather patterns, and all that kind of stuff. So it makes a lot of sense. And we already have this infrastructure lying at the bottom of the ocean where you can, you know, add sensors to it and have that much more data to hopefully, you know, make that much better predictions and have that much deeper an understanding of our climate and the weather. So, moving on. TikTok, our favorite social media platform, was sued by thirteen states and DC, accused of harming younger users, a headline from Reuters from October eighth.
So thirteen US states and the District of Columbia here in the United States filed lawsuits against TikTok accusing the platform of harming young people by promoting addictive behavior through its content.
I don't know what they're talking about. Social media is not addictive whatsoever. That's ridiculous. No.
I'm just joking. Yeah. I'm sure you can cut the sarcasm with a knife coming through your earbuds or your speakers right now. The lawsuits claim TikTok targets children with addictive software and misrepresents its content moderation capabilities, prioritizing profit over user well-being.
I've never heard of that happening before with a with a company, but I guess, you know, there's a first time for everything.
The lawsuits seek financial penalties, while TikTok denies the accusations, calling them misleading and expressing disappointment over the legal action instead of collaborative efforts to address industry wide challenges. So they are trying to point here at the fact that social media industry wide is a thing, like, why are you picking on us? I think that's clear in the response from TikTok and their parent company, ByteDance, I assume.
Yeah. I mean, this has recently become a passion of mine. I recently read a book that was talking about this very thing and how, you know, social media and sort of the, like, context switching that is inherent in social media, the short form videos and scrolling and all of that kind of stuff actually, creates new neural pathways in our brains. It actually kind of rewires our brains to adjust to that way of behavior.
And it's bad enough for us as adults. It's part of the reason I got off of Facebook many years ago and kinda limit my social media down to just LinkedIn and what I have to do for work. But, you know, it's really a problem for our children whose brains are still developing. I have a daughter who's gonna be twelve in a couple of weeks and been really thinking long and hard about how much I allow her to do as far as social media goes.
I've long been reluctant to turn her loose on the Internet because that can be a dangerous place, but I think social media just takes it to the next level. Like it says, it's not just TikTok, with this article specifically, you know, talking about TikTok, but I think, you know, just social media more broadly is a bit of a problem. And, you know, really what I think this article is talking about, and part of the reason that they're bringing these lawsuits, is there aren't a lot of options, you know, for parents that are trying to protect their children from these dangerous addictions.
The age restriction that the social media companies put out there is literally just a click through. You just have to acknowledge that you're that age or go in and set your, you know, your birthday to prove it. It's like, oh, it's not that hard for a, you know, a ten year old to scroll back and say I was born in nineteen seventy eight and bang, I'm old enough to be on this website. Like, there's no real checking of the ID like you'd have at a bouncer at a bar or something.
Right? So how do you how do you really protect kids on social media? There's not a lot of options.
Yeah. Yeah. Yeah. And, hence, the claim that TikTok misrepresents its content moderation capabilities.
But I will say we're calling out TikTok. Right? And TikTok kinda fires back and says, hey. Hey.
Hey. What about the rest of the industry? This is an industry wide challenge. Sure. And and and they're right when they say that.
Nevertheless, we called out TikTok, and I wonder if this is a cover for getting closer to TikTok and getting a handle on TikTok in the United States without outright calling them a national security threat and saying that there is data exfiltration going on and all these other things because that is a growing concern among, cybersecurity professionals.
Mhmm.
And, we've talked about, TikTok and so other social media, but we've talked about TikTok on the main podcast, Telemetry Now, several times and how, the company does that and for what purpose.
So I wonder I wonder if this is a cover to, get a handle and get some control over the TikTok deployment. I say deployment like a network engineer. The the use of TikTok, among just regular citizens in the United States. So I'm not sure.
I think it's gonna be definitely both. Like you said, I mean, all of the social media companies collect a large volume of data, you know, presumably for ad targeting, maybe some other things. It's harder to trust TikTok given the relationship that it has with the parent company ByteDance and the Chinese government and that there aren't strict well, I'll do this in air quotes, strict privacy laws in place in China the way there are in the US. They're probably still, you know, not fantastic in the US, but we know they're looser in China. So there's a lot of skepticism in the industry and for sure, among regulators, about what TikTok and their parent company are doing with data. So I'm sure that's part of it as well, although that's not really called out in this article.
No. No. But I I like to read between the lines, Justin. That's kinda my thing.
Alright. Last but not least for this week is an article from NPR talking about the Justice Department calling for sanctions against Google in landmark antitrust cases.
The article really focuses on Google's, agreements with companies to be the search engine of choice, calling them monopolistic and anticompetitive, but they go on to say their suggestion for solving this problem is to break Google up. So I think this will be an interesting one to keep an eye on.
We're starting to see a lot more, lawsuits, as we just talked about in the previous article being filed against big tech companies, especially social media and online companies. So I think it'll be interesting to see, where this one lands. Curious to to hear your take, Phil.
Well, I mean, from a from an antitrust perspective, it's not like it's anything new. There was a there was an antitrust case against Microsoft back in the late nineties, nineteen ninety eight. And so this idea of trying to restore competition in an industry where one particular vendor, in this case, Google or Alphabet, is becoming so dominant that it precludes competition and it undermines our free market. I get that. And I'm and I'm, you know, I'm a free market person, and I believe in private businesses having the freedom to do what they want and then growing and becoming dominant in an industry.
But Mhmm.
There does come a point when you're like, jeez. This is actually detrimental to society now. And so that's when it starts to get gray. But I do wonder if this is more about, like, what they're doing with, with AI now, and this is, in anticipation of that.
I I'm just thinking out loud. I mean, I don't know. And, you know, that that might be the concern that underlies, this antitrust, initiative. Yeah.
I was gonna say the article does talk about how Google's doing that. Right? They're using search engine results to train Gemini to train their own AI. So that's actually one of the, antitrust things that the Justice Department is bringing as part of this lawsuit is is exactly that. So I think you're absolutely right. That is probably the way of the future for search engines in general or online behavior.
Yeah. And that that's actually kinda concerning. I mean, think about it. Let's say that, like, you know, search engines as we know them start to disappear and and are replaced with just you. You put in your prompt and then you get your answer from Google's AI or whatever other AI. Right?
And so you are sort of locked into whatever answers those algorithms and models and companies choose to provide you. I mean, we sorta got that now with search engines in the sense that they, you know, they give you the links that they give you. But you still have some semblance of choice. Maybe maybe that's just a an illusion of choice.
I don't know. That's a that's a topic for another episode. But, certainly, you know, you can, go down whatever rabbit hole, which is very different than, that little blurb that you get at the top now. You know, like, when you Google something and then you get, like, the Google AI response.
Imagine that's, like, all you have moving forward. So, interesting stuff for sure. So moving on to upcoming events, we have NANOG coming up in, October twenty one to twenty three in Toronto.
NANOG is a favorite event of ours, and our own Justin Ryburn, who I'm speaking with today and you're listening to, will be, will be addressing the crowd along with Doug Madory, the infamous Doug Madory, our director of Internet analysis. Justin, what are you gonna be speaking on at NANOG?
I'm gonna be speaking on BGP Flowspec. So Okay. Looking forward to that. Great. Great.
We have DevOps Days Boston. That's over in my neck of the woods coming up October twenty one to twenty two. We have the PA NUG. That's the Pennsylvania NUG in the Philadelphia area.
I think it's in King of Prussia. I'm not sure. October twenty fourth. If you haven't attended a NUG, I highly recommend that you do, and I have been to the PA NUG once, already this past summer.
It's fantastic.
We have ONUG in New York City, the AI Networking Summit, they're calling it, October twenty three and twenty four.
I will not be attending that one, but there's a lot of interesting stuff going on there. I see some of the speakers and some of the topics. Security Field Day, part of the Tech Field Day organization, is October twenty three and twenty four. So check out that live stream.
We also have OHNUG. That's different than ONUG. Sorry for doing that. The other one was ONUG, o n u g.
This is o h n u g. This is, the Ohio networking user group that's in Cleveland on November seven.
And then we have, around that same time, NFD thirty six, November six and seven. And last but not least, AutoCon two coming up at the end of November, November twenty to twenty two. And to just throw it out there, Justin, myself, and a couple of other colleagues, Steve Meuse and Mike, will be leading a workshop on network observability while we're there.
And, yours truly will be giving a talk on, large language models and their role in network operations.
And that's the news for today.