
Privacy Please
Tune into "Privacy Please," where hosts Cam and Gabe engage with privacy and security professionals around the planet. They bring expert insights to the table and break down complicated tech into something everyone can understand.
S6, E233 - DeepSeek, AI Innovation, Privacy Concerns, and Cybersecurity Revelations
The episode examines the implications of a recent hacking incident involving the Chinese AI company DeepSeek, which claims to outperform competitors on cost and performance. We discuss the risks associated with AI tools, the necessity for better governance, and the broader impacts of AI on cybersecurity and data privacy.
• DeepSeek's emergence as a significant player in AI
• Performance claims that challenge established tech firms
• Consequences of the recent hack on industry perceptions
• The dangers of unregulated AI usage in corporations
• Governance challenges surrounding AI adoption
• Personal experiences using AI-driven coding tools
• Future predictions on AI's role in security and privacy
All righty, then. Ladies and gentlemen, welcome back to another episode of Privacy Please. Cameron Ivey here, alongside Gabe Gumbs. It's 2025. We're already almost through January. This is crazy. One twelfth of the year down.
Speaker 2:Twelfth of the year down.
Speaker 1:Not a good way to count the year, but twelve is the number of months in it, in case you were wondering. I'm sure you knew that. Just think one month down, and we're checking our bingo cards.
Speaker 2:What have you had on your bingo cards so far for the year?
Speaker 1:well, I definitely had ai on there, but, uh, I don't think I had this story.
Speaker 2:I think we all had AI on there, right? It was the centerpiece of the bingo card. But this week we got a new AI number called, and I certainly didn't have this on my bingo card. So what do we got?
Speaker 1:So a company called DeepSeek. It's a Chinese AI company. They recently launched their R1 large language model, so it's basically OpenAI's competitor, and they were recently hacked.
Speaker 2:They were recently hacked, which is just one of the many reasons they've been in the news this week, right? So the first one was they announced that they have better performance. And not just announced, but demonstrated they had better performance and could do it less expensively. In fact, they published a bunch of papers on it, right? Like they could get better performance than ChatGPT for far less cost, and that sent large sectors of the economy tumbling a bit, like the stock markets and futures around things like the chips needed to power this new AI gold rush. Folks like NVIDIA took a solid dip yesterday, a bunch of tech folks took a solid dip, as investors were worried that, hey, you're telling me these guys can do it better and faster and cheaper? Like it's less filling and it tastes great? I thought we had the market cornered on less filling and tastes great. Turns out they don't.
Speaker 1:That's a significant number, though. They're claiming $5.6 million, compared to the billions spent by their competitors. That's a ridiculous order of magnitude.
Speaker 2:Right, that's not just like, oh, we shaved 20% or 50% off. That's shaving orders of magnitude off the cost. That's a lot of zeros.
Speaker 1:That's a lot of zeros.
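[Editor's note: a quick back-of-the-envelope check on that claim. The $5.6 million figure is from the episode; the multi-billion-dollar comparison figure is an assumption for illustration, since no exact competitor spend is stated here.]

```python
# Rough order-of-magnitude comparison of DeepSeek's claimed training cost
# versus an assumed multi-billion-dollar budget for an incumbent lab.
deepseek_cost = 5.6e6    # $5.6 million, as claimed in the episode
incumbent_cost = 5e9     # assumed ~$5 billion; the real figure isn't stated here

ratio = incumbent_cost / deepseek_cost
savings_pct = (1 - deepseek_cost / incumbent_cost) * 100

print(f"~{ratio:.0f}x cheaper")               # ~893x under this assumption
print(f"~{savings_pct:.2f}% cost reduction")  # ~99.89% under this assumption
```

Under that assumed baseline, the claim is roughly a thousandfold reduction, which is where "that's a lot of zeros" comes from.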
Speaker 2:That's a lot of zeros. And if you were in the AI shovel-selling business versus the mining-for-AI-gold business, like the NVIDIAs of the world, you were happily just mass producing as many shovels as you could so that people could go dig for AI gold. And now all of a sudden you're hearing that the price of gold has tumbled. No one might want your shovels.
Speaker 2:Actually, a better analogy might be that these guys came along and they've got a bulldozer. You're out here with shovels, and they're like, what? I can make a bulldozer for half the price of your shovel. That's not good if you're in the shovel business. It's great if you're in the bulldozer business.
Speaker 1:So, gabe, what does that mean? Speaking of a company that comes in? I mean, this is pretty typical for any new technology, right? There's always someone that tries to come in and says they can do it cheaper and better and faster. Yeah, so this is not anything out of the norm in terms of this is just new now because they were hacked. But do you do? We know why they can do it cheaper?
Speaker 2:So I don't personally know why, and I haven't seen any of what I'd call good explanations of it yet. I'm kind of still digging through that myself. But for everyone else that wants to dig in alongside, they did publish a lot of information around how they do it, and so I'm going to go through that information firsthand myself so I can get an understanding of it, and then meander out from there and see what others say. But "I am not certain" is the answer. The information is out there, though, so check it out.
Speaker 1:So, with a company like this coming in hot claiming these things and then getting hacked, what does that mean for the industry right now?
Speaker 2:Well, I think there's a significant privacy problem with them getting hacked. LLMs are, as the name suggests, large language models, and the large part about them is they already have a lot of data that they themselves have hoovered up. But what they also have is all the information that people have been putting into them, right? There's been a lot of talk over the last 12, 24 months around good corporate governance around your employees using things like ChatGPT, because you don't want your corporate secrets inside there. Let's just assume that people are bad corporate citizens, because they are, though not intentionally. Like 99% of the time, it's usually out of necessity. That's generous, yeah, but nonetheless I'll be generous today. Most of that naughtiness is driven by a need to be productive, and when tools like GPT come along that allow you to be exponentially productive, it's really tempting to put all that corporate governance aside and say, fuck it, I want the help, I want the tool, I want to dig for gold faster.
Speaker 1:Don't we think that governance was on the bingo card this year as well? I mean it always is.
Speaker 2:That's a good point.
Speaker 1:I think you're right.
Speaker 2:It is a good point. I don't know that I had governance explicitly on the bingo card, with the exception of machine identity governance, which is just another fancy way of saying
Speaker 2:that there is a lot of work currently being done to really solve a lot of the machine identity problems that computing currently has. So not just humans and their passwords; the cloud in particular has a lot of identities. Every system, every service, you name it. And they're so granular, there are so many different options, and then there's every different platform. There's a huge problem there, and so from a governance standpoint, that certainly was on the bingo card. From a human governance standpoint, yeah, that was always there. How do you effectively manage your company's ability to keep employees productive with new tools while not exposing sensitive information? I probably don't have to tell most of our audience this, but you folks have seen some of these toolings show up in the marketplace. I think we've had a couple of guests on for whom these were the problems they were solving; they created solutions to help keep corporate information out of these types of tools.
Speaker 2:But an LLM platform getting hacked is just problematic for those that use it, right? You may have put more in there than you know you should have, and you figured it's all good. Screw it, I don't care, it's just OpenAI. They're just going to use it to study and make the model better, right? But what happens when they get hacked?
Speaker 1:Right, yeah, that's the human side of us thinking, whatever, they're not going to use it against me.
Speaker 2:Right, right. But what if they do? What if the leopards eat my face?
Speaker 1:Yeah, just because you're a big cat doesn't mean you're not gonna get eaten by another big cat. It's a catty cat world out there. Big cats don't mess around, even a little bit. I mean, they still play like cats, but they do.
Speaker 2:They do. Big dogs are just cows. Big cats are murderous creatures that can you milk big dogs? I mean, according to meet the fuckers, you can. You can milk cats, milk cats.
Speaker 1:Milk cats?
Speaker 2:You can milk cats, apparently.
Speaker 1:Interesting. That's great. Yeah, any of our listeners that may know a little bit more about this, we'd love to hear from you. And if you know a lot about it and want to come on the show, we'd love to talk more about it on an episode. We're going to keep a close watch on how this progresses. Obviously AI is top of mind for 2025, and it's definitely interesting what this is going to do down the line. Gabe, I know you were working on something recently. We'll switch gears, unless there's anything else you wanted to touch on.
Speaker 2:I was working on some AI stuff myself, the intersection of AI and cybersecurity. One of the things I've been doing is trying to make myself even more productive. I spent almost all my career as an ethical hacker in different facets, if you will. Sometimes that skill set was largely as a defender, other times as a builder, and for a large number of years as just a breaker. So I still enjoy breaking things, and I haven't kept up with as much breaking as I used to. It's kind of one of those byproducts of moving into, I guess you could call it, more management-type roles and less pure individual contributor roles, but it's a thing I hold near and dear. And so I came across a platform that a friend introduced me to the other day.
Speaker 2:I'm not going to shamelessly plug it, just because there's tons like it and I don't want to bias anyone one way or the other. There's lots of really good AI coding assistant platforms. There's some, like Cline, that work at the command line interface level, which is really cool, so they interface with both your IDE or text editor and your GPT models. There's stuff like Copilot. There's lots of them.
Speaker 2:There's tons of them out there, but I was using one recently and I was blown away by this one in particular, at how good it was at prototyping things that I wanted to build.
Speaker 2:And so, in particular, what I've been working on is prototyping some attack tools based on some new theories that we've been playing with over at Myota, specifically. And so one of the things that is on the bingo cards is that AI is definitely going to make cyber attackers more adept. In the last few weeks of playing with AI tools to create better attack tools, I'm just amazed at what I've been able to do in a short time. I might release some of these things, or I may just show them publicly as proofs of concept. I'm not super interested in littering the sidewalk with a bunch of rusty razor blades. But really, yeah, AI has gotten that good.
Speaker 1:Could that be something we do on the show? We could totally do that on one of the episodes. Show it on screen and kind of just go through it.
Speaker 2:It's a great idea. Yeah, okay. Years ago, we were planning some other privacy research. This was right before COVID hit, and I was working on a project that never finished, collecting open-source data from around town and trying to triangulate sensitive data from it. This would be a good follow-on to some of the tooling I started working on then. What I'm getting at, though, is more of the AI bingo topics. AI is definitely not going anywhere. I don't think anyone's going to argue with that. But where it shows up in our security and privacy world, I think we're going to be more and more surprised by its ability, and this can continue to be done cheaper and faster with things like DeepSeek. I don't know where the upper limits are yet. I've heard a lot of people already try to put where they think the ceiling is on generative AI. I think they might be wrong.
Speaker 1:I'm not a betting man, but if I were to bet I would definitely go with your gut on that one.
Speaker 2:Yeah, I'm willing to bet, just based on the news that DeepSeek announced, that we don't know what the actual ceiling is yet. I'm willing to bet we just don't.
Speaker 1:We just don't, no, and I don't even think we ever will. Here's my analogy: it's like the ocean. We don't know everything about it. The internet works very much like that.
Speaker 2:You pee in the ocean, and you can't remove the pee. The internet is very much the same way.
Speaker 1:That's true. Actually, the internet is like the ocean. Yes, yeah that's better.
Speaker 2:There are dark, dark depths to it that you should avoid. There's a couple of really cool playgrounds where you can go hang out with some cool folks and maybe sip on a rum and coke. There's some lawlessness on the high seas.
Speaker 1:Yeah, yeah.
Speaker 2:A lot of pirates just kind of sailing around doing what they will. Lots of people just, "I'm the captain now." And there's a lot of trash in it. There's a lot of trash just floating around.
Speaker 1:Is that true and this could be my ignorance, but is it true that some countries just dump their trash into their oceans they don't have like landfills and stuff? I wouldn't be surprised.
Speaker 2:Off the top of my head. I mean, at least one country comes to mind off the top of my head. Can I guess it?
Speaker 1:Yeah, Is it India?
Speaker 2:It is Sorry India, sorry yeah, is it India? It is Sorry India, Sorry yeah that would make sense. I don't know how widespread the problem is, but I've seen some problems reported. Yeah, I wonder why that is Infrastructure Lack of yeah. Yeah, it's a shame. I mean there's a why is after that answer and there's another why is after. There's probably like seven more why is's? Before you get to the real answer. But the surface answer is infrastructure.
Speaker 1:Yeah, that's for sure. Well, to shift gears here. We're recording this on Privacy Day, so happy Privacy Day. Happy Privacy Day, folks.
Speaker 2:Yeah, and Privacy Week. Looking out for your privacy.
Speaker 1:Anything else we want to touch on.
Speaker 2:No, I think we'll keep our eyes peeled on the DeepSeek stuff, and as I make a little progress on our own little AI project, we'll get into that a little bit more. We've got a new domain we're launching, yeah, Problem Lounge.
Speaker 1:Some new looks, hopefully soon enough and some new looks hopefully soon enough and some new guests and lots of new things on the horizon 2025 onward and upward 2025. Let's do it alright, we'll see you guys next week.