AI tools are making it faster and easier than ever to find personal details about anyone, often turning harmless online information into real-world risk. In this episode of Today in Tech, host Keith Shaw speaks with Chris Wingfield, a former military digital targeter and now SVP at 360 Privacy, about how generative AI is transforming online surveillance. They explore how scammers, stalkers, and even corporate actors use AI to weaponize public data, and what individuals and businesses can do to protect themselves. From Google searches to Zillow listings and LinkedIn profiles, your digital trail is more dangerous than you think.
Keith Shaw: If you thought having personal information on the internet was bad enough, the bad guys are now using AI to compress what once took days into minutes.
Through automated profiling, digital reconnaissance just got a turbo boost, and that's bad news for anyone trying to protect people, assets, or infrastructure. We're going to talk about these issues and more on this episode of Today in Tech.
Keith Shaw: Hi everybody, welcome to Today in Tech. I'm Keith Shaw. Joining me on the show today is Chris Wingfield. He is the Senior Vice President of Strategic Innovations at 360 Privacy. Welcome to the show, Chris. Chris Wingfield: Thanks, Keith. Appreciate you having me on.
Keith: And I'm going to ask if that's your real name; we're going to get into that. You were one of the first people I met who actually calls himself a tinfoil hat person. Chris: Awesome, yeah.
Keith: So let's talk briefly about what you do now at 360 Privacy. But before that, you were in the military doing a lot of this digital... what do you call it? Digital targeting? Chris: Digital targeting.
Keith: So tell me a little bit about what you did for the military. Chris: Yeah. In a former life, I was a digital targeter for the intelligence community.
I started on the linguistic side of the house (I speak several languages) and then moved over to signals intelligence, which is the exploitation of devices. I spent a lot of time overseas doing both tactical and strategic intelligence work for the intelligence community.
About three years ago, I left and began working in the private sector. Keith: Are the tactics the same now? Or were they better in the military, without giving away any secrets, obviously? Chris: Yeah. I'd say the methods or methodology are essentially the same.
If you can target someone around the world, you can target someone here. Arguably, it's easier to target people within the United States because privacy laws are weak and personally identifiable information is so easy to find.
With just a credit card, or even a free Google search, you can find all the PII you need. Keith: Okay, I want to set the table a little bit before we talk about how AI is now creeping into this.
Can you give a brief overview of open-source intelligence? Sometimes people use the acronym OSINT. Do people actually call it "Oh-sint"? Chris: Okay.
Keith Shaw: Talk about what that process is. What is digital targeting? And where would people go to find a lot of this information? Obviously, the internet's been around since the early days of ARPANET, but it's really been consumerized for what, about 30 years now?
Chris Wingfield: Yeah, 100%. And if you look at the data broker space, that's really where a lot of this personally identifiable information comes from. When you do a Google search, what people think of as "open-source" intelligence often includes things like social media.
Some people even argue that the dark web is open-source, because really, anybody can access it. You used to need special tools, but now you can just download the Tor Browser and you're in. So it really encompasses a lot.
The data broker space, for example, is expected to be worth around $440 billion by 2032. That's how much money is being made selling your data. When you look at the top data brokers, you're talking about credit bureaus, and everyone listening is probably affected by their credit score.
That data trickles down from highly credentialed business-to-business platforms like LexisNexis, TLO, or Thomson Reuters CLEAR, all the way down to public people-search sites like BeenVerified and Whitepages. These reports contain the same kinds of data: your relatives, phone numbers, physical addresses.
Basically, everything someone would need to target you or your network. Keith: And a lot of that information wasn't on the internet before, but now it is, because everything's been digitized.
Back in the day, if you were a private investigator, you had to go look through public records, phone books, or even visit libraries. Now that's all online, right? Chris: Yeah. Not just probably; it absolutely is.
We used to have to physically case a house to see where the doors were, where the windows were, get a layout. Now I can go to Zillow.com or Realtor.com, or even YouTube, and I can digitally case a house.
I always tell people: you're just one Google search away from a digital threat becoming a physical threat, if that's the intent of the actor. Keith: Yeah. Google Maps and Google Earth, for example: I can see a picture of my house.
I remember the first time I looked it up, it was still painted the old color. Then we repainted it and I figured, "They'll never come back out here; it's too remote." But apparently, they did. Chris: Or they got the new image from a satellite. Yeah.
Keith: I'm not saying you knew my house changed color before I told you... but maybe you did? Chris: I might have. Keith: Before the show, we talked about whether you'd dox me on-air. But thankfully, you're not going to. Chris: Not on the show.
Keith Shaw: Let's talk about real estate listing photos. If someone is selling their house, they're likely to have professional interior photos posted. That ends up on sites like Realtor.com, right? Chris Wingfield: Absolutely.
If you're buying or selling a home, you should get it into the contract agreement that those images will be taken down through the MLS once the sale is done. Because if they aren't removed, they stay up and proliferate across sites like Realtor.com, Zillow, Redfin, and others.
Now you're also dealing with platforms like YouTube, where real estate tours live forever. You can try to contact the video creator and ask them to take it down, but there's no guarantee. I recently worked with a very well-known celebrity, for example.
They had done a video interview from inside their home showing off a redesigned closet. What I told them was: "You own an old house in this city.
The window and door sizes are unique to that architectural style." Even though they updated the interior, they didn't move things like the HVAC or change the window placement.
From that interview footage alone, I was able to compare the closet's window to a Zillow listing, and despite new paint, it was a match. Keith: That's wild. Now let's pivot a little. There are clearly good reasons people might use these digital techniques.
Can you talk about the "good guy" use cases?
Chris Wingfield: Yeah, so when we talk about the good use cases, it really comes down to your intent.
If I want to understand someone's digital footprint from the perspective of an adversary, but I'm acting as a good guy, I can help identify vulnerabilities and suggest ways to reduce exposure.
That means looking at things like social media settings, account recovery methods, or even how Google contributions leave breadcrumbs about everywhere you've been. You might not realize that something like your company bio could be revealing. Does it mention you're married? Have kids? Where you live? What you do?
Bad actors can take all of that and exploit it. But there are also plenty of good actors, like corporate security teams and privacy consultants, who use the same techniques to lock things down and reduce risk. Keith Shaw: And of course, law enforcement uses this for investigations.
If you're trying to catch a criminal, you use the tools available. Chris: 100%. Even realtors might use credentialed data platforms like LexisNexis to vet who's showing up for a house tour.
So there are lots of use cases, even outside law enforcement or government, where having access to this kind of data helps. Keith: I'll also add journalism. Investigative reporting, background checks for stories; there's overlap there.
And from a corporate perspective, it's great for security and risk management. We actually did an episode recently about insider threats, where someone was using similar tools to investigate employees who might be spying. Chris: Absolutely.
I think the big takeaway is: yes, hackers and scammers are using this, but it's also about how professionals can use it to improve defenses and raise awareness. Keith: Right. And I remember that email scam that was going around a few months back.
It said something like, "I installed software on your device. Pay me $2,000 in crypto." The scary part was that it also listed your home address, phone number, and a photo of your house. Chris: Exactly. That's when the scam gets really personal.
When someone sees their street view photo or home address included, they panic. They're embarrassed and scared, and they pay. That's the human reaction. Keith: I don't remember if I personally got one of those emails, but I definitely read about it.
Chris: Those types of scams go back to around 2017 or 2018, but back then they usually didn't include personal information. Now, they do. And that's what makes the emotional impact so much stronger. Keith: What do you tell clients in that situation?
Chris: I always say: take a weird or suspicious sentence from the message and search it on Google. A lot of the time, you'll see it's a known scam. It's really about educating and empowering people to verify these things, rather than just reacting emotionally.
Keith Shaw: Let's pivot into generative AI now. That's clearly the accelerant in all of this. How is AI changing what used to take hours or days? Chris Wingfield: Yeah, it's a game-changer.
Take Anthropic, for example. They released a report in April about how bad actors used Claude in March to create malware. One user didn't have technical skills, but still created a working attack vector using the model.
That's what AI is doing: flattening the curve between highly technical people and amateurs. But I also want to say, the good guys can use it too.
We tend to look at AI through a dystopian lens, but tools like Claude, ChatGPT, and Grok can all help defenders "red team" their own systems and prepare for what an attacker might do.
Keith: So someone might say, "Hey, I've got 3,000 VoIP numbers I want you to look at." And your response would be: "Let's first figure out why that's a problem in the first place." Chris: Exactly.
If attackers are using VoIP numbers to apply for fraudulent loans, don't chase each number; plug into an API that detects VoIP and require a valid, physical phone number instead. It's about fixing the root problem, not just reacting to symptoms.
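The root-cause fix described here (gate signups on line type instead of chasing individual numbers) can be sketched in a few lines. The lookup-response shape below is an illustrative assumption; real carrier-lookup services such as Twilio Lookup or Telnyx return richer payloads, and the blocked-type policy is a design choice, not a standard:

```python
# Sketch: reject applicants whose phone resolves to a non-physical line.
# The "lookup_result" dict stands in for a carrier-lookup API response.

BLOCKED_LINE_TYPES = {"voip", "prepaid", "unknown"}  # illustrative policy

def is_acceptable_number(lookup_result: dict) -> bool:
    """Return True only for numbers that resolve to a physical line."""
    line_type = lookup_result.get("line_type", "unknown").lower()
    return line_type not in BLOCKED_LINE_TYPES

def screen_applicants(applicants: list[dict]) -> list[dict]:
    """Keep only applicants whose phone passes the line-type check."""
    return [a for a in applicants if is_acceptable_number(a["phone_lookup"])]

applicants = [
    {"name": "A", "phone_lookup": {"line_type": "mobile"}},
    {"name": "B", "phone_lookup": {"line_type": "voip"}},      # rejected
    {"name": "C", "phone_lookup": {"line_type": "landline"}},
]
print([a["name"] for a in screen_applicants(applicants)])  # → ['A', 'C']
```

One API integration like this removes the whole class of fraudulent applications, rather than reacting to 3,000 individual numbers.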
Keith: And AI just makes that whole process faster? Chris: Way faster. When generative AI first came out, everyone thought: "Well, at least we'll still know it's a scam if it has bad grammar." But now it's fixing that.
And more than that, it's pulling in personal details (photos, job info, company names) and crafting much more convincing phishing attempts.
Keith Shaw: I mean, it used to be that you'd hire a private investigator to follow someone or case a house. Then we moved into the 2000s, when the data broker industry started booming.
Now, instead of learning advanced Google search techniques, or "Google dorking," anyone can just ask a generative AI model to do the work for them. Chris Wingfield: That's right. And it only takes six or seven minutes.
You let the AI handle it, and it comes back with a basic profile or report. That's enough for a threat actor to pivot from digital to physical quickly. Keith: What else is AI doing now that we couldn't do before?
I mean, a lot of this data was already on the internet, but now it just feels like AI speeds everything up. Chris: Yeah, the core data was always there. The difference now is automation and accessibility.
You can just ask the AI, "Find this person," and it starts pulling context. We're also starting to see the rise of autonomous agents: systems where you assign tasks and they work in the background, chaining together smaller tools to carry out larger objectives.
Keith: Almost like assigning your own team of micro-assistants? Chris: Exactly. Think Langflow or agentic AI; there's often a "middle manager" agent coordinating the others. But the effectiveness really depends on how each model was trained and how it presents personally identifiable information, or PII. Take Claude, for instance.
It's trained using a method called "constitutional AI," where it justifies whether a request is good or bad. ChatGPT uses reinforcement learning with human feedback, so it's more binary: good or bad, period. Then you have Grok, from xAI, which is much more open and uncensored.
Its whole premise is: "If it's on the internet, you should be able to see it." Keith: So every model handles these privacy and ethics decisions differently. Chris: Right, and how you prompt them really matters. You can't just say "Find Keith Shaw." You need to provide context.
Our research team once got a model to write malware, not by asking directly, but by describing a sysadmin scenario and prompting it step by step. If you learn how to prompt properly, you can make yourself a harder target and better understand your vulnerabilities.
Keith: So most professionals are probably using multiple AI systems, like ChatGPT, Claude, and Grok, all at once? Chris: 100%. Some platforms, like Perplexity.ai, let you choose which model to use for each query.
You can say, "Use Grok for this," or "Use Claude for that." Each one is better at different tasks: advanced reasoning, arithmetic, coding, and so on. Keith: But in these search scenarios, accuracy really matters.
You don't want just any result; you want the correct one, especially if you're looking for a specific person. Chris: Exactly. For example, if I'm going on a podcast, I might prompt the model: "I'm about to speak with Keith Shaw.
Here are a few public links I found; can you build me a full profile?" That gives the model enough grounding to work from. The real power comes from what we call "pivot points": starting with something small like a phone number or address, and expanding from there.
AI takes care of the tedious groundwork, so I can dive deeper, faster. Instead of spending an hour, I spend five minutes confirming everything on breach forums, public records, etc.
Keith Shaw: I remember when generative AI first came out, late 2022, right? I tried it and asked, "Who is Keith Shaw?" And it was terrible. It had no idea who I was. Chris Wingfield: And now? Keith: Now it probably knows a lot more.
And what's scary is, I'm also a ChatGPT subscriber. So it knows me, because I've been feeding it more and more data in our conversations. Sometimes I even upload a photo of myself. I've had it draw pictures of what I think it thinks I look like. Chris: Yeah.
Keith: I was doing that stupid baby filter thing recently. You upload a photo and ask it to generate a baby version. You can't say "make me a baby," because of the guardrails, but if you phrase it right, it still works.
And here's the weird part: it drew the back of my laptop exactly like the photo I gave it, from one of my podcasts. And when I asked it to make me look like a 10-year-old, it added a Global Tech Tales sticker on the laptop.
I never asked it to do that. It just knew. Chris: That's because it remembered your previous inputs and associations. Keith: Yeah. It freaked me out a bit.
Chris: And that brings us to an important point: you have the training data cutoff (Claude's, for example, is October 2024), but once it starts pulling in live search data from the internet, it can begin connecting dots even faster.
It can pull transcripts, podcast appearances, webinars, anything that's public. When I target myself, I ask: "Have I ever mentioned my city of residence? Did I ever say something specific in an interview?" A lot of these models can find that if the content is online.
Keith: So part of the strategy is understanding how visible your information is to search engines and to AI. You've got to know what's out there so you can limit what attackers can use. Chris: Exactly. We always say: reclaim the pivot points.
What's helping an adversary build a picture of you? What digital breadcrumbs are you leaving? Start there.
Keith Shaw: Let's talk about what companies can do. What's the first step in protecting employees and data against this AI-enabled profiling? Chris Wingfield: First, identify what data shows up in search results. What of that data is being indexed by Google? What is being presented by AI?
Let's say you find your information on BeenVerified or Whitepages. If it appears in Google search results, you can take action.
Google has a program called Results About You. If you're logged into a Gmail account, you can upload three versions of your name, three addresses, three emails, and three phone numbers. Every 24 hours, Google will scan and show you what it finds.
You can then request removal with a click. It won't remove the data from the source, but it will eliminate it from Google's index. Keith: That's already a huge help. Chris: Yeah.
And once you know where the links are, you can go to the source site, like Whitepages or BeenVerified, and submit removal requests there. If you chip away at five or ten of those a day, it becomes manageable.
Keith: But what about business records, like LLC filings? What if someone used their home address when registering a company? Chris: Good question. That's trickier, because those are considered public records: things like state databases, OpenCorporates, or SEC filings. Google won't remove those.
But what you can do is update the registered agent information with a new address, and then ask Google to re-index. It won't erase what was there, but it'll reduce what appears in search results moving forward.
Keith: But people just assume, "Well, it's on the internet; there's nothing I can do." That feels like the default mindset. Chris: It's the #1 thing I hear: "It's public record; what can I do?" But there are ways to minimize risk.
Even when your data is leaked to the dark web, there are steps you can take to protect yourself.
Chris Wingfield: If your info is on people-search sites, and you find it in Google results, you can start by requesting removal from search. Then go to the root: remove it directly from the data broker sites.
But let's say you've registered an LLC with your home address, and that's showing up in OpenCorporates or on your state website. Google won't remove that because it's considered public record.
So in that case, change the registered agent, update the address, and then ask Google to re-index so the old data doesn't surface as easily. Keith Shaw: What about something like SEC filings? Chris: SEC records are permanent.
If your name or address is in there, it's going to live on the internet. That becomes part of your baseline exposure, a key piece in your personal risk score.
This is why we tell clients: the vast majority of attackers, whether low, mid, or high sophistication, are going to start with a search engine. And what we're seeing now is fewer people using Google and more using tools like ChatGPT.
Search engines and Wikipedia are trending downward. AI interfaces are trending upward. Keith: Right, because now people don't need to understand Boolean logic or use special syntax. You just ask the AI to do it. Chris: Exactly. People used to pay for access to LexisNexis or BeenVerified.
Now, if I can just ask ChatGPT or Grok for that same information, and get it faster, I'm going to do that. Keith: So if you're a company, protecting your people is protecting your company. Especially those in high-visibility roles. Chris: 100%.
Start with your physical security and cybersecurity teams. Ask: what's the digital footprint of our executives, or customer-facing staff? Who are people likely to target?
Then work to remove any pivot points (email addresses, home addresses, names of spouses or children, job details) that could help an attacker build a dossier. Keith: Do most people even realize they can get this info removed? Chris: That's what surprises me most. People assume, "It's public.
It's too late. I'm screwed." But there's actually a lot you can do. And we haven't even talked about the dark web yet.
Keith Shaw: Yeah, let's go there. So if your data shows up on the dark web, like in a breach, what are your options? Can you do anything? Chris Wingfield: Absolutely.
I apply the same playbook to the dark web that I do to the open web or social media. You start by devaluing the data before it ever hits the dark web. Let's say your phone number gets leaked.
If it's a VoIP number or a burner number, no big deal. Same with credit cards: if you use virtual cards through a service like Privacy.com, you can shut them off in seconds.
If you're using alias emails, like iCloud's "Hide My Email" or Proton Mail's aliases, then a breach doesn't really hurt you. The idea is to pre-expire your sensitive data. Keith: But Social Security numbers are harder to deal with, right? Chris: They are.
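The alias-email idea can be made concrete. A minimal sketch, assuming Gmail-style plus-addressing (dedicated services like Hide My Email or SimpleLogin generate fully random aliases instead); the mailbox shown is a placeholder:

```python
# Give every service its own address: a leaked alias both identifies the
# leaker and can be retired without touching your real inbox.

def alias_for(service: str, mailbox: str = "keith@example.com") -> str:
    """Build a per-service plus-address from a base mailbox."""
    user, domain = mailbox.split("@")
    tag = service.lower().replace(" ", "-")
    return f"{user}+{tag}@{domain}"

def leaked_service(leaked_address: str) -> str:
    """Given an alias found in a breach dump, recover which service leaked it."""
    local = leaked_address.split("@")[0]
    return local.split("+", 1)[1] if "+" in local else "unknown"

addr = alias_for("Photo Share 2008")
print(addr)                  # → keith+photo-share-2008@example.com
print(leaked_service(addr))  # → photo-share-2008
```

One caveat worth noting: plus-addressing is trivial for a scraper to strip back to the base mailbox, which is why randomized alias services devalue the data more thoroughly.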
If your SSN was leaked, say, in the AT&T breach where background check data was compromised, you didn't do anything wrong. But you can still take action. The biggest thing you can do? Freeze your credit. People think it's hard, but it's actually simple and free.
Keith: Unless you go through a service that tries to upsell you the entire time... Chris: Right. But you can go directly to the credit bureaus (Experian, TransUnion, Equifax), set up accounts, and freeze your credit. You can toggle it on or off as needed.
And they're required by law to process those requests quickly.
Keith Shaw: I do wish there were better protections built into the credit system, like freezing by default unless you opt in to open it. But we're not there yet. Chris Wingfield: Right. Hopefully we'll get there. But in the meantime, awareness is everything.
One of the biggest things I see is old accounts being compromised. Maybe you signed up for some random photo-sharing site in 2008, forgot about it, and now it's been breached. Keith: Yeah.
And at that point, your name's in some ancient database and there's no real way to remove it. Chris: Exactly. I always tell people: if your data shows up in a breach and ends up on a dark web forum, trying to negotiate with the poster is usually pointless.
You might reach out, pay them, and they'll still sell the info anyway. Or they say they deleted it, but you have no idea who they are or if they're telling the truth. At that point, your data is a commodity.
If it's a massive breach, like the National Public Data breach with hundreds of millions of Social Security numbers, it's been downloaded thousands of times already. Keith: So the focus should be: "What can I do now that it's out there?" Chris: Right.
Knowing what's on the dark web gives you situational awareness. If you're one of the millions in a data dump, an attacker might randomly select your SSN and try to open a loan. But if your credit is frozen? They move on. You've made yourself a harder target.
Keith: So you don't want to be that "next person" in the list without any protections in place. Chris: Exactly. Know what's out there, devalue it, and put up barriers that frustrate the attacker before they get anywhere near you.
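For the "know what's out there" step, one public starting point is the Have I Been Pwned v3 API. A hedged sketch (the endpoint requires a paid `hibp-api-key` header and a user-agent; rate-limit backoff and error handling are omitted for brevity):

```python
# Query HIBP for an address, then flatten the results into the set of
# leaked data classes, so you know exactly what to devalue or freeze.
import json
import urllib.parse
import urllib.request

HIBP = "https://haveibeenpwned.com/api/v3/breachedaccount/{}?truncateResponse=false"

def fetch_breaches(email: str, api_key: str) -> list[dict]:
    """Fetch breach records for one account (network call; needs an API key)."""
    req = urllib.request.Request(
        HIBP.format(urllib.parse.quote(email)),
        headers={"hibp-api-key": api_key, "user-agent": "exposure-audit"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def exposed_data_classes(breaches: list[dict]) -> set[str]:
    """Reduce breach records to the set of leaked data types."""
    return {cls for b in breaches for cls in b.get("DataClasses", [])}

# With the network call stubbed out, the triage step looks like:
sample = [{"Name": "PhotoShare", "DataClasses": ["Email addresses", "Passwords"]}]
print(sorted(exposed_data_classes(sample)))  # → ['Email addresses', 'Passwords']
```

The output maps directly onto the playbook above: passwords mean rotate credentials, card numbers mean cancel the virtual card, an SSN means freeze your credit.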
Keith Shaw: Let's talk about something we discussed before the show: "data poisoning." That's the idea of taking an offensive approach, feeding misleading info to confuse the systems that collect your data. Can you explain that? Chris Wingfield: Sure. It's a really interesting strategy.
Let's say you have a LinkedIn profile. Most people include their real city and state. If you live in a small town and have a unique name, that makes it super easy for someone to Google your name, city, and state, and find your home address instantly. Keith: Right.
Because then the people-search sites kick in and connect the dots. Chris: Exactly. So instead, you might say you're located in a major metro area, like New York City. Now your signal gets lost in the noise; millions of people live there.
You've just made the job harder for someone trying to profile you. Same with your resume. A lot of people upload resumes to LinkedIn. That might include your full name, phone number, email, even your address, all searchable and scrapable.
It's better to send your resume directly to the people who need it, and keep your social profile vague or high-level. Keith: And what about things like public reviews? Chris: Great question. Google reviews are a huge exposure point.
I worked with someone recently who had posted 127 photos across 10 years. One person, half a million views. That's enough for a bad actor to build a heat map of where you live, work, and travel. Most of the reviews were within five miles of their home.
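That heat-map exposure is easy to reproduce against your own data, which makes it a useful self-audit. A toy sketch, assuming you have exported your own review coordinates (for example via Google Takeout); the grid size and sample points here are made up for illustration:

```python
# Bucket your public photo geotags into coarse grid cells; the busiest cell
# is roughly where a profiler would guess you live or work.
from collections import Counter

def densest_cell(points: list[tuple[float, float]], grid: float = 0.05) -> tuple:
    """Group lat/lon points into ~5 km cells and return the busiest
    cell's center plus its point count."""
    cells = Counter((round(lat / grid), round(lon / grid)) for lat, lon in points)
    (cy, cx), count = cells.most_common(1)[0]
    return (cy * grid, cx * grid, count)

reviews = [(42.36, -71.06), (42.37, -71.05), (42.36, -71.07),  # near "home"
           (40.71, -74.00)]                                    # one trip away
lat, lon, n = densest_cell(reviews)
print(f"densest cluster: ({lat:.2f}, {lon:.2f}) with {n} photos")
# → densest cluster: (42.35, -71.05) with 3 photos
```

If your own densest cluster sits on top of your house, that is the content to clean up first.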
Chris Wingfield: So when you're already putting out content, like reviews, check-ins, or photos, you need to think proactively. First, clean up anything you've posted that might expose your home, your kids, or your daily habits. Then, you can start planting what I call proactive breadcrumbs.
Keith Shaw: That's my favorite phrase so far. Chris: Right? So for example, make your social media private if you don't need it public. But if you do have to be public, like you do, Keith, with a public-facing role, you can still add misdirection.
Change your LinkedIn URL, for example. That breaks a lot of the business-to-business data brokers that scraped it years ago. Now they're tracking an old URL that doesn't resolve anymore. Keith: That's a smart move. Just changing the URL messes with their systems? Chris: Yes.
And don't include your middle initial on LinkedIn. Avoid listing your specific city, especially if you're in a small town. The more unique your details, the easier it is to pinpoint you. Even subtle changes like that force aggregators to rework how they track and connect your data.
Keith: That feels so counterintuitive. For years, Facebook and LinkedIn have told us to fill out more information, like likes, interests, and hobbies, so we can find friends or get job offers. Chris: Absolutely. The platforms are designed to get you to overshare. They monetize your data.
The more you post, the more valuable you are to advertisers, recruiters, and data brokers. Keith: I remember removing all the TV shows and movies I liked from Facebook years ago. I realized they didn't need to know that. But I'm still probably more public than I'd like to be.
Chris: You're not alone. Most people are. But it's not about deleting your accounts altogether; it's about being smart with what you post. Think about non-essential elements of information. Does it really matter that LinkedIn knows the city you live in? Probably not.
Does it matter if your past experience lists "Boston"? That's okay. But your current town? Not necessary.
Keith Shaw: But what about job seekers? If I'm looking for a new job, shouldn't I have a complete LinkedIn profile? Chris Wingfield: Yes, build out your profile, 100%. Your job history, your roles, your accomplishments: those are important.
But things like your current city, your birth date, or personal contact info? That's what I'd call non-essential. It doesn't affect your credibility, but removing it can seriously help reduce your exposure. Keith: And what about resumes?
Chris: People often upload their resumes directly to LinkedIn, and that's where it gets risky. Resumes usually include your phone number, email, and sometimes even your home address. Anyone can download it. I always recommend: send your resume directly to hiring managers, not to the entire internet.
Keith: And you've worked with people who say, "I don't even use Facebook anymore," but their profiles are still public? Chris: All the time. They think they've gone dark, but their old content is still live.
Maybe they haven't posted in six years, but there are still photos of their house, their family, or pictures taken right outside their home. If I can see the house in the background, I can use Google Street View or satellite images to confirm a location.
Keith: But you're not recommending people delete their social media entirely, right? Chris: No, not at all. A lot of people need social media for business, for personal branding, for income. It's more about finding that balance between convenience and security.
And fortunately, there are ways to make small changes that have a big impact on your security posture. Keith: Like multi-factor authentication? Chris: Exactly. That's a perfect example. People hear about MFA all the time, but don't always understand it.
It's one of the simplest ways to improve your account security. The more friction you add between a bad actor and your data, the safer you are. Keith: Do you think people don't realize how much they're exposing themselves? Chris: Yes, and they don't think like bad actors.
They assume no one would ever use their bio, or their kid's name, or that Zillow photo against them. But those are the exact things bad actors look for.
Keith Shaw: What about the legal and ethical side of all this? I mean, there's what's legal, but also what's right. And then there's what's just creepy or dangerous. Where are the boundaries? Chris Wingfield: Great question. Honestly, it's become the Wild West, especially in the U.S.
It's often easier to target people here than overseas, because of weak privacy laws. In Europe, you have GDPR. Here? Not so much. As for ethics, it comes down to intent and outcome.
Let's say someone's kid joins a new baseball team, and the parent wants to check out the coach online. That's not illegal, and arguably it's ethical, because they're trying to protect their child. But what if that information is then used in a way that harms someone?
Suddenly, what started as a good intent has a negative outcome. So ethical decisions here are very contextual. It's a moral compass issue, not a legal one. Keith: Have you ever found something really embarrassing about someone, and then had to decide what to do with it? Chris: Yeah.
A good example is the Ashley Madison breach from several years ago. That was devastating for a lot of people. But the way I approach it with clients is: what's actionable?
If I find that a password you used in that breach is still active somewhere, we need to change it. If your credit card or Social Security number was in that breach, we need to cancel or freeze it.
It's not about embarrassing people; it's about protecting them and taking tangible steps to reduce risk.
Keith Shaw: So back to the dark web. If your info is out there because of a breach or a leak, what can you do? Or is it too late? Chris Wingfield: You can take action. Again, think offensively. Did I give out my real phone number?
Or was it a VoIP number? Was my email an alias? Did I use a virtual credit card? When you use those tools, the damage from a breach is minimal. You can cancel or replace things easily.
Even for things like Social Security numbers, you can take steps, like freezing your credit, that significantly reduce your exposure. Keith: But most people don't even realize they can do that. They just assume they're stuck. Chris: That's why education is key.
People have access to digital tools, but no training on how to use them safely. We accept terms and conditions without reading them, and our data gets sold downstream. But if you understand your exposures and your options, you can take back control.
Keith: Chris Wingfield, this has been both energizing and terrifying. Thanks for being on the show today. Chris: I get that a lot. Thanks, Keith. Keith: That's all the time we have for this week's show.
Be sure to like the video, subscribe to the channel, and drop your thoughts or questions in the comments below. Join us every week for new episodes of Today in Tech. I'm Keith Shaw. Thanks for watching.