Earlier this month, Cyberscoop posted an article warning that North Korean hackers have begun using what they referred to as “polished” LinkedIn profiles to target job seekers with enticing but fake job opportunities. Over the last year, considerable attention has been paid to the increasingly clever tactics bad actors use to trick the social media platform’s users into divulging personal information or downloading malware. Strike Source set out to find out what this might look like, and what the average user can do to avoid falling into the trap.
As with anything, these interactions usually begin with a simple connection request. It will come from a person you don’t know, sent from an account with enough detail to pass as a real person. The kicker with these accounts is that the profile’s headshot is usually believable unless you look at it very closely.
In March 2022, NPR posted an article with some simple visual tricks and techniques you can use when looking at a picture to tell if it is AI-generated. It turns out there are dozens of sites on the internet that can be used to generate a photo of a person who does not exist. These sites range from free of charge to pay-per-picture or subscription-based. With these sites, the old adage “you get what you pay for” definitely applies: the photos from the free sites tend to have more flaws and are easier to detect.
Our journey begins with a connection request from Meghan Alexander. The person behind this account is clearly using a high-quality service to generate the picture, because it is nearly flawless. It is this very flawlessness, however, that deserves attention.
The first thing one can do to check the validity of an account is to search the person’s name on Google. If Meghan were real, we would expect to find another source with information about her: most likely another social media platform, but possibly a school or company website. It should not come as a surprise that the search for Meghan turned up no one who either looked like her or lived in the same area.
The next step is to figure out where the picture came from. Before AI sites made it so easy to invent a fake person, individuals had to steal another person’s photo to use as their own. Anyone who has ever watched the MTV show Catfish knows that one of the easiest ways to expose a fraud is a Google reverse image search to trace where a picture came from. This method is not nearly as effective against AI-generated images, but it certainly still has its merits.
When the photo is dropped into Google Image Search, it first opens in Google Lens. Google Lens analyzes the photograph and looks for clues within it. This can be helpful to an investigator, as some objects are local to certain regions. In this case the photo of Meghan did not reveal much, other than that the sweater she is wearing can be purchased on the popular shopping site Poshmark for $20.
The next step was to click the button in Google that searches for the image source. This scan looks for a match to the photo. Since this photo was AI-generated, there were no direct matches, but the search did identify similar photos.
The top suggested image source was another LinkedIn account. We obscured the name of this individual, because she is a real person with a legitimate LinkedIn account. It is clear that her profile picture was used as source material for the AI to generate the picture of Meghan. When the photos are placed side by side and reviewed, it is clear that the torso and the background are identical, even down to the darker spots to the left in the background. The AI generator simply lightened the image and changed the head.
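The side-by-side comparison above is essentially what "similar image" matching automates. One common building block is a perceptual hash: two images whose hashes differ in only a few bits are likely near-duplicates, even after edits such as lightening. Below is a minimal, stdlib-only sketch of the average-hash idea; the 4x4 pixel grids are hypothetical stand-ins for decoded grayscale images (a real script would load files with an imaging library), and real reverse image search engines use far more sophisticated matching.

```python
# Minimal average-hash ("aHash") sketch. Each image becomes a bit string:
# one bit per pixel, 1 if the pixel is brighter than the image's mean.
# Near-duplicate images produce hashes with a small Hamming distance.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v > mean else "0" for v in flat)

def hamming_distance(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 "images": the second is the first, uniformly lightened,
# mimicking how the fake profile photo reused a lightened background.
original = [[10, 20, 200, 210],
            [15, 25, 205, 215],
            [12, 22, 202, 212],
            [14, 24, 204, 214]]
lightened = [[v + 30 for v in row] for row in original]

d = hamming_distance(average_hash(original), average_hash(lightened))
print(d)  # 0 -- uniform lightening doesn't change which pixels beat the mean
```

This is why the lightened copy still surfaced as "similar": global brightness shifts barely move a perceptual hash, while a genuinely different photo would flip many bits.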
Now that we have established that Meghan’s account is clearly a fake, we decided to take it a step further and see if we could ascertain why someone went to such lengths to create it. To do this, I accepted the person’s invitation to connect and waited to see what happened. In less than a day I received a message from Meghan offering me an opportunity to consult with her on starting my own franchise.
The request seems simple enough. Clicking the link (which I never recommend doing unless you have the proper training or experience to do so safely) takes you to a calendar where you can schedule a time to chat with this person. To schedule that call, you do have to divulge your name, email, and phone number.
Another interesting thing about this message is that it is clearly part of a bigger scheme. As soon as I received the message from Meghan I immediately recognized it, because I had received the same message from another person a few days prior.
This second message was eye-opening to me. As a cybersecurity professional and an OSINT enthusiast, it was very easy for me to pick out Meghan’s account as fake; but this other Megan, with whom I had already fallen into the trap of connecting, got past me without a second thought.
The first thing to notice about this new Megan’s account is that her picture is far more difficult to peg as fake or AI-generated. For starters, the picture has a background that is not obscured. Megan is also not nearly as perfect-looking as her counterpart Meghan. She has several more years of life under her belt, and her skin has the blemishes and imperfections you would expect to see on someone of her age. The reason for the difference in this picture is that, once again, the generator took the profile picture of a legitimate user and modified it much as it had done with Meghan’s. The only differences in the Megan picture are the addition of glasses, a different mouth, and a small change to the hairstyle.
It is also worth noting that, when you look at the messages I received from both Meghan and Megan, the links in their calendar invites had the same first name but different last names. With all of the hard work the individuals responsible for the charade put into it, botching the names in the links was a very silly mistake to make.
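Inconsistencies like this are easy to check programmatically. The sketch below pulls the personal "slug" out of each scheduling link and compares the names; the URLs are hypothetical placeholders (the real links are not reproduced here), and the slug format is an assumption for illustration.

```python
# Compare the name slugs embedded in two scheduling links. The URLs and the
# /firstname-lastname slug layout are hypothetical, chosen only to mirror the
# observed mistake: same first name, different last names.
from urllib.parse import urlparse

def name_from_link(url):
    """Return (first, last) parsed from a /firstname-lastname path slug."""
    slug = urlparse(url).path.strip("/").split("/")[0]
    first, _, last = slug.partition("-")
    return first, last

link_meghan = "https://scheduler.example/meghan-smith/intro-call"
link_megan = "https://scheduler.example/meghan-jones/intro-call"

first1, last1 = name_from_link(link_meghan)
first2, last2 = name_from_link(link_megan)
print(first1 == first2, last1 == last2)  # True False
```

A quick comparison like this is a useful habit whenever two messages from "different" people arrive with suspiciously similar links.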
For professionals who decide to use social media platforms, especially LinkedIn, as a means to connect with other professionals or even grow their brand, it is important to understand the risks involved. Earlier this year, Strike Source exposed one such effort by China to lure American post-doctoral students with job ads specifically tailored to them. That was just the tip of the iceberg. Bad actors from countries such as China, Russia, and North Korea are honing their craft and finding ever more deceitful ways to get access to the private data of American citizens. Only by practicing safe social media use and ensuring you know and trust the individual with whom you are about to share your data can you keep your information safe.