Huge breach affects 9 million Cathay Pacific customers

Airlines aren’t having a good time of things at the moment. Even if you managed to dodge the recent British Airways fallout, you may well be caught up in the latest breach affecting no fewer than 9 million customers of Cathay Pacific.

So what was taken? The impact this time isn’t so much on the payment side: the 403 credit card numbers the hackers grabbed had all expired, and the 27 live ones had no CVV stored. It isn’t passwords either, as the airline claims none were taken. The issue is that the hackers made off with 860,000 passport numbers, roughly 245,000 Hong Kong identity card numbers, and the personal data that goes with them.

What Personally Identifiable Information (PII) was compromised?

Here’s what the criminals ran away with in the Cathay Pacific breach: PII. Namely: nationality, date of birth, name, address, email, telephone numbers, frequent flyer membership numbers, customer service remarks, and “historical travel information.” The data accessed varies from passenger to passenger, so there’ll be some with almost nothing to worry about and others wondering how they drew several short straws simultaneously.

If you’re wondering why breachers continue to steal PII, it’s because this data is incredibly useful for anybody planning a targeted attack, be it phishing, social engineering, or plain old convincing malware. Some of the scams could easily become real-world issues, as opposed to staying firmly behind the computer screen.

At this point, we’d typically advise anyone affected by the breach to be extremely cautious of any missive sent their way from those claiming to be Cathay Pacific. Don’t hand over payment information to random phone callers, avoid clickable links in emails persuading you that your password has expired, and so on.

There’s only one slight problem with this: the breach apparently took place in March 2018, or at least that’s when Cathay Pacific discovered it. It then took until May for the airline to confirm that data had been accessed without permission.

As a result, it may not be much use at this point to say “Watch out for this” when it’s already happened. If the airline is correct in its thinking that no data has been abused yet, then what you can do is visit the website set up in the wake of the breach (called a “Data security event”) and use the relevant link for US and non-US customers to get things moving.

Note that Cathay Pacific points out they’ll never ask for personal/financial information related to this breach, and they also list a sole email point of contact for any further communications. Should you receive a note from an address other than the one mentioned, you can safely ignore it.

To ease the fears of worried customers, Cathay Pacific is offering ID monitoring services. And if you’re not sure whether you’ve been affected, you can send the airline a message and they’ll get back to you.

Airlines are increasingly coming under attack from individuals with an eye for large pots of valuable customer data, and even their apps are considered fair game. Whether large fines or other consequences for Cathay Pacific emerge remains to be seen, but taking to the skies is anxiety-filled enough without having to worry about the safety of your data back on terra firma. One would hope this is the last major airline breach we’ll see for a while, but on the evidence we’ve seen so far, they’ll be a prime slice of hacker real estate for some time to come.

Author: Christopher Boyd

From Now On, Only Default Android Apps Can Access Call Log and SMS Data

A few hours ago, Google announced its “non-shocking” plans to shut down the Google+ social network following a “shocking” data breach incident.

Now, to prevent abuse and the potential leakage of sensitive data to third-party app developers, Google has made several significant changes giving users more control over what type of data they choose to share with each app.

The changes are part of Google’s Project Strobe—a “root-and-branch” review of third-party developers’ access to Google account and Android device data, and of the company’s philosophy around apps’ data access.

Restricted Call Log and SMS Permissions for Apps

Google announced some new changes to the way permissions are approved for Android apps to prevent abuse and potential leakage of sensitive call and text log data by third-party developers.

While apps are only supposed to request the permissions they need to function properly, any Android app can ask for permission to access your phone and SMS data unnecessarily.

To protect users against surveillance and commercial spyware apps, Google has finally added a new rule to its Google Play Developer Policy that limits the use of the Call Log and SMS permissions to your “default” phone or SMS apps only.

“Only an app that you’ve selected as your default app for making calls or text messages will be able to make these requests. (There are some exceptions—e.g., voicemail and backup apps.),” Google said.
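
For app developers, the mechanics look roughly like this: before it can hold SMS permissions, the app has to win the default-handler role through the standard system dialog. A minimal Kotlin sketch (the function name and flow are illustrative, but the Telephony intent is the documented Android API):

```kotlin
import android.app.Activity
import android.content.Intent
import android.provider.Telephony

// Illustrative sketch: under the new policy, an SMS app must first be accepted
// by the user as the default handler before it may hold SMS permissions.
// ACTION_CHANGE_DEFAULT fires the system's "Change default SMS app?" dialog.
fun requestDefaultSmsRole(activity: Activity) {
    val alreadyDefault =
        Telephony.Sms.getDefaultSmsPackage(activity) == activity.packageName
    if (!alreadyDefault) {
        val intent = Intent(Telephony.Sms.Intents.ACTION_CHANGE_DEFAULT)
            .putExtra(Telephony.Sms.Intents.EXTRA_PACKAGE_NAME, activity.packageName)
        activity.startActivity(intent)
    }
}
```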


Restricted Gmail API for Limited Apps

Since APIs can give developers access to sensitive data in your Gmail account, Google has now decided to limit Gmail API access to apps that directly enhance email functionality—such as email clients, email backup services, and productivity services.
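
Gmail access is mediated by OAuth scopes, so in practice the gatekeeping happens at consent time. As a rough illustration, here is how an email app might assemble a consent URL that asks only for the narrow, read-only Gmail scope; the endpoint and scope string are real, while the client values are placeholders:

```kotlin
import java.net.URLEncoder

// Hypothetical helper: builds a Google OAuth consent URL requesting only the
// read-only Gmail scope. clientId and redirectUri stand in for values an app
// would register with Google.
fun buildConsentUrl(clientId: String, redirectUri: String): String {
    fun enc(s: String) = URLEncoder.encode(s, "UTF-8")
    val scope = "https://www.googleapis.com/auth/gmail.readonly"
    return "https://accounts.google.com/o/oauth2/v2/auth" +
        "?client_id=${enc(clientId)}" +
        "&redirect_uri=${enc(redirectUri)}" +
        "&response_type=code" +
        "&scope=${enc(scope)}"
}
```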

New Privacy Interface for Third-Party App Permissions

When a third-party app prompts users to grant access to their Google account data, clicking “Allow” approves all requested permissions at once, leaving an opportunity for malicious apps to trick users into giving away powerful permissions.

But now Google has updated its Account Permissions system to ask for each requested permission individually, rather than all at once, giving users more control over what type of account data they choose to share with each app.


While the change went into effect today, developers have been given 90 days (until January 6th) to update their apps and services. After that, the updated Developer Policy will be enforced.

Besides these changes, in the next few hours, at 11 AM ET, Google is going to announce some cool new gadgets and Pixel devices at its third annual “Made By Google” event in New York.


Maybe you shouldn’t use LinkedIn

For users in outward-facing professions like sales or marketing, social media—in particular, LinkedIn—is a highly popular means of connecting to new opportunities in the field and staying current with industry peers.

For the rest of us, LinkedIn is an outstanding means of aggregating personal information without significant safety controls, irritating all your email contacts, and providing an endless stream of phishes, honeytraps, and scams to security personnel.

While many of us do not have the luxury of severing ties with social media without a second thought, we do think it’s worth knowing what’s happening with your data at LinkedIn and making an informed decision—just as we suggested for users of Facebook confronting the Cambridge Analytica fiasco.

The privacy controls

Originally, LinkedIn started with not much concern for privacy. If you’re on a social media platform, you want to share, right? Well, typically that’s at your discretion, not an algorithm’s (and certainly not one that loves to share indiscriminately).

Today, LinkedIn has done quite a bit better at making its privacy controls accessible and easier to understand, but it still has a few problems baked into the core of the service.

Google indexing

Currently, new profiles default to allowing search engine crawlers access to your name, title, current company, and picture. While you can switch that option off easily, if you don’t do it quickly enough, that info will be indexed and public, irrespective of any subsequent privacy changes you make.

Searching within LinkedIn based on information you get from Google’s cache will often yield profiles of people who thought they had set their data to private.

Relationship weighting yields increased access automatically

One thing the late, unlamented Google Plus did right was to break the symmetry of access levels between connections. While you could choose to disclose everything to a connection, the other person wasn’t obligated to do the same to communicate with you.

This is not the case with LinkedIn. Reducing trust to a “yes” or “no” question reduces the barrier to entry for information thieves. It’s a trivial matter to observe a target’s position within the network, join their peripheral interests or third-degree connections, then use the automatic increase in access to appear more trusted in a later attack. There really isn’t an effective defense against this sort of social network attack because it depends on every single member of the network being forever vigilant.

Relationship weighting is arbitrary with no user control

LinkedIn has three levels of relationship weighting: first-, second-, and third-degree connections. The end user has no control over who gets which category, and all three categories are based on network proximity.

Why is this a problem? Because relationships aren’t symmetric. You might need a contact’s email address without wanting to disclose full information about yourself. Network proximity weighting, however, presumes equivalence for all contacts in each class. In reality, a second-degree connection might be of only occasional interest, while a first-degree connection might have utility for only a limited time.

With cookie-cutter policies determining the weight of all of your connections, the end user is robbed of the ability to set controls appropriate to each relationship. A great example of this is profile phishing: an attacker only needs to succeed once to become a first-degree connection and gain access to everything.

A very successful honeytrap profile targeting infosec employees

LinkedIn’s security history

LinkedIn has an extensive history of breaches, vulnerabilities, and personal data leaks for both their web and iOS platforms going back many years. At present, they patch quite quickly upon disclosure, but the slow and steady drip of sometimes serious vulnerabilities over years raises some concerns as to whether or not the platform has a culture of security.

Breaches and vulnerabilities happen to everyone. But if they happen publicly and almost annually, the end user might want to think hard before trusting a third party with their data.

The data hoarding

Like some other third-party services, LinkedIn doesn’t delete your account. You can “close” it, but the service retains the right to keep your information indefinitely. So if you “close” your account and LinkedIn sustains a breach in the future, your data will still be there.

Did you forget to opt out of ad targeting before closing? Your data might still be made available for third-party use. The hallmark of a reasonably secure social media platform is control over your own data, and LinkedIn falls short on this. If you’d like to know more about which services will actually delete your data, check out this list from Secured.fyi.

But I HAVE to use LinkedIn!

You probably don’t. Unless you belong to the handful of industries for which LinkedIn use is standard, a significant number of opportunities still redirect you toward proprietary recruiting platforms, same as always.

If, however, you’re stuck with the service, make sure to log out after each session. Logging out prevents scraping of your network activity, and limits tracking to what you do on the platform. As with most online irritations, ad blockers and anti-tracker extensions can help you keep control of most of your data.

Remember that one of the best defenses against a third-party breach is to think very hard on if you want to trust that third party with your data to begin with. In the case of LinkedIn, the choice is yours. We just want to make sure it’s an informed one.

Author: William Tsing

Skygofree — a Hollywood-style mobile spy

Most Trojans are basically the same: Having penetrated a device, they steal the owner’s payment information, mine cryptocurrency for the attackers, or encrypt data and demand a ransom. But some display capabilities more reminiscent of Hollywood spy movies.

We recently discovered one such cinematic Trojan by the name of Skygofree (it doesn’t have anything to do with the television service Sky Go; it was named after one of the domains it used). Skygofree is overflowing with functions, some of which we haven’t encountered elsewhere. For example, it can track the location of a device it is installed on and turn on audio recording when the owner is in a certain place. In practice, this means that attackers can start listening in on victims when, say, they enter the office or visit the CEO’s home.

Another interesting technique Skygofree employs is surreptitiously connecting an infected smartphone or tablet to a Wi-Fi network controlled by the attackers — even if the owner of the device has disabled all Wi-Fi connections on the device. This lets the victim’s traffic be collected and analyzed. In other words, someone somewhere will know exactly what sites were looked at and what logins, passwords, and card numbers were entered.

The malware also has a couple of functions that help it operate in standby mode. For example, the latest version of Android can automatically stop inactive processes to save battery power, but Skygofree is able to bypass this by periodically sending system notifications. And on smartphones made by one of the tech majors, where all apps except for favorites are stopped when the screen is turned off, Skygofree adds itself automatically to the favorites list.
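
The general keep-alive mechanism is easy to illustrate: an Android service that posts a persistent notification is treated as user-visible, so battery-saving process reaping leaves it alone. A benign Kotlin sketch of that mechanism (this is not Skygofree’s actual code):

```kotlin
import android.app.Notification
import android.app.Service
import android.content.Intent
import android.os.IBinder

// Benign sketch of the keep-alive trick: a foreground service with a
// persistent notification is treated as user-visible, so the system's
// battery saver will not stop its process.
class KeepAliveService : Service() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        val note = Notification.Builder(this) // pre-Oreo constructor, for brevity
            .setContentTitle("Syncing…")
            .setSmallIcon(android.R.drawable.stat_notify_sync)
            .build()
        startForeground(1, note)  // promotes the process to "user-visible"
        return START_STICKY       // ask the system to restart the service if killed
    }

    override fun onBind(intent: Intent?): IBinder? = null
}
```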

The malware can also monitor popular apps such as Facebook Messenger, Skype, Viber, and WhatsApp. In the latter case, the developers again showed savvy — the Trojan reads WhatsApp messages through Accessibility Services. We have already explained how this tool for visually or aurally impaired users can be used by intruders to control an infected device. It’s a kind of “digital eye” that reads what’s displayed on the screen, and in the case of Skygofree, it collects messages from WhatsApp. Using Accessibility Services requires the user’s permission, but the malware hides the request for permission behind some other, seemingly innocent, request.
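
To see why Accessibility Services make such an effective “digital eye,” consider what any service granted that permission receives. The sketch below is illustrative only; it shows the same callback screen readers use legitimately, and is not Skygofree’s code:

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent

// Illustrative only: once the user enables an accessibility service, it
// receives events carrying the text currently drawn on screen. Screen
// readers use this legitimately; spyware abuses the same callback.
class ScreenTextLogger : AccessibilityService() {
    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        // event.text holds the visible strings of the view that changed;
        // in a messenger, that can include message contents.
        val visible = event.text.joinToString(" ")
        android.util.Log.d("ScreenText", "${event.packageName}: $visible")
    }

    override fun onInterrupt() { /* required override; nothing to clean up */ }
}
```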

Last but not least, Skygofree can secretly turn on the front-facing camera and take a shot when the user unlocks the device — one can only guess how the criminals will use these photos.

However, the authors of this innovative Trojan did not dispense with more mundane features: Skygofree can also intercept calls, SMS messages, calendar entries, and other user data.

The promise of fast Internet

We discovered Skygofree recently, in late 2017, but our analysis shows the attackers have been using it — and constantly enhancing it — since 2014. Over the past three years, it has grown from a rather simple piece of malware into full-fledged, multifunctional spyware.

The malware is distributed through fake mobile operator websites, where Skygofree is disguised as an update to improve mobile Internet speed. If a user swallows the bait and downloads the Trojan, it displays a notification that setup is supposedly in progress, conceals itself from the user, and requests further instructions from the command server. Depending on the response, it can download a variety of payloads — the attackers have solutions for almost every occasion.

Forewarned is forearmed

To date, our cloud protection service has logged only a few infections, all in Italy. But that doesn’t mean users in other countries can let their guard down; malware distributors can change their target audience at any moment. The good news is that you can protect yourself against this advanced Trojan just as you would against any other infection:

  1. Install apps only from official stores. It’s wise to disable installation of apps from third-party sources, which you can do in your smartphone settings.
  2. If in doubt, don’t download. Pay attention to misspelled app names, small numbers of downloads, or dubious requests for permissions — any of these things should raise flags.
  3. Install a reliable security solution — for example, Kaspersky Internet Security for Android. This will protect your device from most malicious apps and files, suspicious websites, and dangerous links. In the free version scans must be run manually; the paid version scans automatically.

  4. We recommend that business users deploy Kaspersky Security for Mobile — a component of Kaspersky Endpoint Security for Business — to protect the phones and tablets employees use at work.

Author: Anna Markovskaya

Please don’t buy this: identity theft protection services

With an ever-increasing tempo of third-party breaches spilling consumer data all across the dark web, a natural impulse for a security-savvy user is to do something proactive to protect their sensitive information. After Equifax, there was an explosion of interest in credit monitoring and identity theft protection services. But most of these services offer limited value for the money, and in many cases are subsidiaries of entities prone to leaking information in the first place. Sometimes doing something isn’t the best option.

What do they do?

Before we get into the problems with identity theft protection services, let’s break down which services are actually offered, and in exchange for what. Identity protection services usually start by collecting your personal information, including the following:

  • your birthdate
  • your social security number
  • your address
  • your email address(es)
  • your phone number(s)

A company like Lifelock would then use “proprietary technology that searches for a wide range of threats to your identity.” (Sidenote: Subsuming an entire discussion of one’s product under “technology that searches” is usually a red flag, albeit a small one.) If any threats are found, they will notify you and provide some handholding to rectify the situation. In addition, they offer an insurance policy that provides reimbursement of any monetary losses. Starting price for these services runs around $109 per year.

IdentityWorks is another service run by one of the major credit bureaus, Experian. IdentityWorks has an introductory product for $9.99 per month that offers credit monitoring, a credit lock (something different from a freeze), identity theft insurance, and a customer service line for fraud resolution.

IdentityForce tends to be ranked higher in comparison to other services. They provide credit monitoring, bank account monitoring (not found in most other products), change of address monitoring, court record monitoring, as well as general personal information protection. Their recovery services are mostly the same though, including a customer service line for fraud resolution, identity theft protection insurance, and stolen funds replacement up to $1 million, depending on where you live. Standard cost is $17.95 per month.

Why shouldn’t I buy it?

Brian Krebs, a security researcher who’s arguably one of the biggest public targets for identity theft and financial crime, wrote a blog on credit monitoring services, stating that while some of these and other ID protection services are helpful for those who’ve already been snaked by ID thieves, they don’t do much to prevent the crime from happening in the first place.

Searching the darknet for your personal information is something advertised by almost all of these companies. What they don’t disclose is that a darknet site is almost always hosted on a “bulletproof” hosting service that will not respond to takedown requests or legal threats. So while essentially anybody can fire up the Tor Browser and find your Social Security number on a dark website, almost nobody (including those in ID protection services) can actually do anything about it. All they can do is alert you.

Our big issue with paying for an identity theft protection service—besides the fact that the service doesn’t actually protect against identity theft—is that the insurance you would be forking out for duplicates coverage most users already have under Visa and Mastercard zero-liability rules. Another issue is the narrow focus on credit, typically to the exclusion of bank accounts, mortgage loans, and tax fraud. Lastly, account application notifications can’t actually prevent creditors from doing a “hard pull” on your credit, which dings your credit score.

Who else is looking at your data?

Somewhat more concerning is the lack of transparency about where these companies draw their data for analysis and alerting. Lifelock, in particular, outsources its credit monitoring services to… Equifax. In September of this year, the LA Times reported on the relationship between Lifelock and Equifax, noting that in some instances, purchasing services would require the end user to give Equifax more information than it would otherwise have.

Does anyone, anywhere, want to give more personal data to Equifax?

How many competing companies also rely on the credit bureaus for monitoring services? While Equifax was the loudest and most recent breach in memory, odds are good that the other credit bureaus operating on an identical business model have identical security practices. As a reminder, Experian offers its own service, IdentityWorks, backed by data services it does not disclose and personal information you did not consent to give.

Beyond the red flags above, there are some more ambiguous questions regarding these services that users should evaluate before purchase. For example: Is it a responsible threat model to protect against third-party data breaches by handing over even more data to yet another third party? Doesn’t that create ostensibly the biggest online target in the world?

And looking at the problem from another angle: If the biggest players in the industry rely on agreements with credit bureaus to do at least a portion of their monitoring, why aren’t the bureaus doing this for all of us? Given that Transunion, Equifax, and Experian took it upon themselves to collect our financial data without consent, don’t they have a responsibility to protect it with industry standard best practices? As a reminder, Equifax was not breached by an arcane APT attack. They were breached by negligence.

Conclusion

Identity theft monitoring services sound great on the surface. They’re not that expensive and seem to provide peace of mind against an avalanche of ever-more damaging breaches. But they don’t, at present, protect against the worst impacts of identity theft—the theft itself. Instead, they duplicate free services and, worst of all, let the credit bureaus off the hook for improving their security.

Please don’t buy this. Instead, you can stay relatively safe by learning about credit freezes and other steps to take in order to protect your identity when data is stolen or tax fraud is committed.

Author: William Tsing

Facebook wants your nudes

We’ve all heard about cases when someone’s ex reveals their intimate photos online without their consent. Even celebrities are not immune, and their leaked images keep tabloids well fed.

For most users, the release of such private images — revenge porn — can feel like the end of the world, and in fact, a few resulting suicides have brought the issue very much into the mainstream news. It should go without saying that such leaks represent a huge privacy violation and have no place in a civilized society. However, leaks do happen.

That’s why Facebook has come up with an interesting approach to preventing intimate photos from being published without their subjects’ consent, at least on Facebook, Instagram, or Facebook Messenger. The idea, which the social network is developing in collaboration with the Australian government, is to suggest that users send the photos they are concerned about to the company itself.


Wait … what?!

Yes, you got that right. Here are the details: Facebook’s plan is to fingerprint private images using hashing (hashing produces a fingerprint, not encryption), so that if someone sends or publishes such an image through Facebook, Messenger, or Instagram, the service can detect it by comparing its hash to those in Facebook’s database and block its transmission.

Australia’s e-safety commissioner told ABC News how the scheme is supposed to work: Facebook will suggest that users send their intimate photos through Facebook Messenger — to themselves. In being sent, the images will be hashed. Subsequently, if someone tries to upload an image with the same hash value, it won’t be visible to anyone. Facebook claims that the end-to-end encryption used in Messenger (in the mobile app, not on desktops) ensures the photos will be secure, because it excludes intermediaries, and that the images themselves won’t be stored, making them immune to theft.
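
Facebook hasn’t detailed its matching algorithm; systems like this typically rely on perceptual hashes that survive resizing and recompression. As a simplified stand-in, here is the basic hash-matching idea with a plain cryptographic hash (file names are made up):

```kotlin
import java.io.File
import java.security.MessageDigest

// Simplified stand-in for the scheme: store hashes of reported images, then
// refuse any upload whose hash is on record.
fun sha256Of(file: File): String =
    MessageDigest.getInstance("SHA-256")
        .digest(file.readBytes())
        .joinToString("") { "%02x".format(it) }

fun shouldBlockUpload(upload: File, blockedHashes: Set<String>): Boolean =
    sha256Of(upload) in blockedHashes

fun main() {
    val blocked = setOf(sha256Of(File("reported_photo.jpg"))) // hashes on record
    // True only for a byte-identical copy; real systems use perceptual hashes.
    println(shouldBlockUpload(File("new_upload.jpg"), blocked))
}
```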


Will it really work?

So far, Facebook has announced the pilot program in the United Kingdom, the United States, Australia, and Canada, so we don’t yet know how effective it will actually be. On the one hand, it has real potential as a solution to this privacy threat. On the other hand, questions remain about how we can be sure it won’t be abused as a way of getting someone else’s perfectly innocent photos blocked. Because end-to-end encryption doesn’t allow Facebook to look at the photos, it won’t be able to use machine-learning algorithms to distinguish, for example, a nude photo from a nonnude one.

Moreover, a lot of people still have concerns about providing their photos to a third party, be it Facebook or any other company, and about the security of any technology they don’t know much about — especially in the case of Facebook, where several users’ private photos have already been leaked.

Is there a better way? For most people, there is:

  1. Whether you take nude or otherwise potentially compromising pictures of yourself is none of our business. They are, however, a tempting target, and so well worth second thoughts. If the photos don’t exist, they can’t leak.
  2. If you do take pictures of that kind, store them offline, on an encrypted storage device.
  3. If you want to share something that can potentially be used to shame or otherwise harm you in the wrong hands — or hands that become wrong, say, after a breakup — be prepared; you may face difficult consequences. Once you’ve uploaded something, anything, to the Internet, it might become public, no matter how secure the online service. There’s also the human factor, and there is no such thing as an absolutely secure system.

Author: Yaroslava Ryabova

A look into the global drive-by cryptocurrency mining phenomenon

An important milestone in the history of cryptomining happened around mid-September when a company called Coinhive launched a service that could mine for a digital currency known as Monero directly within a web browser.

JavaScript-based mining is cross-platform compatible and works on all modern browsers. Indeed, just about anybody visiting a particular website can start mining for digital currency with eventual profits going to the owner’s wallet (in the best case scenario). In itself, browser-based cryptomining is not illegal and could be seen as a viable business model to replace traditional ad banners.

To differentiate browser-based mining from other forms of mining, many started to label these instances as JavaScript miners or browser miners. The simplicity of the Coinhive API integration was one of the reasons for its immediate success, but due to several oversights, the technology was almost instantly abused.

In particular, many web portals started to run the Coinhive API in non-throttled mode, resulting in cases of cryptojacking—utilizing 100 percent of the victims’ CPU to mine for cryptocurrency with no knowledge or consent given by the user.
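
The miner itself is JavaScript, but the “throttle” knob at the heart of the abuse is simple to model: it is the fraction of time the mining loop idles instead of hashing. A rough Kotlin sketch (numbers are made up; this is not Coinhive’s code):

```kotlin
import java.security.MessageDigest

// Rough model of a miner's "throttle" setting: the fraction of wall-clock
// time the mining loop idles instead of hashing. Drive-by deployments ran
// with throttle = 0.0, pegging the CPU at 100%.
fun mine(throttle: Double, seconds: Int) {
    require(throttle in 0.0..0.9) { "throttle is the idle fraction" }
    val md = MessageDigest.getInstance("SHA-256")
    var nonce = 0L
    val deadline = System.currentTimeMillis() + seconds * 1000L
    while (System.currentTimeMillis() < deadline) {
        val burstEnd = System.currentTimeMillis() + 100        // hash for ~100 ms
        while (System.currentTimeMillis() < burstEnd) {
            md.update(nonce.toString().toByteArray())
            md.digest()
            nonce++
        }
        Thread.sleep((100 * throttle / (1 - throttle)).toLong()) // then idle
    }
}

fun main() = mine(throttle = 0.3, seconds = 5) // ~70% of one core for 5 seconds
```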

We decided to call this new phenomenon drive-by mining, due to the way the code is delivered onto unsuspecting users, very much like drive-by downloads. There’s one important caveat, though: There is no malware infection at the end of the chain.

While the harm may seem minimal, this is not the kind of web experience most people would sign up for. To make matters worse, one does not always know if they are mining for the website owner or for criminal gangs that have found a new monetization tool for the hacked sites they control.

In our full report, A look into the global drive-by cryptocurrency mining phenomenon, we review the events that led to this new technology being abused and explore where users involved in cryptomining against their will are located.

To give you an idea of the scope of drive-by mining, Malwarebytes has been blocking the original Coinhive API and related proxies an average of 8 million times per day, which added up to approximately 248 million blocks in a single month.

With their new mandatory opt-in API, Coinhive hopes to restore some legitimacy to the technology and, more importantly, push it as a legal means for site owners to earn revenues without having to worry about ad blockers or blacklists. This could also benefit users who might not mind trading some CPU resources for an ad-free online experience.

Time will tell how criminals react, but in the meantime, drive-by mining continues unabated.

For more information on this latest trend in the cryptocurrency world, please download our report.

Author: Jérôme Segura

Please don’t buy this: smart locks

We all like buying the latest and greatest tech toy. It’s fun to get new and novel features on a product that used to be boring and predictable; a draw of the original BeBox (amongst many) was a layer of “das blinkenlights” across the front. But sometimes, the latest feature is not always the greatest feature. And sometimes, some things should not be on the Internet at all. For readers concerned with privacy, or who simply do not want to introduce additional hassle into their tech maintenance routine, we introduce the first entry in our series called “Please don’t buy this.”  Today’s feature: smart locks.

The cool new thing

Recently, Amazon announced a new service combining a selection of smart locks, a web-connected security camera, and a network of home service providers, all working in concert to allow remote access to your home. Setting aside the question of granting unsupervised access to third-party contractors vetted by an unpublished standard, let’s take a look at why smart locks might not be the best purchase.

Amazon’s program actually works with three different existing smart lock products.

“Smart lock” is a bit of a catchall term covering a wide variety of technologies, so what are the Amazon locks dependent on, and what security vulnerabilities do those technologies include? It’s a bit of a mystery, as the Amazon sales pages don’t include that information, nor does the “technical specification” page of one of the manufacturers.

What we can surmise is that these locks will require replaceable batteries, and that at least one of the locks affords the user Wi-Fi access. While allowing remote unlocks to your home without any in-person authentication is a pretty transparently bad idea, a number of other smart locks have attempted a more secure approach using Bluetooth low energy, which affords some additional security features that the original protocol does not.

Unfortunately, while the protocol itself has a generally good security profile, implementation and associated companion apps put out by lock manufacturers aren’t quite as good. In tests at last year’s Defcon, 12 out of 16 smart lock models failed under sustained attack. Most of these failures concerned either encryption implementation, or shoddy code in associated apps.

Why it’s less cool than it appears

Setting aside poor security design and implementation, “smart” devices like these tend to come with fuzzy legal boundaries surrounding ownership and maintenance. Last year, a home automation hub company called Revolv was shut down during an acquisition. Rather than simply no longer receiving updates, the devices were disabled outright.

This was an inconvenience for users, but what if it had been your front door? Given the current state of mobile OS fragmentation, would it be that much of a surprise if a lock company simply declined to provide security updates? We couldn’t find any information on how the new Amazon-compatible locks are updated, how authorized delivery personnel will interact with the locks, or whether any third party has access to data communicated by the lock and/or accompanying phone apps.

These are questions that would be concerning for any device. But when that device affords access to your home, considerably more transparency about the device’s underlying technology should be mandatory.

Conclusion

A physical deadbolt has security flaws as well. But deadbolts have a standardized design, commonly accepted standards that they are evaluated against, can be repaired or replaced by anybody, and are unequivocally owned by you. Can a smart lock’s EULA claim the same? Smart locks could achieve acceptable purchase status if they met the following criteria:

  • independent, industry-wide security standards in design
  • independent code auditing
  • no Wi-Fi
  • conventional implementation of industry-standard encryption
  • no third-party data storage
  • right to repair

Until smart locks can meet these standards, we respectfully suggest… please don’t buy this.

Author: William Tsing

Are dating apps safe?

Searching for one’s destiny online — be it a lifelong relationship or a one-night stand — has been pretty common for quite some time. Dating apps are now part of our everyday life. To find the ideal partner, users of such apps are ready to reveal their name, occupation, place of work, where they like to hang out, and lots more besides. Dating apps are often privy to things of a rather intimate nature, including the occasional nude photo. But how carefully do these apps handle such data? Kaspersky Lab decided to put them through their security paces.

Our experts studied the most popular mobile online dating apps (Tinder, Bumble, OkCupid, Badoo, Mamba, Zoosk, Happn, WeChat, Paktor), and identified the main threats for users. We informed the developers in advance about all the vulnerabilities detected, and by the time this text was released some had already been fixed, and others were slated for correction in the near future. However, not every developer promised to patch all of the flaws.

Threat 1. Who are you?

Our researchers discovered that four of the nine apps they investigated allow potential criminals to figure out who’s hiding behind a nickname based on data provided by users themselves. For example, Tinder, Happn, and Bumble let anyone see a user’s specified place of work or study. Using this information, it’s possible to find their social media accounts and discover their real names. Happn, in particular, uses Facebook accounts for data exchange with the server. With minimal effort, anyone can find out the names and surnames of Happn users and other info from their Facebook profiles.

And if someone intercepts traffic from a personal device with Paktor installed, they might be surprised to learn that they can see the e-mail addresses of other app users.

It turns out it is possible to identify Happn and Paktor users on other social media 100% of the time, with a 60% success rate for Tinder and 50% for Bumble.

Threat 2. Where are you?

If someone wants to know your whereabouts, six of the nine apps will lend a hand. Only OkCupid, Bumble, and Badoo keep user location data under lock and key. All of the other apps indicate the distance between you and the person you’re interested in. By moving around and logging data about the distance between the two of you, it’s easy to determine the exact location of the “prey.”

Happn not only shows how many meters separate you from another user, but also the number of times your paths have intersected, making it even easier to track someone down. That’s actually the app’s main feature, as unbelievable as we find it.
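
The “moving around and logging distances” trick is plain trilateration: three distance readings taken from different spots are enough to pin someone down. A self-contained sketch (flat-plane approximation in meters, made-up values):

```kotlin
// Trilateration sketch: three (position, reported distance) readings locate
// a target. Coordinates are meters on a flat plane, a fair approximation at
// city scale; all values below are invented.
data class Reading(val x: Double, val y: Double, val dist: Double)

fun trilaterate(a: Reading, b: Reading, c: Reading): Pair<Double, Double> {
    // Subtracting the three circle equations pairwise leaves a 2x2 linear system.
    val a1 = 2 * (b.x - a.x); val b1 = 2 * (b.y - a.y)
    val c1 = a.dist * a.dist - b.dist * b.dist - a.x * a.x + b.x * b.x - a.y * a.y + b.y * b.y
    val a2 = 2 * (c.x - a.x); val b2 = 2 * (c.y - a.y)
    val c2 = a.dist * a.dist - c.dist * c.dist - a.x * a.x + c.x * c.x - a.y * a.y + c.y * c.y
    val det = a1 * b2 - a2 * b1 // zero if the three reading spots are collinear
    return Pair((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
}

fun main() {
    // Target actually sits at (300, 400); each reading is "app says 500 m away."
    val spot = trilaterate(
        Reading(0.0, 0.0, 500.0),
        Reading(600.0, 0.0, 500.0),
        Reading(0.0, 800.0, 500.0)
    )
    println(spot) // (300.0, 400.0)
}
```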

Threat 3. Unprotected data transfer

Most apps transfer data to the server over an SSL-encrypted channel, but there are exceptions.

As our researchers found out, one of the most insecure apps in this respect is Mamba. The analytics module used in the Android version does not encrypt data about the device (model, serial number, etc.), and the iOS version connects to the server over HTTP and transfers all data unencrypted (and thus unprotected), messages included. Such data is not only viewable, but also modifiable. For example, it’s possible for a third party to change “How’s it going?” into a request for money.

Mamba is not the only app that lets you manage someone else’s account on the back of an insecure connection. So does Zoosk. However, our researchers were able to intercept Zoosk data only when uploading new photos or videos — and following our notification, the developers promptly fixed the problem.

Tinder, Paktor, Bumble for Android, and Badoo for iOS also upload photos via HTTP, which allows an attacker to find out which profiles their potential victim is browsing.

When using the Android versions of Paktor, Badoo, and Zoosk, other details — for example, GPS data and device info — can end up in the wrong hands.

Threat 4. Man-in-the-middle (MITM) attack

Almost all online dating app servers use the HTTPS protocol, which means that, by checking certificate authenticity, one can shield against MITM attacks, in which the victim’s traffic passes through a rogue server on its way to the bona fide one. The researchers installed a fake certificate to find out if the apps would check its authenticity; if they didn’t, they were in effect facilitating spying on other people’s traffic.

It turned out that most apps (five out of nine) are vulnerable to MITM attacks because they do not verify the authenticity of certificates. And almost all of the apps authorize through Facebook, so the lack of certificate verification can lead to the theft of the temporary authorization key in the form of a token. Tokens are valid for 2–3 weeks, throughout which time criminals have access to some of the victim’s social media account data in addition to full access to their profile on the dating app.
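
The weak point is rarely the platform’s TLS stack, which validates certificates by default; it is app code that switches validation off. Here is a sketch of the classic “trust everything” anti-pattern that a fake-certificate test like the researchers’ would catch:

```kotlin
import java.net.URL
import java.security.cert.X509Certificate
import javax.net.ssl.HttpsURLConnection
import javax.net.ssl.SSLContext
import javax.net.ssl.TrustManager
import javax.net.ssl.X509TrustManager

// The "trust everything" anti-pattern: a TrustManager that never rejects a
// certificate chain, so a MITM proxy's fake certificate sails through.
val trustAll = arrayOf<TrustManager>(object : X509TrustManager {
    override fun checkClientTrusted(chain: Array<X509Certificate>, authType: String) {}
    override fun checkServerTrusted(chain: Array<X509Certificate>, authType: String) {} // no checks!
    override fun getAcceptedIssuers(): Array<X509Certificate> = arrayOf()
})

fun main() {
    // Vulnerable configuration: every certificate is now accepted process-wide.
    val ctx = SSLContext.getInstance("TLS")
    ctx.init(null, trustAll, null)
    HttpsURLConnection.setDefaultSSLSocketFactory(ctx.socketFactory)

    // The fix is simply not doing the above: the platform's default socket
    // factory rejects certificates that don't chain to a trusted CA.
    val conn = URL("https://example.com").openConnection() as HttpsURLConnection
    println(conn.responseCode)
}
```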

Threat 5. Superuser rights

Regardless of the exact kind of data the app stores on the device, such data can be accessed with superuser rights. This concerns only Android-based devices; malware able to gain root access in iOS is a rarity.

The result of the analysis is less than encouraging: Eight of the nine applications for Android are ready to provide too much information to cybercriminals with superuser access rights. As such, the researchers were able to get authorization tokens for social media from almost all of the apps in question. The credentials were encrypted, but the decryption key was easily extractable from the app itself.

Tinder, Bumble, OkCupid, Badoo, Happn, and Paktor all store messaging history and photos of users together with their tokens. Thus, the holder of superuser access privileges can easily access confidential information.

Conclusion

The study showed that many dating apps do not handle users’ sensitive data with sufficient care. That’s no reason not to use such services — you simply need to understand the issues and, where possible, minimize the risks.

Do’s:

  • Using a VPN;
  • Installing security solutions on all of your devices;
  • Sharing information with strangers only on a need-to-know basis.

Don’ts:

  • Adding your social media accounts to your public profile in a dating app; giving your real name, surname, place of work;
  • Disclosing your e-mail address, be it your personal or work e-mail;
  • Using dating sites on unprotected Wi-Fi networks.

Author: Alexandra Golovina

Snap Map security concerns

Do you use Snapchat? If so, you may want to take a deeper look at the Snap Map feature released earlier this week. As the company explains:

With the Snap Map, you can view Snaps of sporting events, celebrations, breaking news, and more from all across the world.

If you and a friend follow one another, you can share your locations with each other so you can see where they’re at and what’s going on around them! Plus, meeting up can be a cinch.

Only the people you choose can see your location — so if you’re friends with your boss, you can still keep your location on the down low during a “sick day”.

Snaps you submit to Our Story can still show up on the Map, though!

Snap Map Security and Privacy concerns

The feature sounds quite straightforward, but the setup is not clear about how your data is shared; it only tells you that you are granting the app access.

Earlier today, The Verge penned a piece digging into the privacy aspect and discovered that the map feature was firing up each time the author’s friend opened the app:

Turned out, she didn’t know she had Snap Map enabled, and didn’t know it was showing her location every time she opened the app. When she updated Snap and went through the Snap Map introduction, she believed Snap was giving the option to geotag her Snaps for Our Story, as shown in the promotional video. Instead, she had inadvertently broadcasted where she lived to every one of her Snap contacts.

In a follow-up, the company noted some things that were not mentioned during signup:

  • If you tap on your friend, you will see when their location was updated (i.e., 1 hour ago, 2 hours ago). Their location reflects where they last opened Snapchat.
  • A friend’s location will remain on the Map for up to 8 hours if they do not open the app again, causing their location to update. If more than 8 hours have passed and a Snapchatter has not opened the app, their location will disappear from the Map entirely.

We know that everyone has their own threshold for sharing — and, in some instances, oversharing. This is why the company offers many settings.

If you are like me and value your privacy, avoid opting in to the service. If you are curious but don’t want to broadcast your location, opt in by using Ghost mode, which shares your location with you alone; from there, you can browse the map.

Given the demographics of Snapchat, this is also a good time for parents to take a minute and talk with their children about privacy. Kids could be unwittingly sharing where they are and how long they have been there.

Author: Jeffrey Esposito