Bank robbers 2.0: digital thievery and stolen cryptocoins

Imagine running down the street (and away from law enforcement) with 2,000 pounds of gold bars. Or 1,450 pounds in $100 bills. With both of these physical currencies amounting to roughly US$64 million, you’d be making quite a steal…if you could get away with it.

That’s exactly what the next generation of thieves—bank robbers 2.0—did in December 2017, when they stole more than $60 million in Bitcoin* from the mining marketplace NiceHash. It turns out stealing Bitcoin is a lot less taxing on the body.

*Disclaimer: I used the value of Bitcoins as they were at the time of the robbery. Current values are volatile and change from minute to minute.

Crime these days has gotten a technical upgrade. By going digital, crooks are better able to pull off high-stakes sting operations, using the anonymity of the Internet as their weapon of choice. And their target? Cryptocurrency.

Old-school bank robbers

The amount of money stolen from NiceHash is comparable to arguably the biggest physical heist to date, the theft of nearly $70 million from a Brazilian bank in 2005. Noted in the Guinness Book of World Records, the robbers managed to get away with 7,716 pounds of 50 Brazilian real notes. There were 25 people involved—including experts in mathematics, engineering, and excavation—who fronted a landscaping company near the bank, dug a 78-meter (256-foot) tunnel underneath it, and broke through 1 meter (about 3.3 feet) of steel-reinforced concrete to enter the bank vault.

The largest bank robbery in the United States, meanwhile, was at the United California Bank in 1972. The details of this bank robbery were described by its mastermind, Amil Dinsio, in the book Inside the Vault. A gang of seven, including an alarm expert, explosives expert, and burglary tool designer, broke into the bank’s safe deposit vault and made off with cash and valuables worth an estimated $30 million.

What these robberies have in common is that pulling them off required large groups of criminals with various special skills. Most of the criminals behind these robberies were either caught or betrayed—physical theft leaves physical traces behind. Today’s physical robbers run the risk of getting hurt or hurting others, or of leaving behind prints or DNA. And they are often tasked with moving large amounts of money or merchandise without being seen.


Bank robbers 2.0

So here come the bank robbers 2.0. They don’t have to worry about transporting stolen goods, fleeing the crime scene, or digging or blowing things up. They are in no immediate physical danger. And if they’re smart enough, they work alone or remain anonymous, even to their accomplices. Their digital thievery relies on several proven methods to obfuscate their identity, location, and criminal master plan.

Social engineering

One of the most spectacular digital crimes targeted 100 banks and financial institutions in 30 nations with a months-long attack in 2013, reportedly netting the criminals involved over $300 million. The group responsible used social engineering to install malicious programs on bank employees’ systems.

The robbers were looking for employees responsible for bank transfers or remote ATM control. Once they identified them, they could mimic the actions required to transfer money to accounts they controlled without alerting the bank that anything unusual was going on. For example, they could make a balance show more money than was actually in the account: an account with $10,000 could be altered to show $100,000, so that the hackers could transfer $90,000 to their own accounts without anyone noticing.

The alleged group behind this attack, the Carbanak Group, has not yet been apprehended, and variants of its malware are still active in the wild.

Ponzi schemes

Bitcoin Savings & Trust (BST), a large Bitcoin investment firm that was later proved to be a pyramid scheme, offered 7 percent interest per week to investors who parked their Bitcoins there. When the virtual hedge fund shut down in 2012, most of its investors were not refunded. At the time of its closing, BST was sitting on 500,000 BTC, worth an estimated $5.6 million. Its founder, an e-currency banker who went by the pseudonym pirateat40, paid back only a small sum to some beneficiaries before going into default. It was later learned that he spent nearly $150,000 of his clients’ money on “rent, car-related expenses, utilities, retail purchases, casinos, and meals.”
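To see why a 7 percent weekly return should have set off alarm bells, it helps to compound it over a year. The figures below are a quick back-of-the-envelope check, not numbers from the case itself:

```python
# Weekly compounding: why "7 percent interest per week" is a red flag.
weekly_rate = 0.07
weeks_per_year = 52

multiplier = (1 + weekly_rate) ** weeks_per_year
print(f"Implied annual return: {multiplier:.1f}x the principal")
# prints: Implied annual return: 33.7x the principal

# A 1 BTC deposit would have to grow to roughly 33.7 BTC within a year
# for the scheme to pay everyone. No legitimate investment does that.
```

No fund can sustain that multiplier, which is why such offers can only be paid from new deposits, the defining trait of a Ponzi scheme.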


Security breach

Even though details are still unclear, the NiceHash hack was reported as a security breach related to the website of the popular mining marketplace. Roughly 4,732 coins were transferred away from internal NiceHash Bitcoin addresses to a single Bitcoin address controlled by an unknown party. The hackers appear to have entered the NiceHash system using the credentials of one of the company’s engineers. As it stands now, it is unknown how they acquired those credentials, although it’s whispered to be an inside job.

Stolen wallet keys

In September 2011, the Mt. Gox hot wallet private keys were stolen when an attacker simply copied the wallet.dat file. This gave the hacker immediate access to a sizable number of Bitcoins, as well as the ability to redirect the incoming trickle of Bitcoins deposited to any of the addresses contained in the file. This went on for a few years until the theft was discovered in 2014; by then, the damages were estimated at $450 million. A suspect was arrested in 2017.

Transaction malleability

When a Bitcoin transaction is made, the account sending the money digitally signs the important information, including the amount of Bitcoin being sent, who it’s coming from, and where it’s going. A transaction ID, a unique name for that transaction, is then generated from that information. But some of the data used to generate the transaction ID comes from the unsigned, insecure part of the transaction. As a result, it’s possible to alter the transaction ID without needing the sender’s permission. This vulnerability in the Bitcoin protocol became known as “transaction malleability.”
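As a rough sketch of that weakness, the toy code below derives a transaction ID the way Bitcoin does (a double SHA-256 of the whole serialized transaction). The "transaction" format and signature bytes are invented for illustration and are not the real Bitcoin wire format:

```python
import hashlib

def txid(serialized_tx: bytes) -> str:
    # Bitcoin derives the transaction ID from a double SHA-256 of the
    # *entire* serialized transaction, signature encoding included.
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).hexdigest()

# Toy transaction: the signed fields stay identical, but the signature
# encoding (which is NOT covered by the signature itself) is tweaked.
signed_fields = b"from=A;to=B;amount=5"
sig_original  = b";sig=3045...original-DER-encoding"
sig_mutated   = b";sig=3046...equivalent-re-encoding"  # same signature, different bytes

print(txid(signed_fields + sig_original))
print(txid(signed_fields + sig_mutated))
# The two IDs differ even though the payment itself is unchanged:
# that is the essence of transaction malleability.
```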

Transaction malleability was a hot topic in 2014, as researchers saw how easily criminals could exploit it. For example, a thief could claim that his transactions didn’t show up under the expected ID (because he had edited it), and complain that the transaction had failed. The system would then automatically retry, initiating a second transaction and sending out more Bitcoins.

Silk Road 2.0 blamed this bug for the theft of $2.6 million in Bitcoins in 2014, though the claim was never proven.

Man-in-the-middle (by design)

In 2018, a Tor proxy was found stealing Bitcoin from ransomware authors and victims alike. A Tor proxy service is a website that allows users to access .onion domains hosted on the Tor network without having to install the Tor browser. Because Tor proxy servers have a man-in-the-middle (MitM) function by design, the thieves were able to replace the Bitcoin address that victims were paying ransom to with their own. This left the ransomware authors unpaid, which in turn left the victims without their decryption keys.


Cryptojacking

Also known as drive-by mining, cryptojacking is a next-generation, stealthy robbing trick that covers all mining activity carried out on third-party systems without the users’ consent. Stealing small amounts from many can add up to large sums. There are so many methods to achieve this that Malwarebytes’ own Jérôme Segura published a whitepaper about it.

Unlike drive-by downloads that push malware, drive-by mining focuses on utilizing the processing power of visitors’ computers to mine cryptocurrency, especially currencies designed to be mined on non-specialized processors. Miners of this kind arrive via advertisements, bundlers, browser extensions, and Trojans. The revenues are hard to guess, but given the number of connections to Coinhive and similar sites that Malwarebytes blocks daily, criminal profit margins could be potentially record-breaking.

Physical stealing of digital currency

This last one brings us full circle, as someone actually managed to steal Bitcoins the old-fashioned way. In January 2018, three armed men attempted to rob a Bitcoin exchange in Canada, but failed miserably after a hidden employee managed to call the police. Others have had more success, however. The Manhattan district attorney is looking for the accomplice of a man who robbed his friend of $1.8 million in Ether at gunpoint. Apparently this “friend” got hold of the physical wallet and forced the victim to surrender the key needed to transfer the cryptocurrency into his own account.


As we can conclude from the examples above, there are many ways for cybercriminals to get rich quick. With far less risk of physical harm and far less hard labor, they can score larger hauls than the old-fashioned bank robbers. The only pitfall to robbing digital currency is turning it into fiat money without raising a lot of suspicion or losing a big chunk to launderers.

While the diminished use of violence is reassuring, it’s still worth thinking about how to avoid becoming a victim. Much of the risk comes down to putting too much trust in the wrong people. We are dealing with a very young industry that doesn’t have a lot of established names. So how can you avoid getting hurt by these modern thieves? Here are a few tips:

  • Don’t put all your eggs in one basket.
  • Use common sense when deciding who to do business with. A little background check into the company and its execs never hurt anyone.
  • Don’t put more money into cryptocurrencies than you can spare.


The post Bank robbers 2.0: digital thievery and stolen cryptocoins appeared first on Malwarebytes Labs.

Author: Pieter Arntz

When you shouldn’t trust a trusted root certificate

Root certificates are the cornerstone of authentication and security in software and on the Internet. They’re issued by a certificate authority (CA) and, essentially, verify that the software/website owner is who they say they are. We have talked about certificates in general before, but a recent event triggered our desire to further explain the ties between malware and certificates.

In a recent article by RSA FirstWatch, we learned that a popular USB audio driver had silently installed a root certificate. This self-signed root certificate was installed in the Trusted Root Certification Authorities store. Under normal circumstances, you would have to agree to “Always trust software from {this publisher}” before a certificate would be installed there.

However, the audio driver skipped this step of prompting for approval (hence the “silent” install). The silent install was designed to accommodate Windows XP users, but it had the same effect in every Windows operating system from XP up to Windows 10, as the installer was exactly the same for every Windows version. Ironically enough, the certificate wasn’t even needed to use the software. It was introduced only to complete the installation on Windows XP seamlessly.

Why is this a bad thing?

Root certificates can be installed for purposes such as timestamping, server authentication, code-signing, and so on. But this particular driver installed a certificate valid for “All” purposes. So any system with these drivers installed, from any of the vendors, will trust any certificate issued by the same CA—for “All” purposes. Under normal circumstances, only a certificate issued by Microsoft would have “All” in a root certificate’s “Intended Purposes” field.

Having a certificate in the Trusted Root Certification Authorities store for “All” intended purposes on a Windows system gives anyone who holds the private key associated with that certificate the ability to completely own the system on which it is installed. The impact is the same as for any Certificate Authority (CA) behind certificates installed on Windows systems.


One exception: in some instances, large companies may choose to install such a certificate deliberately, with the intent to perform SSL decryption at the perimeter for outbound traffic. Still, silently adding a root certificate not only breaks the hierarchical trust model of Windows, it also gives any owner of the private key that goes with that certificate a lot of options to perform actions on a computer with that certificate installed.

How can they be abused?

An attacker who gets hold of the private key that belongs to a root certificate can generate certificates for his own purposes and sign them with that key. Any Windows system that has the root certificate in its Trusted Root Certification Authorities store will then trust any certificate signed with the same private key for “All” purposes. This applies to software applications, websites, or even email. Anything from a man-in-the-middle (MitM) attack to installing malware is possible. And as if this weren’t bad enough, security researchers at the University of Maryland found that simply copying an authenticode signature from a legitimate file to a known malware sample can cause antivirus products to stop detecting it, even though doing so results in an invalid signature.

Methods of abuse

Criminals can abuse certificates in several ways.

Of all these methods, it stands to reason that stolen certificates, especially those intended for “All” purposes, are the most dangerous. So introducing one of these just because you want to install a driver or to enable easier customer support, and not letting the user know, is inadvisable at best.

If you think that the number of certificates in use by malware authors can’t be that large, have a look at the suspects that have been reported at the CCSS forum.

How can I remove certificates I don’t need or trust?

A list of known signing certificates that are being abused by threat actors has been made available. As explained earlier, using signing certificates gives criminals a lot of options to bypass system protection mechanisms, which is why you might want to remove those from your machine. There is also a test site where you can check whether any of the software programs that are open to an MitM attack are active on your system.

To delete a trusted root certificate:

  • Open the certificates snap-in for a user, computer, or service. You can do this by running certmgr.msc from your Run/Search programs box or from a command prompt.
  • Select Trusted Root Certification Authorities.
  • Under this selection, open the Certificates store.
  • In the details pane on the right-hand side, select the line of the certificate that you want to delete. (To select multiple certificates, hold down control and click each certificate.)
  • Right-click the selection you made and, in the action menu, click Delete.
  • Confirm your choice by clicking Yes if you are completely sure that you want to permanently delete the certificate.

Please note that user certificates can be managed by the user or by an administrator. Certificates issued to a computer or service can only be managed by an administrator or user who has been given the appropriate permissions.

You might want to back up the certificate by exporting it before you delete it. For the procedure to export a certificate, see export a certificate.

If you want to look at the Thumbprint of the certificates (a hash that uniquely identifies a certificate, not to be confused with its serial number), you can use this PowerShell command to list the non-Microsoft certificates in the Trusted Root Certification Authorities:

Get-ChildItem -Path Cert:\CurrentUser\AuthRoot -Recurse | select Thumbprint, FriendlyName, Subject | ConvertTo-Html | Set-Content C:\Users\Public\Desktop\certificates.html

This will create an HTML file on the public desktop that lists the certificates by Thumbprint (in reverse order), where you can look up the Friendly Name and Subject that belong to each Thumbprint.

exported certificates list

For those who like to keep an eye on things, there is a guide by Xavier Mertens for a piece of code that alerts you about changes in the certificate store.


Since root certificates are intended to heighten security, it should be clear to those issuing them that they should be treated as such, and not as something that they can install willy-nilly whenever it suits their needs. The whole point of prompting users is to establish a chain of trust that they should be able to rely on. And in this case, the prompt was bypassed only to enable installation on a no-longer-supported operating system. That both ruins user trust and introduces unnecessary security risk for a rather shallow reason.

The post When you shouldn’t trust a trusted root certificate appeared first on Malwarebytes Labs.

Author: Pieter Arntz

Analyzing malware by API calls

Over the last quarter, we’ve seen an increase in malware using packers, crypters, and protectors—all methods used to obfuscate malicious code from systems or programs attempting to identify it. These packers make it very hard, or next to impossible, to perform static analysis. The growing number of malware authors using these protective packers has triggered an interest in alternative methods of malware analysis.

Looking at API calls, or commands in the code that tell systems to perform certain operations, is one of those methods. Rather than trying to reverse engineer a protectively packed file, we use a dynamic analysis based on the performed API calls to figure out what a certain file might be designed to do. We can determine whether a file may be malicious by its API calls, some of which are typical for certain types of malware. For example, a typical downloader API is URLDownloadToFile. The API GetWindowDC is typical for the screen-grabbers we sometimes see in spyware and keyloggers.
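As a minimal illustration of spotting telltale APIs, the sketch below scans a raw file image for suspicious API-name strings. The API list, the "reasons," and the sample bytes are illustrative assumptions, and a string scan is no substitute for sandboxed dynamic analysis:

```python
# Minimal sketch: flag a binary by suspicious API-name strings.
# The API names and reasons here are illustrative, not a real ruleset.
SUSPICIOUS_APIS = {
    b"URLDownloadToFile": "downloader behavior",
    b"GetWindowDC": "possible screen grabbing",
    b"WinHttpOpen": "HTTP capability",
    b"SetWindowsHookEx": "possible keylogging",
}

def scan_for_apis(data: bytes) -> dict:
    """Return the suspicious APIs found in a raw file image."""
    return {name.decode(): reason
            for name, reason in SUSPICIOUS_APIS.items()
            if name in data}

# Hypothetical file contents containing two of the strings above.
sample = b"\x00MZ...GetWindowDC\x00URLDownloadToFileA\x00"
print(scan_for_apis(sample))
# prints: {'URLDownloadToFile': 'downloader behavior', 'GetWindowDC': 'possible screen grabbing'}
```

Packers defeat exactly this kind of static string matching, which is why the article falls back on observing the calls at runtime in a sandbox.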

Let’s look at an example to clarify how this might be helpful.

Trojan example

Our example is a well-known Trojan called 1.exe, with SHA256 0213b36ee85a301b88c26e180f821104d5371410ab4390803eaa39fac1553c4c.

detection packed

The file is packed (with VMProtect), so my disassembler doesn’t really know where to start. Since I’m no expert in reverse engineering, I will try to figure out what the file does by looking at the API calls performed during the sandboxed execution of the file.

This is the list of calls that we got from the sandbox (Deepviz):

API list

For starters, let’s have a look at what all these functions do. Here’s what I found out about them on Microsoft:

GetModuleHandle function

Retrieves a module handle for the specified module. The module must have been loaded by the calling process. GetModuleHandleA (ANSI)

GetProcAddress function

Retrieves the address of an exported function or variable from the specified dynamic-link library (DLL).


StrToInt function

Converts a string to an integer.

CreateStreamOnHGlobal function

This function creates a stream object that uses an HGLOBAL memory handle to store the stream contents.  This object is the OLE-provided implementation of the IStream interface.

StrStr function

Finds the first occurrence of a substring within a string. The comparison is case-sensitive. StrStrA (ANSI)

wsprintf function

Writes formatted data to the specified buffer. Any arguments are converted and copied to the output buffer according to the corresponding format specification in the format string. wsprintfA (ANSI)

WinHttpOpen function

This function initializes, for an application, the use of WinHTTP functions and returns a WinHTTP-session handle.

GetModuleFileName function

Retrieves the fully qualified path for the file that contains the specified module. The module must have been loaded by the current process. GetModuleFileNameW (Unicode)

LoadLibrary function

Loads the specified module into the address space of the calling process. The specified module may cause other modules to be loaded. LoadLibraryA (ANSI)

LocalAlloc function

Allocates the specified number of bytes from the heap.

LocalFree function

Frees the specified local memory object and invalidates its handle.

GetModuleFileName function

Retrieves the fully qualified path for the file that contains the specified module. The module must have been loaded by the current process. GetModuleFileNameA (ANSI)

ExitProcess function

Ends the calling process and all its threads.

The key malicious indicators

Not all of the functions shown above are indicative of the nature of an executable. But the presence of WinHttpOpen tells us that we can expect HTTP network activity.

Following up on this function, we used URL Revealer by Kahu Security to check the destination of the traffic and found two URLs that were contacted over and over again.



This POST is what the VirusTotal API expects when you want to submit a file for a scan.

The link to an old and abandoned Twitter handle was a bigger mystery, until I decided to use the Advanced Search in Twitter and found this Tweet that must have been removed later on.

removed tweet

Decoded from Base64, this Tweet contained a URL. Unfortunately, that site no longer resolves, but it used to be an underground board where website exploits were offered along with website hacking services, around the same time the aforementioned Twitter profile was active.

This was a dead end on trying to figure out what the malware was trying to GET. So we tried another approach by figuring out what it was trying to scan at VirusTotal and used Wireshark to take a look at the packets.

VT packet

In the packet, you can see the API key and the filename that were used to scan a file at the VirusTotal site. So, reconstructing from the API calls and the packets, we learned that the malware was submitting copies of itself to VirusTotal, which is typical behavior for the Vflooder family of Trojans. Vflooder is a special kind of Flooder Trojan. Flooder Trojans are designed to send a lot of information to a specific target to disrupt its normal operations. But I doubt this one was ever able to make a dent in the VirusTotal infrastructure, or Twitter’s for that matter.

The Vflooder Trojan is just a small and relatively simple example of analyzing API calls. It’s not always that easy: We’ve even seen malware that added redundant/useless API calls just to obfuscate the flow. But analyzing API calls is a method to consider for detecting malware trying to hide itself. Just keep in mind that the bad guys are aware of it too.

The post Analyzing malware by API calls appeared first on Malwarebytes Labs.

Author: Pieter Arntz

Explainer: Smart contracts, Ethereum, ICO

Investing in cryptocurrency-funded projects is as hot as ever, and the almost complete absence of success stories hasn’t seemed to dim investors’ hopes one bit. In 2017 — with more than a full quarter to go — various project ICOs (initial coin offerings) have already raised about $1.7 billion.

There aren’t too many successful projects to speak of, but investors remain optimistic, and cryptocurrencies like Ethereum may help explain why.

TOP-5 cryptocurrency capitalization and prices. Source

As you can see from the capitalization table above, Ethereum is a distant second to Bitcoin but miles ahead of other altcoins. In June 2017, the upstart almost overtook the mighty Bitcoin. What makes Ethereum so special, and why is it at the heart of the vast majority of ICOs this year?

The idea of Ethereum

From a user’s point of view, Bitcoin is nothing more than a payment system: Users transfer money to one another, and that’s about it. Ethereum goes beyond the simple payment-system framework, giving users the ability to write wallet-based programs.

These programs can receive money from wallets automatically, decide how much to send and where, and so forth — with one important condition: Each program operates the same for all users. The programs act according to known principles that are predictable, equal, transparent, and unmodifiable. Ethereum wallets come in two types: those managed by people and those run autonomously by programs.

The programs, also known as smart contracts, are written to the blockchain. Thus, a contract is stored forever, all users have a copy of it, and it is executed equally for every network participant that deals with it.

This innovation has significantly expanded blockchain currencies’ scope of application.

Examples of smart contracts

What programs can be written? Any you like. Take, for example, a financial pyramid. A pyramid’s smart contract might use the following rules:

  1. If sum x arrives from the address of wallet A, log it.
  2. If after that, sum y > 2x arrives from address B, send 2x money to address A, and log the debt to participant B.

And so on for each user and transaction that follows. Optionally, a rule could send 5% of all incoming money to the author of the smart contract.
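Those rules can be simulated in ordinary Python to make the mechanics concrete. This is a sketch of the logic only, not Solidity or a real Ethereum contract, and details such as deducting the author's fee up front are my own assumptions:

```python
# A plain-Python simulation of the pyramid rules above. Sketch only:
# the fee handling and payout order are assumptions for illustration.
class PyramidContract:
    def __init__(self, author: str, fee: float = 0.05):
        self.author = author
        self.fee = fee
        self.queue = []      # (address, amount) waiting to be doubled
        self.payouts = {}    # address -> total received

    def deposit(self, sender: str, amount: float):
        # Optional rule: 5% of all incoming money goes to the author.
        self.payouts.setdefault(self.author, 0.0)
        self.payouts[self.author] += amount * self.fee
        amount *= 1 - self.fee
        # Pay earlier participants double their stake while funds last.
        while self.queue and amount >= 2 * self.queue[0][1]:
            addr, staked = self.queue.pop(0)
            self.payouts[addr] = self.payouts.get(addr, 0.0) + 2 * staked
            amount -= 2 * staked
        self.queue.append((sender, amount))  # log the remaining debt

contract = PyramidContract(author="0xAUTHOR")
contract.deposit("A", 1.0)
contract.deposit("B", 2.5)   # enough to pay A double
print(contract.payouts)      # A has been paid twice the stake; B waits
```

On a real blockchain, every participant could read this code and verify that it pays out exactly as written, which is the point of the transparency argument below.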

Or how about an auction?

  1. If the auction is not over, log the addresses and bidding amounts of each participant.
  2. When the auction is over, select the maximum bid, announce the winner, and return all other bids.

Endless combinations of other entities and applications are possible: wallets with multiple owners, financial instruments, self-placing bets, polls, lotteries, games, casinos, notaries, and more.

Because of the blockchain, everyone can be sure that no one is cheating; everyone sees the program’s code and can track it working exactly as written. It will not make off with anyone’s money or go bankrupt (assuming, of course, there are no bugs or gremlins in the code).

Limitations of smart contracts

The smart contracts do, however, have significant limitations. Here are some of them:

  1. It’s very difficult to produce random numbers in a blockchain-based program, which affects lotteries.
  2. It’s not easy to hide certain information in a blockchain — for example, auction participants or their bids. Blockchain was designed for transparency, and sometimes that turns into a disadvantage.
  3. If the contract requires information that is missing in the blockchain (e.g., the current exchange rate of a particular currency), then you have to trust a person who is adding this information to the blockchain.
  4. To interact with the contracts, users need ethers — the internal currency of Ethereum. Users without funded wallets cannot take part in polls or any other Ethereum-based activities.
  5. Smart contracts work slowly. About 3 to 5 transactions per second can be performed worldwide — in total, not per participant.
  6. Any error in a smart contract stays forever. The only way to fix an error is to switch to another smart contract. However, this option must be included in the initial program, which is rarely the case.
  7. Smart contracts can freeze or fail to work as expected because program code can be difficult to understand, so writers may make critical errors — and users may not be able to tell what the code will actually do.

A simple Ethereum smart contract. Can you see the error that makes it possible to steal all of the money? Neither would most people.

Ultimately, much depends on the capabilities of smart contracts’ authors.

The main use of smart contracts

Pyramids, polls, casinos, lotteries — what’s not to like? But what smart contracts have really facilitated is IPO-style fundraising.

First of all, a smart contract lets you automate accounting. The contract logs how much money comes in and from whom, computes and distributes “shares,” and enables each participant to transfer and sell those shares.

Second, there’s no need to mess around with e-mail addresses, credit cards, card verification, investor authorization, and the like.

Finally, everyone can see how many shares were issued and how they were distributed among participants. The blockchain protects participants from project owners issuing additional shares secretly or someone selling one share multiple times to different people.
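A minimal sketch of that automated bookkeeping might look like the following. The 1:1 ether-to-share rate and the class layout are illustrative assumptions, not code from any real ICO:

```python
# Sketch of the bookkeeping an ICO smart contract automates: log
# contributions, mint shares 1:1 with ether sent (an assumed rate),
# and let participants transfer shares. Everyone can audit the ledger.
class TokenSale:
    def __init__(self):
        self.shares = {}

    def contribute(self, addr: str, ethers: float):
        # Incoming money is logged and shares are computed automatically.
        self.shares[addr] = self.shares.get(addr, 0.0) + ethers

    def transfer(self, src: str, dst: str, amount: float):
        # Nobody can sell the same share twice: the ledger enforces it.
        if self.shares.get(src, 0.0) < amount:
            raise ValueError("insufficient shares")
        self.shares[src] -= amount
        self.shares[dst] = self.shares.get(dst, 0.0) + amount

    def total_supply(self) -> float:
        # Visible to all: no secret issuance of additional shares.
        return sum(self.shares.values())

sale = TokenSale()
sale.contribute("alice", 10.0)
sale.contribute("bob", 5.0)
sale.transfer("alice", "carol", 4.0)
print(sale.shares, sale.total_supply())
# prints: {'alice': 6.0, 'bob': 5.0, 'carol': 4.0} 15.0
```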

ICO — initial coin offering

As of January 1, 2017, one ether was worth $8, and the value peaked (for now, at least) at $400 by June. Gains were thanks to the large number of ICOs held as the initial offering of shares in startups. The desire to speculate in projects stimulated demand for the cryptocurrency — in this case, Ethereum. And such projects are now legion.

Ethereum price chart. Source

The typical cryptostartup follows this pattern:

  1. You have an idea, typically something cryptocurrency or blockchain related.
  2. You need money to get things off the ground.
  3. You announce to the public that you’re accepting ethers in exchange for shares (or tokens or whatever) under a smart contract.
  4. You advertise your project and raise the required sum.

The amount raised is usually $10 to $20 million, and it takes somewhere from a few minutes to a few days to collect. Typically, the ICO is limited in terms of time or amount raised, which causes a feeding frenzy.

Sometimes the frenzy reaches comic proportions. For example, one project ICO raised $35 million in 24 seconds. To get their fingers in the pie, project diehards paid up to $6,600 commission per transaction; the high demand for Ethereum combined with its low throughput hiked commission fees.

Crypto-investment payback

What happens next with tokens issued to investors depends on the project. Someone might promise to pay dividends on future profits; someone else might plan to accept the tokens as payment for project-related services. Another entrepreneur might promise nothing at all, like the Useless Ethereum Token‘s creator did, explicitly declaring that nobody gets anything in return and raising about $100,000 nevertheless.

Generally speaking, the tokens find their way onto the crypto–stock exchange, where they are traded. Those who missed out on the ICO can buy them there on the exchange, usually at a markup. Those who took part in the ICO in hopes of reselling at a profit can offload them on the exchange, where the regular economics of supply and demand apply (despite there being no product). One difference, however, is that no regulators exist in the crypto-industry, so shady means of inflating prices run rampant.

As I said at the beginning, there seems to be no more reason to jump into the ICO trend than any other get-rich-quick scheme — but now you understand some of the tech wizardry behind the excitement.

Author: Alexey Malanov

All the world’s a context: Targeted ads go offline

Targeted ads are all over the Internet nowadays. One minute you’re searching for information about hair loss, the next you’re seeing offers for a remedy. Click the Like button on an article about genetic tests and you’ll see discounts for that kind of test. Advertisements on the Internet reflect the collected knowledge of everything the target — you! — have liked, searched for, and seen online, and this “diary” of your online life paints a pretty candid picture of you, too.

At this point, seeing ads based on your online history is hardly a surprise — at least, not while you’re online. So here’s something new to get used to: targeted ads on the street, in stores, and in our cars.

You’re easy to target

First of all, you should know that you are being counted. The owners of stores and shopping centers want to know how many people walk past a store and how many go inside. Those who pass by are counted by cameras, motion sensors, and floor-pressure sensors. People who stand in check-out lines are counted separately to help stores optimize the number of employees.

Second, your movements are tracked. Your own smartphone is used as a radio beacon. By measuring the signal strength to several access points, marketers can pinpoint your location to within several feet (the best algorithms are accurate to a few dozen inches). In online marketing parlance, this is called the customer journey — important data for marketing purposes. Offline, the term applies literally.
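The positioning step can be sketched in two parts: convert each access point's received signal strength (RSSI) to a distance with a log-distance path-loss model, then trilaterate. The transmit power, path-loss exponent, and the noiseless readings below are assumed values; real deployments must cope with far messier data:

```python
import math

def rssi_to_distance(rssi: float, tx_power: float = -40.0, n: float = 2.0) -> float:
    # Log-distance path-loss model: tx_power is the assumed RSSI at 1 m,
    # n the assumed path-loss exponent. Both values are illustrative.
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(aps, distances):
    """Solve for (x, y) from three access points and their distances."""
    (x1, y1), (x2, y2), (x3, y3) = aps
    d1, d2, d3 = distances
    # Subtracting the first circle equation from the others linearizes
    # the system, leaving two linear equations in x and y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three hypothetical APs (meters) and a device actually at (3, 4).
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_dists = [5.0, math.sqrt(65), math.sqrt(45)]
readings = [-40.0 - 20 * math.log10(d) for d in true_dists]  # simulated RSSI
est = trilaterate(aps, [rssi_to_distance(r) for r in readings])
print(est)  # recovers (3.0, 4.0) from the simulated signal strengths
```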

Third, salespeople are trying to find out as much as possible about you. General information about a person and their habits and interests can be acquired by studying their purchase history and social-network profile data (we’ll talk later in this article about how they access the profile). Modern image-recognition systems can guess the gender and approximate age of a visitor simply by taking a look at them through a camera lens.

A visitor’s immediate actions can be registered as well. For example, Intel’s AIM Suite technology is capable of determining the direction of a person’s gaze and calculating the number and duration of instances of a person looking at offline (physical) ads.

How marketers track and sell

One of the most popular and effective methods for tracking potential and existing customers uses wireless (Wi-Fi) networks.

Its technical side is rather intriguing. When searching for available networks, a Wi-Fi gadget broadcasts its own MAC address, which is a unique combination of characters assigned by the manufacturer. (If Wi-Fi is enabled on the device, that search is continuous.) By picking up this MAC address, someone can track the movement of a specific person and find out, for example, which shops they enter and how often they visit the shopping center.
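A rough sketch of what a tracking backend might do with such sightings follows; the zone names and data layout here are invented for illustration, not taken from any real product.

```python
from collections import defaultdict

def build_journeys(sightings):
    """Group probe-request sightings into per-device movement logs.

    sightings: iterable of (timestamp, mac, zone) tuples, e.g. reports
    from Wi-Fi sensors placed around a shopping center.
    Returns {mac: [(timestamp, zone), ...]} with consecutive duplicate
    zones collapsed, i.e. one entry per "hop" of the customer journey.
    """
    journeys = defaultdict(list)
    for ts, mac, zone in sorted(sightings):
        # Record a hop only when the device moves to a different zone.
        if not journeys[mac] or journeys[mac][-1][1] != zone:
            journeys[mac].append((ts, zone))
    return dict(journeys)
```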

Developers of mobile platforms have made it harder for marketers to use this method. Starting with iOS 8, Apple mobile devices periodically broadcast fake MAC addresses, covering their owners’ tracks. Android 6.0 and above and Windows 10 adopted a similar tactic.
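Incidentally, such fake addresses are easy to recognize: randomized MACs set the “locally administered” bit in the first octet, whereas burned-in vendor addresses leave it clear. A minimal check:

```python
def is_locally_administered(mac: str) -> bool:
    # The second-least-significant bit (0x02) of the first octet marks a
    # locally administered address; MAC-randomizing platforms set it.
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)
```

So a tracking system can at least tell real addresses from randomized ones, even if it cannot link a randomized address back to a device.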

Nevertheless, this approach does not guarantee protection. In 2016, researchers from several French universities showed that it’s possible to identify devices using other service data that a device broadcasts when searching for networks, even without a MAC address. In the course of the experiment, the researchers managed to track almost half of the tested devices.

Here is how the owner of a mobile device can be identified without a MAC address. When a smartphone is searching for a Wi-Fi hotspot, it also broadcasts the SSIDs of already known networks (any networks to which it has been connected). This list alone may tie the phone to a specific person. For example, a gadget owner’s home Wi-Fi network name can be unique if they changed the default name set up by the router manufacturer or the service provider.
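The idea can be sketched simply: the set of SSIDs a phone probes for acts as a fingerprint, regardless of what MAC address it currently uses. The hashing scheme below is a simplification for illustration, not the researchers’ actual method.

```python
import hashlib

def ssid_fingerprint(probed_ssids):
    """Derive a stable fingerprint from the set of probed network names.

    The order of probes varies, so the SSIDs are deduplicated and
    sorted before hashing; the same phone yields the same fingerprint.
    """
    canonical = "\n".join(sorted(set(probed_ssids)))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
```

Two capture sessions that see the same list of known networks will produce the same fingerprint, letting an observer link sightings across visits.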

As the researchers in the project mentioned above noted, modern devices typically avoid this mechanism for privacy reasons, but the technique still works on older devices. Moreover, broadcasting an SSID is the only way to connect to a hidden network. As a result, cautious users who have hidden their home network names are actually easier to track in public areas than other people.

None of those tricks are needed, though, if the device actually connects to a network. In that case, the gadget sends its real MAC address to the access point.

You might point out that a MAC address makes a distinction between one device and another, not between users. That’s true. However, when attempting to use free Internet, guests may have to authorize themselves on the network, for example by entering credentials for a social-media account.

Is free Wi-Fi convenient? It is. Does anyone read the Terms of Service to find out how their social media information is being used? Some say such people exist, but have you ever met one? The upshot is, store owners typically receive user profile information, and this data is quite valuable for targeted advertising and other marketing tricks.

What happens to your data?

As an example, let’s take a look at the End User Privacy Policy of Purple, a Wi-Fi data collection and analytics outfit.

The list of data collected includes:

  • Information from a social media account and the credentials (for example, an e-mail address) used to connect to the network;
  • A history of websites visited over the wireless network;
  • The technical specifications of the visitor’s smartphone, including IMEI number and phone number;
  • The location of the visitor within the shopping center.

Who can use the data collected from visitors, and how?

  • The owners of the establishment, who can show visitors offers for goods and services as well as study “how the venue is being used and by whom.”
  • Advertisers, who can receive aggregated information about the establishment’s visitors for “consumer analysis.”
  • Third parties, who, again, can use it for targeted advertising.

In addition, the collected information is bound to change hands if the operator sells the entire business or some of its assets.

In the case of Purple, after 24 months the operator anonymizes collected information by deleting all of the information that identifies a specific person. It is unclear, however, when this period actually starts. If it begins at the time of the last visit, then, by connecting to the network every so often, you may perpetually extend the life of your data on the operator’s servers.

You can formally opt out of collection of such data. The aforementioned Purple, for example, allows users to add the MAC addresses of their devices to the list of MAC addresses that are ignored by the system. In practice, however, it is somewhat hard to figure out every similar company that operates in every place you visit or plan to visit. You’ll do better by turning off Wi-Fi.

The quiescent enemy

Another surveillance option, judging by the latest news, involves ultrasound, and it’s starting to gain popularity among advertisers. The essence of the technology is the placement of ultrasound beacons. Humans can’t hear the signal, but a smartphone’s microphone can pick it up and deliver it to an app on the phone.

Ultrasound beacons can be physical — placed in shopping centers to track customers’ movements — or virtual. Ultrasound signals added to the audio track of a television program can be used to measure its audience, for example, and ultrasound audio files played back by a website can register and track visitors across various platforms.
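Detecting such a beacon on the phone side is computationally cheap: a single-frequency Goertzel filter is enough to tell whether a near-ultrasonic tone is present in a microphone buffer. The 18.5 kHz beacon frequency and detection threshold below are illustrative assumptions, not any vendor’s actual parameters.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Relative power of a single frequency bin (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def beacon_present(samples, sample_rate=44100, beacon_hz=18500,
                   threshold=1e3):
    # A strong response in the near-ultrasonic bin suggests a beacon.
    return goertzel_power(samples, sample_rate, beacon_hz) > threshold
```

Because the filter examines only one bin, it can run continuously in an app without noticeable battery cost — which is exactly what makes the technique attractive to trackers.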

These methods may sound like part of a distant future — or at least like something still in an experimental phase. Alas, it is not so. Ultrasound tracking has been in commercial use for quite some time; we are just unable to hear it.

In April 2017, researchers at the Braunschweig University of Technology revealed that they had discovered more than 200 applications that track users with the help of ultrasound. They also mentioned that as many as three platforms in commercial use have been leveraging this technology: Shopkick, Lisnr, and SilverPush.

Moreover, the researchers also found operational Shopkick ultrasound beacons in several European shopping centers.

Smile: You’re on hidden camera

In November 2016, a smart billboard targeting drivers of certain cars was put into operation in Moscow. The experimental billboard, installed by Synaps Labs, displays an advertisement for a new Jaguar SUV whenever a BMW or Volvo crossover approaches.

To drive the billboard’s operation, a machine-vision system compares the image of an approaching vehicle to the pictures in its database and, taking into account additional factors such as time of day and weather conditions, shows the appropriate advertisement.

Billboards that address the drivers of passing cars first appeared in Australia two years ago: “Hey, white Evoque! It’s never too late to cross over. This is the new Lexus.”

Another method (also originating in Australia) was used on Porsche billboards, which address not the drivers of competitors’ cars but Porsche drivers themselves. The advertising slogan flatters them: “It’s so easy to pick you out in a crowd.”

Tracking vehicle movements is even easier than tracking humans thanks to license plates. Fortunately, advertisers have not devised a way to use them yet.

Make yourself invisible

Online, you can use tools that block tracking and advertisements (for example, the Private Browsing module, which blocks data collection, and the Anti-Banner module in Kaspersky Internet Security 2017, which, you guessed it, gets rid of banner ads). But is there a way to hide from street advertising — which soon enough may address us by name?

If the idea of having your personal data collected over Wi-Fi troubles you, then try taking these steps:

1. Turn off Wi-Fi on your devices when you are not using it.
2. Do not connect to free Wi-Fi hotspots if you can manage without them.
3. Do not use social media accounts for authorization.
4. When connecting to public networks, use a VPN, which protects your connection from prying eyes.
5. Check which applications on your smartphone have access to the microphone.

Still, when targeted outdoor advertising starts to use biometric identification, for example, based on face recognition, protection may require taking more radical measures.

Go to Source
Author: Igor Kuksov
