macOS Zero-Day Flaw Lets Hackers Bypass Security Using Invisible Mouse-Clicks

Your Mac computer running Apple’s latest High Sierra operating system can be hacked by tweaking just two lines of code, a researcher demonstrated at the DEF CON security conference on Sunday.

Patrick Wardle, an ex-NSA hacker and now chief research officer of Digita Security, uncovered a critical zero-day vulnerability in macOS that could allow a malicious application installed on the targeted system to virtually “click” objects without any user interaction or consent.

To convey how dangerous this can be, Wardle explained: “Via a single click, countless security mechanisms may be completely bypassed. Run untrusted app? Click…allowed. Authorize keychain access? Click…allowed. Load 3rd-party kernel extension? Click…allowed. Authorize outgoing network connection? Click…allowed.”

Wardle described his research into “synthetic” interactions with a user interface (UI) as “The Mouse is Mightier than the Sword,” showcasing an attack that’s capable of ‘synthetic clicks’—programmatic and invisible mouse clicks that are generated by a software program rather than a human.

macOS itself offers synthetic clicks as an accessibility feature, letting people with disabilities interact with the system interface in non-traditional ways, but Apple has imposed limitations to block malware from abusing these programmed clicks.


Wardle accidentally discovered that High Sierra incorrectly interprets two consecutive synthetic mouse “down” events as a legitimate click, allowing attackers to programmatically interact with security warnings that ask users to choose between “allow” and “deny,” and thereby access sensitive data or features.

“The user interface is that single point of failure,” says Wardle. “If you have a way to synthetically interact with these alerts, you have a very powerful and generic way to bypass all these security mechanisms.”

Although Wardle has not yet published technical details of the flaw, he says the vulnerability can potentially be exploited to dump all passwords from the keychain or load malicious kernel extensions by virtually clicking “allow” on the security prompt, gaining full control of a target machine.

Wardle said he stumbled upon the loophole while copying and pasting code, and that just two lines are enough to completely break this security mechanism.

Unlike with his earlier findings, Wardle did not report his latest research to Apple, choosing instead to publicly reveal details of the zero-day bug at the DEF CON hacker conference.

“Of course OS vendors such as Apple are keenly aware of this ‘attack’ vector, and thus strive to design their UI in a manner that is resistant against synthetic events. Unfortunately, they failed,” says Wardle.

However, Apple’s next version of macOS, Mojave, has already mitigated the threat by blocking all synthetic events, though this reduces the scope of accessibility features for applications that legitimately use them.

Go to Source

Flaws in Pre-Installed Apps Expose Millions of Android Devices to Hackers

Bought a new Android phone? What if I told you your brand-new smartphone can be hacked remotely?

Nearly all Android phones come with useless applications pre-installed by manufacturers or carriers, usually called bloatware, and there’s nothing you can do if any of them has a backdoor built in, even if you’re careful about avoiding sketchy apps.

That’s exactly what security researchers from mobile security firm Kryptowire demonstrated at the DEF CON security conference on Friday.

Researchers disclosed details of 47 different vulnerabilities deep inside the firmware and default apps (pre-installed and mostly non-removable) of 25 Android handsets that could allow hackers to spy on users and factory reset their devices, putting millions of Android devices at risk of hacking.

At least 11 of the vulnerable smartphones are made by Asus, ZTE, LG, and Essential, and are distributed by US carriers such as Verizon and AT&T.

Other affected brands include Vivo, Sony, Nokia, and Oppo, as well as many smaller manufacturers such as Sky, Leagoo, Plum, Orbic, MXQ, Doogee, Coolpad, and Alcatel.

Some of the vulnerabilities discovered by researchers could even allow hackers to execute arbitrary commands as the system user, wipe all user data from a device, lock users out of their devices, access the device’s microphone and other functions, access all of their data (including emails and messages), and read, modify, and send text messages, all without the users’ knowledge.

“All of these are vulnerabilities that are prepositioned. They come as you get the phone out the box,” Kryptowire CEO Angelos Stavrou said in a statement. “That’s important because consumers think they’re only exposed if they download something that’s bad.”

For example, vulnerabilities in the Asus ZenFone V Live could allow an entire system takeover, letting attackers take screenshots, record the user’s screen, make phone calls, spy on text messages, and more.

Kryptowire, whose research was funded by the U.S. Department of Homeland Security, explained that these vulnerabilities stem from the open nature of the Android operating system, which allows third parties like device manufacturers and carriers to modify the code and create completely different versions of Android.

Kryptowire is the same security firm that, in late 2016, uncovered a pre-installed backdoor in more than 700 million Android smartphones that was surreptitiously sending all text messages, call logs, contact lists, location history, and app data to China every 72 hours.

Kryptowire responsibly reported the vulnerabilities to Google and the affected Android partners; some have already patched the issues, while others are still working on fixes.

However, it should be noted that since the Android operating system itself is not vulnerable to any of the disclosed issues, Google can’t do much about this, as it has no control over the third-party apps pre-installed by manufacturers and carriers.


FBI Warns of ‘Unlimited’ ATM Cashout Blitz

The Federal Bureau of Investigation (FBI) is warning banks that cybercriminals are preparing to carry out a highly choreographed, global fraud scheme known as an “ATM cash-out,” in which crooks hack a bank or payment card processor and use cloned cards at cash machines around the world to fraudulently withdraw millions of dollars in just a few hours.

“The FBI has obtained unspecified reporting indicating cyber criminals are planning to conduct a global Automated Teller Machine (ATM) cash-out scheme in the coming days, likely associated with an unknown card issuer breach and commonly referred to as an ‘unlimited operation’,” reads a confidential alert the FBI shared with banks privately on Friday.

The FBI said unlimited operations compromise a financial institution or payment card processor with malware to access bank customer card information and exploit network access, enabling large scale theft of funds from ATMs.

“Historic compromises have included small-to-medium size financial institutions, likely due to less robust implementation of cyber security controls, budgets, or third-party vendor vulnerabilities,” the alert continues. “The FBI expects the ubiquity of this activity to continue or possibly increase in the near future.”

Organized cybercrime gangs that coordinate unlimited attacks typically do so by hacking or phishing their way into a bank or payment card processor. Just prior to executing on ATM cashouts, the intruders will remove many fraud controls at the financial institution, such as maximum ATM withdrawal amounts and any limits on the number of customer ATM transactions daily.

The perpetrators also alter account balances and security measures to make an unlimited amount of money available at the time of the transactions, allowing for large amounts of cash to be quickly removed from the ATM.

“The cyber criminals typically create fraudulent copies of legitimate cards by sending stolen card data to co-conspirators who imprint the data on reusable magnetic strip cards, such as gift cards purchased at retail stores,” the FBI warned. “At a pre-determined time, the co-conspirators withdraw account funds from ATMs using these cards.”

Virtually all ATM cashout operations are launched on weekends, often just after financial institutions begin closing for business on Saturday. Last month, KrebsOnSecurity broke a story about an apparent unlimited operation used to extract a total of $2.4 million from accounts at the National Bank of Blacksburg in two separate ATM cashouts between May 2016 and January 2017.

In both cases, the attackers managed to phish someone working at the Blacksburg, Virginia-based small bank. From there, the intruders compromised systems the bank used to manage credits and debits to customer accounts.

The 2016 unlimited operation against National Bank began Saturday, May 28, 2016 and continued through the following Monday. That particular Monday was Memorial Day, a federal holiday in the United States, meaning bank branches were closed for more than two days after the heist began. All told, the attackers managed to siphon almost $570,000 in the 2016 attack.

The Blacksburg bank hackers struck again on Saturday, January 7, 2017, and by Monday, January 9, had succeeded in withdrawing almost $2 million in another unlimited ATM cashout operation.

The FBI is urging banks to review how they’re handling security, such as implementing strong password requirements and two-factor authentication using a physical or digital token when possible for local administrators and business critical roles.

Other tips in the FBI advisory suggested that banks:

-Implement separation of duties or dual authentication procedures for account balance or withdrawal increases above a specified threshold.

-Implement application whitelisting to block the execution of malware.

-Monitor, audit and limit administrator and business critical accounts with the authority to modify the account attributes mentioned above.

-Monitor for the presence of remote network protocols and administrative tools used to pivot back into the network and conduct post-exploitation, such as PowerShell, Cobalt Strike, and TeamViewer.

-Monitor for encrypted traffic (SSL or TLS) traveling over non-standard ports.

-Monitor for network traffic to regions wherein you would not expect to see outbound connections from the financial institution.
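
The last two monitoring tips lend themselves to simple heuristics. As a minimal sketch (the port list and the use of record-header bytes are illustrative choices, not from the FBI advisory), one can flag traffic that looks like a TLS handshake arriving on an unexpected port:

```python
# Hypothetical sketch: flag TLS handshakes observed on non-standard ports.
# A TLS record begins with content type 0x16 (handshake) followed by a
# 0x03 major-version byte; the "standard" port set here is an assumption.

STANDARD_TLS_PORTS = {443, 465, 563, 636, 853, 989, 990, 993, 995, 8443}

def looks_like_tls(payload: bytes) -> bool:
    """Heuristic: TLS handshake record header (type 0x16, version 3.x)."""
    return len(payload) >= 3 and payload[0] == 0x16 and payload[1] == 0x03

def flag_nonstandard_tls(flows):
    """flows: iterable of (dst_port, first_payload_bytes) tuples.
    Returns the ports where TLS appeared outside the expected set."""
    return [port for port, payload in flows
            if looks_like_tls(payload) and port not in STANDARD_TLS_PORTS]

flows = [
    (443, b"\x16\x03\x01\x00\xa5"),   # normal HTTPS, not flagged
    (4444, b"\x16\x03\x03\x00\x50"),  # TLS on an odd port, flagged
    (80, b"GET / HTTP/1.1\r\n"),      # plain HTTP, not flagged
]
print(flag_nonstandard_tls(flows))  # → [4444]
```

In practice this check would sit behind a packet-capture or flow-export pipeline; the heuristic trades false positives for simplicity.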

Author: BrianKrebs

Snapchat Hack — Hacker Leaked Snapchat Source Code On GitHub

The source code of the popular social media app Snapchat recently surfaced online after a hacker leaked it on the Microsoft-owned code repository GitHub.

A GitHub account under the name Khaled Alshehri with the handle i5xx, who claimed to be from Pakistan, created a GitHub repository called Source-Snapchat with a description “Source Code for SnapChat,” publishing the code of what purported to be Snapchat’s iOS app.

The underlying code could potentially expose the company’s extremely confidential information, such as the entire design of the hugely successful messaging app, how it works, and what future features are planned.

Snapchat’s parent company, Snap Inc., responded to the leaked source code by filing a takedown request under the Digital Millennium Copyright Act (DMCA), which led GitHub to remove the repository hosting the code.

Snapchat Hack: GitHub Took Down Repository After DMCA Notice


Though it is not clear precisely what secret information the leaked Snapchat source code contained, the company’s alarm can be seen in the DMCA request (written in all caps), which suggests the contents of the repository were legitimate.

“I AM [private] AT SNAP INC., OWNER OF THE LEAKED SOURCE CODE,” a reply from a Snap employee, whose name is redacted, on the DMCA notice reads.

Upon asking “Please provide a detailed description of the original copyrighted work that has allegedly been infringed. If possible, include a URL to where it is posted online,” the Snap employee responded:



Snap told several online news outlets that an iOS update in May exposed a “small amount” of its iOS source code.

Although the company identified and rectified the mistake immediately, it discovered that some of the exposed source code had been posted online.

Snap did confirm that the code was subsequently removed and that the event neither compromised its application nor had any impact on its community.

Pakistani Hacker Threatens to Re-Upload Snapchat’s Source Code

It appears that the online user behind the source code leak created the Github account with the sole purpose of sharing the Snapchat source code as nothing else was posted on the account before or after the Snapchat leak.

Moreover, posts on Twitter by at least two individuals (one based in Pakistan, another in France) who appear to be behind the i5xx GitHub account suggest that they tried contacting Snapchat about the source code, expecting a bug bounty reward.

When they got no response from the company, the account threatened to re-upload the source code until Snapchat replied.

The Snapchat source code has now been taken down by GitHub after the DMCA request, and will not be restored unless the original publisher comes up with a legal counterclaim proving he/she is the owner of the source code.

However, this does not fully resolve the issue. Since the Snapchat source code is still in the hands of outsiders, they could republish it on other online forums or use it for personal profit.


BIOS Boots What? Finding Evil in Boot Code at Scale!

The second issue is that reverse engineering all boot records is
impractical. Given the job of determining if a single system is
infected with a bootkit, a malware analyst could acquire a disk image
and then reverse engineer the boot bytes to determine if anything
malicious is present in the boot chain. However, this process takes
time and even an army of skilled reverse engineers wouldn’t scale to
the size of modern enterprise networks. To put this in context, the
compromised enterprise network referenced in our ROCKBOOT blog post
had approximately 10,000 hosts. Assuming a minimum of two boot records
per host, a Master Boot Record (MBR) and a Volume Boot Record (VBR),
that is at least 20,000 boot records to analyze! An
initial reaction is probably, “Why not just hash the boot records and
only analyze the unique ones?” One would assume that corporate
networks are mostly homogeneous, particularly with respect to boot
code, yet this is not the case. Using the same network as an example,
the 20,000 boot records reduced to only 6,000 unique records based on
MD5 hash. Table 1 demonstrates this using data we’ve collected across
our engagements for various enterprise sizes.

Enterprise Size (# hosts)    Avg # Unique Boot Records (MD5)
100–1,000                    428
1,000–10,000                 4,738
10,000+                      8,717

Table 1 – Unique boot records by MD5 hash

Now, the next thought might be, “Rather than hashing the entire
record, why not implement a custom hashing technique where only
subsections of the boot code are hashed, thus avoiding the dynamic
data portions?” We tried this as well. For example, in the case of
Master Boot Records, we used the bytes at the following two offsets to
calculate a hash:

md5( offset[0:218] + offset[224:440] )

In one network this resulted in approximately 185,000 systems
reducing to around 90 unique MBR hashes. However, this technique had
drawbacks. Most notably, it required accounting for numerous special
cases for applications such as Altiris, SafeBoot, and PGPGuard. This
required small adjustments to the algorithm for each environment,
which in turn required reverse engineering many records to find the
appropriate offsets to hash.
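
The subsection-hash idea can be sketched in a few lines of Python. This is an illustration of the approach, not FireEye's actual implementation; the comment on what the skipped ranges contain reflects the standard MBR layout:

```python
import hashlib

def mbr_subsection_hash(mbr: bytes) -> str:
    """Hash only the code portions of a 512-byte MBR, skipping the
    dynamic fields: bytes 218-223 (disk timestamp) and everything
    from offset 440 onward (disk signature and partition table)."""
    assert len(mbr) >= 440
    return hashlib.md5(mbr[0:218] + mbr[224:440]).hexdigest()

# Two records that differ only in their disk signatures hash identically,
# which is exactly the fold-down behavior described above.
base = bytearray(512)
a, b = bytearray(base), bytearray(base)
a[440:444] = b"\x11\x22\x33\x44"
b[440:444] = b"\xaa\xbb\xcc\xdd"
print(mbr_subsection_hash(bytes(a)) == mbr_subsection_hash(bytes(b)))  # → True
```

Any byte changed inside the hashed code regions still produces a new hash, which is why environment-specific boot managers forced per-environment tweaks to the offsets.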

Ultimately, we concluded that to solve the problem we needed a
solution that provided the following:

  • A reliable collection of
    boot records from systems
  • A behavioral analysis of boot
    records, not just static analysis
  • The ability to analyze
    tens of thousands of boot records in a timely manner

The remainder of this post describes how we solved each of these challenges.

Collect the Bytes

Malicious drivers insert themselves into the disk driver stack so
they can intercept disk I/O as it traverses the stack. They do this to
hide their presence (the real bytes) on disk. To address this attack
vector, we developed a custom kernel driver (henceforth, our “Raw
Read” driver) capable of targeting various altitudes in the disk
driver stack. Using the Raw Read driver, we identify the lowest level
of the stack and read the bytes from that level (Figure 1).

Figure 1: Malicious driver inserts itself
as a filter driver in the stack, raw read driver reads bytes from
lowest level

This allows us to bypass the rest of the driver stack, as well as
any user space hooks. (It is important to note, however, that if the
lowest driver on the I/O stack has an inline code hook an attacker can
still intercept the read requests.) Additionally, we can compare the
bytes read from the lowest level of the driver stack to those read
from user space. Introducing our first indicator of a compromised
boot system: the bytes retrieved from user space don’t match those
retrieved from the lowest level of the disk driver stack.
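
This first indicator is straightforward to operationalize: diff the two views of the same boot record. A minimal sketch (function and variable names are hypothetical):

```python
def compare_reads(userspace: bytes, raw: bytes):
    """Return the offsets where the user-space view of a boot record
    diverges from the bytes read at the bottom of the driver stack.
    Any mismatch indicates something in between is filtering or
    rewriting disk reads."""
    return [i for i, (u, r) in enumerate(zip(userspace, raw)) if u != r]

clean = bytes(16)
tampered = bytearray(clean)
tampered[3] = 0x90            # a rootkit "cleans" one byte for user space
print(compare_reads(bytes(tampered), clean))  # → [3]
```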

Analyze the Bytes

As previously mentioned, reverse engineering and static analysis are
impractical when dealing with hundreds of thousands of boot records.
Automated dynamic analysis is a more practical approach, specifically
through emulating the execution of a boot record. In more technical
terms, we are emulating the real mode instructions of a boot record.

The emulation engine that we chose is the Unicorn project. Unicorn
is based on the QEMU emulator and supports 16-bit real mode emulation.
As boot samples are collected from endpoint machines, they are sent to
the emulation engine where high-level functionality is captured during
emulation. This functionality includes events such as memory access,
disk reads and writes, and other interrupts that execute during emulation.

The Execution Hash

Folding down (aka stacking) duplicate samples is critical to reduce
the time needed on follow-up analysis by a human analyst. An
interesting quality of the boot samples gathered at scale is that
while samples are often functionally identical, the data they use
(e.g. strings or offsets) is often very different. This makes it quite
difficult to generate a hash to identify duplicates, as demonstrated
in Table 1. So how can we solve this problem with emulation? Enter the
“execution hash”. The idea is simple: during emulation, hash the
mnemonic of every assembly instruction that executes (e.g.,
“md5(‘and’ + ‘mov’ + ‘shl’ + ‘or’)”). Figure 2 illustrates this
concept of hashing the assembly instructions as they execute to
ultimately arrive at the “execution hash”.

Figure 2: Execution hash
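
Conceptually, the execution hash is just a digest over the mnemonic stream. The sketch below assumes the mnemonics have already been captured by an emulator hook (in a real pipeline, something like a Unicorn code hook plus a disassembler); the function name is illustrative:

```python
import hashlib

def execution_hash(mnemonics):
    """Hash the stream of instruction mnemonics in execution order,
    ignoring operands, so functionally identical boot code that uses
    different strings or offsets folds down to the same value."""
    h = hashlib.md5()
    for m in mnemonics:
        h.update(m.encode())
    return h.hexdigest()

# Two samples with identical control flow but different data collapse
# to one hash, because operands never enter the digest.
trace_a = ["and", "mov", "shl", "or"]   # e.g. mov ax, 0x7C00
trace_b = ["and", "mov", "shl", "or"]   # e.g. mov ax, 0x8000
print(execution_hash(trace_a) == execution_hash(trace_b))  # → True
```

Because only mnemonics are hashed, this is deliberately lossy: it trades precision for the massive fold-down described next.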

Using this method, the 650,000 unique boot samples we’ve collected
to date can be grouped into a little more than 300 unique execution
hashes. This reduced data set makes it far more manageable to identify
samples for follow-up analysis. Introducing our second indicator of
a compromised boot system: an execution hash that is only found on a
few systems in an enterprise!

Behavioral Analysis

Like all malware, suspicious activity executed by bootkits can vary
widely. To avoid the pitfall of writing detection signatures for
individual malware samples, we focused on identifying behavior that
deviates from normal OS bootstrapping. To enable this analysis, the
series of instructions that execute during emulation are fed into an
analytic engine. Let’s look in more detail at an example of malicious
functionality exhibited by several bootkits that we discovered by
analyzing the results of emulation.

Several malicious bootkits we discovered hooked the interrupt vector
table (IVT) and the BIOS Data Area (BDA) to intercept system
interrupts and data during the boot process. This can provide an
attacker the ability to intercept disk reads and also alter the
maximum memory reported by the system. By hooking these structures,
bootkits can attempt to hide themselves on disk or even in memory.

These hooks can be identified by memory writes to the memory ranges
reserved for the IVT and BDA during the boot process. The IVT
structure is located at the memory range 0000:0000h to 0000:03FCh and
the BDA is located at 0040:0000h. The malware can hook the interrupt
13h handler to inspect and modify disk writes that occur during the
boot process. Additionally, bootkit malware has been observed
modifying the memory size reported by the BIOS Data Area in order to
potentially hide itself in memory.
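
Flagging these behaviors reduces to range checks on the write addresses observed during emulation. A hedged sketch (the BDA length and the label strings here are assumptions, not from the post):

```python
# Real-mode ranges from the post: the IVT occupies linear addresses
# 0x0000-0x03FF (0000:0000h to 0000:03FCh holds the last 4-byte vector)
# and the BIOS Data Area begins at 0x0400 (segment 0040:0000h). The
# BDA length used below is an illustrative assumption.
IVT_RANGE = range(0x0000, 0x0400)
BDA_RANGE = range(0x0400, 0x0500)

def classify_write(linear_addr: int):
    """Label a memory write observed during emulation of boot code."""
    if linear_addr in IVT_RANGE:
        vector = linear_addr // 4          # each vector is 4 bytes
        return f"IVT hook: interrupt 0x{vector:02X} vector modified"
    if linear_addr in BDA_RANGE:
        return "BDA tampering: BIOS Data Area modified"
    return None

# A write to 0x004C lands in the int 13h vector (0x13 * 4 = 0x4C),
# the disk-services interrupt that bootkits commonly hook.
print(classify_write(0x004C))  # → IVT hook: interrupt 0x13 vector modified
```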

This leads us to our final category of indicators of a compromised
boot system: detection of suspicious behaviors such as IVT hooking,
decoding and executing data from disk, suspicious screen output from
the boot code, and modifying files or data on disk.

Do it at Scale

Dynamic analysis gives us a drastic improvement when determining the
behavior of boot records, but it comes at a cost. Unlike static
analysis or hashing, it is orders of magnitude slower. In our cloud
analysis environment, the average time to emulate a single record is
4.83 seconds. Using the compromised enterprise network that contained
ROCKBOOT as an example (approximately 20,000 boot records), it would
take more than 26 hours to dynamically analyze (emulate) the records
serially! In order to provide timely results to our analysts we needed
to easily scale our analysis throughput relative to the amount of
incoming data from our endpoint technologies. To further complicate
the problem, boot record analysis tends to happen in batches, for
example, when our endpoint technology is first deployed to a new enterprise.
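
The serial-versus-parallel arithmetic above is easy to verify, and it makes the case for fan-out:

```python
# Back-of-the-envelope check of the serial-analysis claim.
records = 20_000          # ~2 boot records per host, 10,000 hosts
secs_per_record = 4.83    # average emulation time per record

serial_hours = records * secs_per_record / 3600
print(f"{serial_hours:.1f} hours")  # → 26.8 hours

# With N concurrent workers the wall-clock time divides accordingly;
# 1,000 workers would need under two minutes for the same batch.
print(f"{records * secs_per_record / 1000 / 60:.1f} minutes")  # → 1.6 minutes
```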

With the advent of serverless cloud computing, we had the
opportunity to create an emulation analysis service that scales to
meet this demand – all while remaining cost effective. One of the
advantages of serverless computing versus traditional cloud instances
is that there are no compute costs during inactive periods; the only
cost incurred is storage. Even when our cloud solution receives tens
of thousands of records at the start of a new customer engagement, it
can rapidly scale to meet demand and maintain near real-time detection
of malicious bytes.

The cloud infrastructure we selected for our application is Amazon
Web Services (AWS). Figure 3 provides an overview of the architecture.

Figure 3: Boot record analysis workflow

Our design currently utilizes:

  • API Gateway to provide a RESTful interface.
  • Lambda functions to do validation, emulation, analysis, as
    well as storage and retrieval of results.
  • DynamoDB to track progress of processed boot records through
    the system.
  • S3 to store boot records and emulation reports.

The architecture we created exposes a RESTful API that provides a
handful of endpoints. At a high level the workflow is:

  1. Endpoint agents in
    customer networks automatically collect boot records using FireEye’s
    custom developed Raw Read kernel driver (see “Collect the bytes”
    described earlier) and return the records to FireEye’s Incident
    Response (IR) server.
  2. The IR server submits batches of boot
    records to the AWS-hosted REST interface, and polls the interface
    for batched results.
  3. The IR server provides a UI for
    analysts to view the aggregated results across the enterprise, as
    well as automated notifications when malicious boot records are
    identified.

The REST API endpoints are exposed via AWS’s API Gateway, which then
proxies the incoming requests to a “submission” Lambda. The submission
Lambda validates the incoming data, stores the record (aka boot code)
to S3, and then fans out the incoming requests to “analysis” Lambdas.

The analysis Lambda is where boot record emulation occurs. Because
Lambdas are started on demand, this model allows for an incredibly
high level of parallelization. AWS provides various settings to
control the maximum concurrency for a Lambda function, as well as
memory/CPU allocations and more. Once the analysis is complete, a
report is generated for the boot record and the report is stored in
S3. The reports include the results of emulation and other metadata
extracted from the boot record (e.g., ASCII strings).

As described earlier, the IR server periodically polls the AWS REST
endpoint until processing is complete, at which time the report is downloaded.

Find More Evil in Big Data

Our workflow for identifying malicious boot records is only
effective when we know what malicious indicators to look for, or what
execution hashes to blacklist. But what if a new malicious boot record
(with a unique hash) evades our existing signatures?

For this problem, we leverage our in-house big data platform, which
was integrated into FireEye following the acquisition of X15
Software. By loading the results of hundreds of thousands of
emulations into X15,
our analysts can hunt through the results at scale and identify
anomalous behaviors such as unique screen prints, unusual initial jump
offsets, or patterns in disk reads or writes.

This analysis at scale helps us identify new and interesting samples
to reverse engineer, and ultimately helps us identify new detection
signatures that feed back into our analytic engine.


Within weeks of going live we detected previously unknown
compromised systems in multiple customer environments. We’ve
identified everything from ROCKBOOT and HDRoot bootkits to the
admittedly humorous JackTheRipper,
a bootkit that spreads itself via floppy disk (no joke). Our system
has collected and processed nearly 650,000 unique records to date and
continues to find the evil needles (suspicious and malicious boot
records) in very large haystacks.

In summary, by combining advanced endpoint boot record extraction
with scalable serverless computing and an automated emulation engine,
we can rapidly analyze thousands of records in search of evil. FireEye
is now using this solution in both our Managed Defense and Incident
Response offerings.


Dimiter Andonov, Jamin Becker, Fred House, and Seth Summersett
contributed to this blog post.

Author: Ryan Fisher

White hat, black hat, and the emergence of the gray hat: the true costs of cybercrime

This post was written by Michael Osterman of Osterman Research.

Osterman Research recently completed a major survey on behalf of Malwarebytes to determine the actual cost of cybercrime to businesses. Many studies have focused on the cost of lost reputation, lost future business, and other consequences of cybercrime—and while these are certainly valid considerations—we wanted to understand the direct costs of cybercrime. To do so, we surveyed mid-sized and large organizations on a variety of issues, but focused on three cost components:

  • Security budgets
  • The cost of remediating “major” events, e.g., events like a widespread ransomware infection or major data breach that would be highly disruptive to an organization and might take it offline for some period of time
  • The cost of cybercrime perpetrated by “gray hats”: those employees who dabble in cybercrime without giving up their day job as a security professional

Here’s what we discovered:

Cybercrime isn’t cheap

Organizations of all sizes can expect to spend significant amounts on various cybersecurity-related costs. For example, our research found that an organization of 2,500 employees in the United States can expect to spend nearly $1.9 million per year for cybersecurity-related costs (that’s nearly $760 per employee).

While the costs are lower in most of the other countries that we surveyed, the global average exceeds $1.1 million for a 2,500-employee organization.
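
The per-employee figures follow directly from the survey totals (numbers rounded as reported in this post):

```python
# Per-employee cybersecurity cost from the survey's headline numbers.
us_annual_cost = 1_900_000      # "nearly $1.9 million per year" (US)
global_annual_cost = 1_100_000  # global average for the same size
employees = 2_500

print(f"${us_annual_cost / employees:.0f} per employee")      # → $760 per employee
print(f"${global_annual_cost / employees:.0f} per employee")  # → $440 per employee
```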

Gray hats are a problem

Globally, one in 22 security professionals is perceived by their security-professional peers to be a gray hat, but this figure jumps to one in 13 for organizations based in the United Kingdom. Mid-sized organizations (500 to 999 employees) are squeezed the hardest; this is where the skills shortage, and the allure of becoming a gray hat, may be greatest.

Underscoring the depth of the gray hat problem is the fact that 12 percent of security professionals admit to considering participation in black hat activity, 22 percent have actually been approached about doing so, and 41 percent either know or have known someone who has participated in this activity. This is by no means a rare or isolated problem!

Once more unto the breach

We found that the vast majority of organizations suffered some type of security breach and/or attack during the 12 months preceding the survey. The most common avenue of attack was phishing, but organizations also experienced adware/spyware, ransomware, spearphishing, accidental and intentional data breaches, nation-state attacks, and hacktivist attacks.

Only 27 percent of organizations reported no attacks during the 12 months leading up to the survey, and even that figure may underestimate the depth of the problem: some organizations can be infiltrated by stealthy attacks that may not be discovered for several months after the initial infiltration.

The middle child syndrome

Corroborating what Osterman Research has discovered in other research, mid-market companies—those with 500 to 999 employees—face the most difficult challenges from a security perspective. They encounter a higher rate of attack than smaller companies and similar rates of attack as their larger counterparts, but they have fewer employees over which to distribute the cost of the security infrastructure.

In short, mid-market organizations have big company problems and small company budgets with which to solve them.

Major attacks

We found that “major” attacks occur with alarming frequency: globally, during 2017, the organizations we surveyed experienced such an attack once every 15 months on average. US organizations were the hardest hit, with an average of one attack every 6.7 months. These are highly disruptive events that can take a company offline for days or weeks.

As just one example of such an attack, consider the City of Atlanta, which was infected with ransomware in 2018 and has spent more than $2.6 million remediating the compromise. The attack impacted five of the city’s 13 departments and the police department’s records system, as well as causing other mayhem for city employees and the public.

The bottom line is that cybercrime costs enormous amounts that go well beyond the annual security budget. And if companies don’t find a way to put a stop to the cybercrime happening both inside and outside of their walls, they’ll have to pay the price.


The post White hat, black hat, and the emergence of the gray hat: the true costs of cybercrime appeared first on Malwarebytes Labs.

Author: Malwarebytes Labs

Osiris dropper found using process doppelgänging

Process Doppelgänging, a new technique for impersonating a process, was presented last year at the Black Hat conference. Some time later, a ransomware named SynAck was discovered to have adopted it for malicious purposes. However, the technique is still pretty rare in the wild, so it was an interesting surprise to find it in a dropper for the Osiris banking Trojan (a new version of the infamous Kronos).

The authors of this dropper were skilled, and they added several other tricks to spice the whole thing up. In this post, we will have a closer look at the loader’s implementation.

Analyzed sample

Osiris is loaded in three steps:


The dropper creates a new process and injects the content inside:

Interestingly, when we look into the modules loaded in the process space of the injector, we can see an additional copy of NTDLL:

This is a well-known technique that some malware authors use in order to evade monitoring applications and hide the API calls that they used.

When we examine closely which functions are called from that additional NTDLL, we find more interesting details: it calls several APIs related to NTFS transactions. It was easy to guess that the technique of Process Doppelgänging, which relies on this mechanism, was applied here.

Loading additional NTDLL

NTDLL is a special, low-level DLL; basically, it is just a wrapper around syscalls. It has no dependencies on other DLLs in the system. Thanks to this, it can be loaded conveniently, without the need to fill its import table.

Other system DLLs, such as Kernel32, rely heavily on functions exported from NTDLL. This is why many user-land monitoring tools hook and intercept the functions exported by NTDLL: to watch which functions are being called and check whether the process displays any suspicious activity.

Of course, malware authors know about this, so sometimes, in order to fool this mechanism, they load their own fresh, unhooked copy of NTDLL from disk. There are several ways to implement this. Let's have a look at how the authors of the Osiris dropper did it.
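The cat-and-mouse game above rests on a simple observation: a user-land hook usually patches the first bytes of an exported function with a jump into the monitoring tool. The sketch below illustrates the detection side of that idea with synthetic byte sequences (these are not bytes from the Osiris sample):

```python
# Conceptual sketch: a user-land hook typically overwrites a function's
# prologue with a JMP (0xE9) redirect. Comparing the in-memory prologue
# against a clean on-disk copy reveals the patch. Byte values are synthetic.

CLEAN_PROLOGUE = bytes([0x4C, 0x8B, 0xD1, 0xB8, 0x55, 0x00, 0x00, 0x00])   # typical syscall stub
HOOKED_PROLOGUE = bytes([0xE9, 0x10, 0x20, 0x30, 0x40, 0x00, 0x00, 0x00])  # jmp rel32 into a hook

def is_hooked(memory_bytes: bytes, clean_bytes: bytes) -> bool:
    """Flag a function whose in-memory prologue differs from the clean copy."""
    return memory_bytes[:8] != clean_bytes[:8]

print(is_hooked(HOOKED_PROLOGUE, CLEAN_PROLOGUE))  # True
print(is_hooked(CLEAN_PROLOGUE, CLEAN_PROLOGUE))   # False
```

Loading a second, fresh NTDLL sidesteps exactly this kind of comparison-based monitoring: the malware simply never calls the hooked copy.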

Looking at the memory mapping, we see that the NTDLL is loaded as an image, just like other DLLs. However, it was not loaded by a typical LoadLibrary function, nor even by its low-level version from NTDLL, LdrLoadDll. Instead, the authors decided to load the file as a section, using the following functions:

  • ntdll.NtCreateFile – to open the ntdll.dll file
  • ntdll.NtCreateSection – to create a section out of this file
  • ntdll.ZwMapViewOfSection – to map this section into the process address space

This was a smart move because the DLL looks like it was loaded in a typical way, and yet, if we monitor the LdrLoadDll function, we see nothing suspicious.

Implementation of Process Doppelgänging

In order to make their injection more stealthy, the authors took the original implementation of Process Doppelgänging a step further and used only low-level APIs. So, instead of calling the convenient wrappers from Kernel32, for most of the functions they called their equivalents from NTDLL. Moreover, they used the aforementioned custom copy of this DLL.

First, they created a new suspended process. This is the process into which the payload will be injected. In this particular case, the function was called from kernel32.dll: CreateProcessInternal.

Process Doppelgänging then starts by creating a new transaction, within which a new file is created. The original implementation used CreateTransaction and CreateFileTransacted from Kernel32 for this purpose, but that is not the case here.

First, the function ZwCreateTransaction is called from the custom NTDLL. Then, instead of CreateFileTransacted, the authors open the transacted file with RtlSetCurrentTransaction along with ZwCreateFile (the created file is %TEMP%\Liebert.bmp). The dropper then writes the content of the new executable, the second stage of the malware, to the file. Analogously, RtlSetCurrentTransaction with ZwWriteFile is used.

We can see that the buffer that is being written contains the new PE file: the second stage payload. Typically, for the Process Doppelgänging technique, the file is visible only within the transaction and cannot be opened by other processes, such as AV scanners.

After the file inside the transaction is created, it is used to create a buffer in a special format, called a section. The function that can do this is available only via the low-level API: ZwCreateSection/NtCreateSection.

After the section is created, the file is no longer needed. The transaction is rolled back (by ZwRollbackTransaction), and the changes to the file are never saved to disk.

Further, the created section will be used to load a PE file. After writing the payload into memory and setting the necessary patches, such as Entry Point redirection, the process is resumed:

Second stage loader

The next layer (8d58c731f61afe74e9f450cc1c7987be) is not the core yet, but the next stage of the loader. The way it loads the final payload is much simpler, yet still not trivial. The code of Osiris core is unpacked piece by piece and manually loaded along with its dependencies into a newly allocated memory area within the loader process.

After this self-injection, the loader jumps into the payload’s entry point:

The interesting thing is that the entry point of the application is different from the entry point saved in the header. So, if we dump the payload and try to run it independently, we will not get the same code executed. This is an interesting technique used to mislead researchers.

The entry point that was set in the headers is at RVA 0x26840:

The call leads to a function that makes the application go in an infinite sleep loop:

The real entry point, from which the execution of the malware should start, is at 0x25386, and it is known only to the loader.
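The header-vs-real entry point trick above hinges on one field, AddressOfEntryPoint, in the PE optional header. The sketch below parses that field from a minimal synthetic header (not an actual Osiris dump) built to carry the 0x26840 value the analysis reports:

```python
import struct

def entry_point_rva(pe_bytes: bytes) -> int:
    """Read AddressOfEntryPoint from a PE image's optional header."""
    e_lfanew = struct.unpack_from("<I", pe_bytes, 0x3C)[0]        # offset of PE header
    assert pe_bytes[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"       # sanity-check signature
    opt_header = e_lfanew + 4 + 20        # skip signature + COFF file header
    return struct.unpack_from("<I", pe_bytes, opt_header + 16)[0] # AddressOfEntryPoint

# Minimal synthetic header, for illustration only.
hdr = bytearray(0x200)
hdr[0:2] = b"MZ"
struct.pack_into("<I", hdr, 0x3C, 0x80)                 # e_lfanew -> PE header at 0x80
hdr[0x80:0x84] = b"PE\x00\x00"
struct.pack_into("<I", hdr, 0x80 + 4 + 20 + 16, 0x26840)

print(hex(entry_point_rva(bytes(hdr))))                 # 0x26840
```

A dumped payload run "as-is" starts at this header value, which is why the dump lands in the sleep loop rather than the real code at 0x25386.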

Comparison with Kronos loader

A similar trick using a hidden entry point was used by the original Kronos (2a550956263a22991c34f076f3160b49).

In the case of Kronos, the final payload is injected into svchost. The execution is redirected to the core by patching the entry point in the svchost:

In this case, the entry point within the payload is at RVA 0x13B90, while the entry point saved in the payload’s header (d8425578fc2d84513f1f22d3d518e3c3) is at 0x15002.

The code at the real Kronos entry point displays similarities with the analogical point in Osiris. Yet, we can see they are not identical:


The implementation of Process Doppelgänging used in the first-stage loader is clean and professional. The author used a relatively new technique and made the best of it by composing it with other known tricks. The precision used here reminds us of the code used in the original Kronos. However, we can't be sure if the first layer is written by the same author as the core bot. Malware distributors often use third-party crypters to pack their malware. The second stage is more tightly coupled with the payload, and here we can say with more confidence that this layer was prepared along with the core.

The post Osiris dropper found using process doppelgänging appeared first on Malwarebytes Labs.

Author: hasherezade


This is a guest post by independent security researcher James Quinn.

Continuing the 2018 trend of cryptomining malware, I've found another family of mining malware similar to MassMiner, which was discovered in early May. I'm calling this family ZombieBoy since it uses a tool called ZombieBoyTools to drop the first DLL.

ZombieBoy, like MassMiner, is a cryptomining worm that uses some exploits to spread. However, unlike MassMiner, ZombieBoy uses WinEggDrop instead of MassScan to search for new hosts. ZombieBoy is being continually updated, and I’ve been obtaining new samples almost daily.

An overview of ZombieBoy’s execution is below:


ZombieBoy uses several servers running HFS (HTTP File Server) to acquire payloads. The URLs that I have identified are below:

  • ca[dot]posthash[dot]org:443/
  • sm[dot]posthash[dot]org:443/
  • sm[dot]hashnice[dot]org:443/

In addition, it appears to have a C2 server at dns[dot]posthash[dot]org.


ZombieBoy makes use of several exploits during execution:

  • CVE-2017-9073, RDP vulnerability on Windows XP and Windows Server 2003
  • CVE-2017-0143, SMB exploit
  • CVE-2017-0146, SMB exploit


ZombieBoy first uses the EternalBlue/DoublePulsar exploits to remotely install the main DLL. The program used to install the two exploits is called ZombieBoyTools and appears to be of Chinese origin. It uses Simplified Chinese as its language and has been used to deploy a number of Chinese malware families (such as the IRONTIGER APT version of Gh0stRAT).

ZombieBoyTools screenshot

Once the DoublePulsar exploit is successfully executed, it loads and executes the first DLL of the malware. This DLL downloads 123.exe from ca[dot]posthash[dot]org:443, saves it to C:\%WindowsDirectory%\sys.exe, and then executes it.

Set up

123.exe does several things on execution. First, it downloads the module [1] from its file distribution servers. According to code analysis of 123.exe, it refers to this module as "64.exe" but saves it to the victim as "boy.exe". After saving the module, it executes it. 64.exe appears to be in charge of distributing ZombieBoy as well as holding the XMRIG miner.

In addition to downloading a module from its servers, 123.exe also drops and executes two modules. The first module is referred to in the code as "74.exe". This is saved as C:\Program Files (x86)\svchost.exe and appears to be a form of the age-old Gh0stRAT.

The second module is referred to in the code as "84.exe". This is saved as C:\Program Files (x86)\StormII\mssta.exe and appears to be a RAT of unknown origin.


64.exe is the first module downloaded by ZombieBoy. It uses some formidable anti-analysis techniques. First, the entire executable is encrypted with the packer Themida, making reverse engineering difficult. Also, current versions of ZombieBoy will detect a VM and subsequently not run.

64.exe drops 70+ files into C:\Windows\IIS, consisting of the XMRIG miner, the exploits, and a copy of itself that it names CPUInfo.exe.

64.exe obtains the IP of the victim by connecting to ip[dot]3222[dot]net. It then uses WinEggDrop, a lightweight TCP scanner, to scan the network for more targets with port 445 open. It uses the IP obtained above, as well as the local IP, to spread both to the local network and to the public IP netrange.
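The scanning step is conceptually simple: try a TCP connect to port 445 on each candidate host and note who answers. This is a minimal Python analogue of that logic, not WinEggDrop itself; the subnet prefix is a placeholder:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a TCP connect; True if the port accepts the connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_subnet(prefix: str, port: int = 445):
    """Yield hosts in a /24 that expose the given port (e.g. SMB on 445)."""
    for last_octet in range(1, 255):
        host = f"{prefix}.{last_octet}"
        if is_port_open(host, port):
            yield host

# Usage sketch: list(scan_subnet("192.168.1")) would return SMB-exposing hosts.
```

Each host that answers on 445 becomes a candidate for the EternalBlue/DoublePulsar stage described above.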

64.exe uses the DoublePulsar exploit to install both an SMB backdoor and an RDP backdoor.

DoublePulsar screenshot

In addition, 64.exe uses XMRIG to mine for XMR. Prior to shutting down one of its addresses, ZombieBoy was mining at around 43KH/s. This would earn the attackers slightly over $1,000 per month at current Monero prices.
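The ~$1,000/month figure can be sanity-checked with back-of-the-envelope pool math. The network hashrate, block reward, and price below are illustrative mid-2018 assumptions, not measured values:

```python
# Rough earnings estimate: miner's share of network hashrate, times blocks
# mined per month, times block reward, times price. All inputs except the
# 43 KH/s miner rate are assumptions for illustration.
miner_hs       = 43_000          # 43 KH/s, the rate observed
network_hs     = 475_000_000     # ~475 MH/s Monero network hashrate (assumed)
block_reward   = 4.5             # XMR per block (assumed)
blocks_per_day = 24 * 60 // 2    # one block roughly every 2 minutes
xmr_usd        = 120.0           # assumed XMR price in USD

share = miner_hs / network_hs                        # fraction of network hashrate
xmr_per_month = share * blocks_per_day * 30 * block_reward
usd_per_month = xmr_per_month * xmr_usd
print(round(usd_per_month, 2))                       # 1055.9
```

Under these assumptions the botnet nets roughly $1,050 a month, consistent with the "slightly over $1,000" estimate.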

A new address has been found; however, ZombieBoy no longer uses it to mine.

Known Addresses:

  • 42MiUXx8i49AskDATdAfkUGuBqjCL7oU1g7TsU3XCJg9Maac1mEEdQ2X9vAKqu1pvkFQUuZn2HEzaa5UaUkMMfJHU5N8UCw
  • 49vZGV8x3bed3TiAZmNG9zHFXytGz45tJZ3g84rpYtw78J2UQQaCiH6SkozGKHyTV2Lkd7GtsMjurZkk8B9wKJ2uCAKdMLQ

Using strace, I found that 64.exe was obtaining information about the victim, such as enumerating the OS architecture.


74.exe is the first module dropped by 123.exe and the second module overall. In its base form, 74.exe is in charge of downloading, decrypting, and executing a Gh0stRAT DLL named NetSyst96.dll. In addition, 74.exe decrypts a series of arguments to be passed to NetSyst96.dll.

The arguments are as follows:

  3. 5742944442
  4. YP_70608
  5. ANqiki cmsuucs
  6. Aamqcygqqeqkia
  7. Fngzxzygdgkywoyvkxlpv ldv
  8. %ProgramFiles%/
  9. Svchost.exe
  10. Add
  11. Eeie saswuk wso

Decryption Screenshot

Once 74.exe has decrypted the arguments, it checks whether NetSyst96.dll has been downloaded and saved to C:\Program Files\AppPatch\mysqld.dll. It does this by calling CreateFileA with the CreationDisposition set to OPEN_EXISTING. If mysqld.dll is not found, 74.exe opens a connection to ca[dot]posthash[dot]org:443/ and downloads NetSyst96.dll, saving it as C:\Program Files\AppPatch\mysqld.dll.
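The check-then-fetch pattern above (try to open the file; download only if that fails) is easy to sketch. This is a generic Python analogue, not the sample's code; the URL and local path are placeholders supplied by the caller:

```python
import os
import urllib.request

def ensure_payload(local_path: str, url: str) -> bool:
    """Download url to local_path only if it is missing.
    Returns True if a download happened, mirroring the
    CreateFileA(..., OPEN_EXISTING) existence check."""
    if os.path.exists(local_path):     # analogue of the OPEN_EXISTING probe
        return False
    urllib.request.urlretrieve(url, local_path)
    return True
```

The same idiom is why reinfection is cheap for the attacker: once mysqld.dll is on disk, subsequent runs skip the network entirely.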

NetSyst96.dll has two exported functions, DllFuUpgraddrs and DllFuUpgraddrs1. After saving NetSyst96.dll as mysqld.dll, 74.exe locates DllFuUpgraddrs in NetSyst96.dll before calling it.


NetSyst96.dll is the DLL called by 74.exe. It is typically encrypted, but an analysis of the decrypted file reveals some interesting strings that can be used to identify it, such as "Game Over Good Luck By Wind" and "jingtisanmenxiachuanxiao.vbs".

Strings screenshot showing some of the dropped files

NetSyst96.dll can capture the user's screen, record audio, and even edit the clipboard. Also, a strings analysis revealed that it references keyboard keys, typical of a keylogger. First, NetSyst96.dll obtains the Environment Strings path and uses it to create the path C:\Program Files (x86)\svchost.exe. Next, using CreateToolhelp32Snapshot, NetSyst96.dll searches the running processes for Rundll32.exe to determine whether this is the first time the DLL has run.

On first-time run-throughs, NetSyst96.dll does a couple of things to maintain persistence:

  • Saves a copy of 74.exe as C:\Program Files (x86)\svchost.exe
  • Registers "ANqiki cmsuucs" as a service using System/CurrentControlSet/Services/ANqiki cmsuucs
    • When the service is launched, it runs svchost.exe
  • Adds MARKTIME to the registry key, appending the time it was last launched
  • Uses a snapshot from CreateToolhelp32Snapshot to search the running processes for svchost.exe
    • If none is found, launches it and loops back to searching for svchost.exe
    • If one is found, saves svchost.exe to the Run key
    • If more than one is found, calls a function that creates a .vbs script to delete the extra svchost.exe
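The snapshot logic above boils down to a three-way decision on how many copies of the fake svchost.exe are running. A pure-function sketch of that decision (process names as plain strings; on Windows the list would come from CreateToolhelp32Snapshot):

```python
# Sketch of NetSyst96.dll's persistence decision, keyed on the count of
# svchost.exe instances in a process snapshot. The action names are mine,
# summarizing the behaviors described in the analysis.

def svchost_action(process_names: list[str]) -> str:
    count = sum(1 for name in process_names if name.lower() == "svchost.exe")
    if count == 0:
        return "launch"    # start the dropped copy, then re-check the snapshot
    if count == 1:
        return "persist"   # save svchost.exe to the Run key
    return "cleanup"       # drop a .vbs script to delete the extra copies

print(svchost_action([]))                              # launch
print(svchost_action(["svchost.exe"]))                 # persist
print(svchost_action(["svchost.exe", "SVCHOST.EXE"]))  # cleanup
```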

On subsequent run-throughs, NetSyst96.dll is more concerned with connecting to the C2 server:

  1. Locate and verify that “System/CurrentControlSet/Services/ANqiki cmsuucs” exists
    1. If it doesn't exist, create the key as above
    2. If it does exist, continue on to step 2
  2. Create event named “Eeie saswuk wso”
  3. Enumerate and change the input desktop
  4. Pass the C2 server Ip to C2URL (dns[dot]posthash[dot]org)
  5. Start WSA (winsock 2.0)
  6. Connect to www[dot]ip123[dot]com[dot]cn and obtain the ip of dns[dot]posthash[dot]org
    1. The actual IP is subject to change, however, it currently is 211.23.47[dot]186
  7. Reset Event
  8. Connect to C2 Server and await commands
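Steps 5 through 8 amount to a classic beacon loop: open a socket, connect to the resolved C2 host, send an initial message, and block until a command arrives. A minimal sketch of that shape (host, port, and the hello bytes are placeholders, not the sample's real traffic):

```python
import socket

def beacon(host: str, port: int, hello: bytes = b"hello\r\n") -> bytes:
    """Connect to the C2, send an initial beacon, and wait for one command.
    The default hello payload is a placeholder for illustration."""
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(hello)
        return conn.recv(4096)   # block until the server issues a command
```

In the real sample this loop would feed the received command into the 31-option switch-case discussed below; here the function simply returns the raw bytes.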

While the command that triggers this function is unknown, I did uncover a 31-option switch-case that seems to hold the command options for NetSyst96.dll. See the Appendix for a more in-depth analysis of some of the 31 options.


84.exe is the second module dropped by 123.exe and the third module overall. Just like 74.exe, it appears to be a RAT. However, that is where the similarities stop. Unlike 74.exe, 84.exe does not need to download any additional libraries; instead, it decrypts and executes Loader.dll from its own memory. In addition, 84.exe uses a function to decrypt Loader.dll that involves throwing exceptions for every character that needs to be decrypted.

Additional run through information:

  • Sets the user's environment strings to C:\Program Files (x86)\StormII

In addition, once Loader.dll is called, 84.exe passes a series of variables to Loader.dll through a function called 'Update':


  1. ChDz0PYP8/oOBfMO0A/0B6Y=
  2. 0
  3. 6gkIBfkS+qY=
  4. dazsks fsdgsdf
  5. daac gssosjwayw
  6. |_+f+
  7. fc45f7f71b30bd66462135d34f3b6c66
  8. EQr8/KY=
  9. C:\Program Files (x86)\StormII
  10. Mssta.exe
  11. 0
  12. Ccfcdaa
  13. Various integers

Of the strings passed to Loader.dll, three are encrypted. The decrypted strings are as follows:

  1. [ChDz0PYP8/oOBfMO0A/0B6Y=] = “dns[dot]posthash[dot]org”
  2. [6gkIBfkS+qY=] = “Default”
  3. [EQr8/KY=] = “mdzz”
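The encrypted arguments above are base64-encoded ciphertext blobs. The sample's actual cipher is not reproduced here (recall that 84.exe's routine is exception-driven), so the sketch below only illustrates the general decode-then-decrypt shape using a simple repeating-key XOR with a made-up key and synthetic data:

```python
import base64
from itertools import cycle

def xor_decrypt(b64_blob: str, key: bytes) -> bytes:
    """Base64-decode a blob, then apply repeating-key XOR.
    Illustrative only; this is NOT the malware's real cipher."""
    raw = base64.b64decode(b64_blob)
    return bytes(c ^ k for c, k in zip(raw, cycle(key)))

# Round trip with synthetic data (XOR is its own inverse). The key is
# hypothetical; "mdzz" just reuses one of the decrypted strings above.
key = b"mdzz"
plain = b"dns.posthash.org"
blob = base64.b64encode(bytes(c ^ k for c, k in zip(plain, cycle(key)))).decode()
print(xor_decrypt(blob, key))  # b'dns.posthash.org'
```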


Loader.dll is a RAT with some interesting features, like the ability to query the CPU speed, as well as search the system for antivirus software.

Launched by 84.exe, the first thing Loader.dll does is obtain the variables from ‘Update’ in 84.exe.  At this point, Loader.dll creates several important runtime objects:

  • Uninheritable, non-signaled, auto-reset event named Null, handle: 0x84
  • Thread to execute a function that manipulates DesktopInfo
  • An input Desktop with the handle 0x8C and the flag DF_ALLOWOTHERACCOUNTS, which is set as the desktop of the calling thread.

Loader.dll then searches the system for "dazsks fsdgsdf" in SYSTEM/CurrentControlSet/Services/Dazsks Fsdgsdf, which it uses to determine whether this is the first time the malware has run.

First Time Run:

  • Loader.dll creates the service Dazsks Fsdgsdf with ImagePath = C:\Program Files (x86)\StormII\mssta.exe
  • Loader.dll attempts to run the newly created service. If the attempt is successful, it continues to the main loop. If not, it exits.

Subsequent run-throughs:

  • Starts services.exe with the argument Dazsks Fsdgsdf to start the service
  • Continues to the main loop mentioned in the first-time run

After checking the run-through number, Loader.dll enters the main loop of the program.

Main loop run through:

  • Creates an uninheritable, auto-reset, non-signaled event named 'ccfcdaa' with a handle of 0x8C
  • Decrypts ChDz0PYP8/oOBfMO0A/0B6Y= to 'dns[dot]posthash[dot]org'
  • Starts the WinSock object
  • Creates an uninheritable, non-signaled, manual-reset event object named null with the handle 0x90
  • Assembles the GET request: "Get /?ocid = iefvrt HTTP/1.1"
  • Connects to dns[dot]posthash[dot]org:5200
  • Obtains information about the OS using GetVersionEx
  • Loads ntdll.dll and calls RtlGetVersionNumbers
  • Saves System\CurrentControlSet\Services\(null) to the registry
  • Obtains the socket name
  • Obtains the CPU speed using Hardware\Description\System\CentralProcessor
  • Calls GetVersion to obtain the system info
  • Calls GlobalMemoryStatusEx to obtain the status of the available global memory
  • Enumerates all available disk drives starting at 'A:/' using GetDriveTypeA
  • Obtains the total amount of free space available on each enumerated drive
  • Initializes the COM library
  • Appends the current time to the service 'dazsks fsdgsdf' with the marktime function
  • Obtains the system info of a system running under WOW64
  • Uses a list of mostly Chinese AV software filenames and CreateToolhelp32Snapshot to create a snapshot of running processes and identify any running AV programs
  • Decrypts EQr8/KY= to "mdzz"
  • Sends all the data obtained above to the C2 server at dns[dot]posthash[dot]org:5200
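The reconnaissance portion of that loop (OS version, memory, drives, free space) maps to ordinary system-information APIs. A cross-platform Python analogue of the data being collected (the real sample uses GetVersionEx, GlobalMemoryStatusEx, and GetDriveTypeA for the same purpose):

```python
import os
import platform
import shutil

def collect_recon() -> dict:
    """Gather victim-profile data analogous to Loader.dll's main-loop recon."""
    root = "C:\\" if os.name == "nt" else "/"       # drive to measure free space on
    usage = shutil.disk_usage(root)
    return {
        "os": platform.system(),                    # ~ GetVersionEx
        "version": platform.version(),
        "machine": platform.machine(),              # architecture, ~ WOW64 check
        "cpu_count": os.cpu_count(),
        "disk_free_bytes": usage.free,              # ~ per-drive free space
    }

print(sorted(collect_recon()))  # ['cpu_count', 'disk_free_bytes', 'machine', 'os', 'version']
```

In the malware, a blob like this is what gets serialized and shipped to dns[dot]posthash[dot]org:5200.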


The best way to mitigate being hit by ZombieBoy is, as always, avoidance, which is why I recommend keeping your systems up to date. Specifically, the MS17-010 patch will neutralize the malware's spreading capabilities.

If you are infected by ZombieBoy, however, the first thing you should do is take a couple of deep breaths. Next, I'd recommend scanning your system with the AV software of your choice.

Once the scan has finished, you should find and end any processes currently being run by ZombieBoy, such as:

  • 123.exe
  • 64.exe
  • 74.exe
  • 84.exe
  • CPUinfo.exe
  • N.exe
  • S.exe
  • Svchost.exe (note the file location; end any processes not originating from C:\Windows\System32)
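The triage rule in that list, flag the IOC names, plus any svchost.exe outside System32, is simple to mechanize. This helper is my own illustration of the rule, operating on (name, path) pairs rather than a live process snapshot:

```python
# Triage a process list against the ZombieBoy IOC names above. A process is
# flagged if its name matches an IOC, or if it is svchost.exe running from
# anywhere other than C:\Windows\System32.

IOC_NAMES = {"123.exe", "64.exe", "74.exe", "84.exe", "cpuinfo.exe", "n.exe", "s.exe"}

def processes_to_kill(procs: list[tuple[str, str]]) -> list[str]:
    hits = []
    for name, path in procs:
        lowered = name.lower()
        if lowered in IOC_NAMES:
            hits.append(name)
        elif lowered == "svchost.exe" and not path.lower().startswith(r"c:\windows\system32"):
            hits.append(name)      # svchost.exe outside System32 is suspect
    return hits

print(processes_to_kill([
    ("svchost.exe", r"C:\Windows\System32\svchost.exe"),       # legitimate, kept
    ("svchost.exe", r"C:\Program Files (x86)\svchost.exe"),    # flagged
    ("74.exe", r"C:\Users\victim\74.exe"),                     # flagged
]))  # ['svchost.exe', '74.exe']
```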

In addition, delete the following registry keys:

  • SYSTEM/CurrentControlSet/Services/Dazsks Fsdgsdf

Also, delete any files dropped by the malware such as:

  • C:\%WindowsDirectory%\sys.exe
  • C:\windows\%system%\boy.exe
  • C:\windows\IIS\cpuinfo.exe
  • All of the 70+ files dropped in IIS
  • C:\Program Files (x86)\svchost.exe
  • C:\Program Files\AppPatch\mysqld.dll
  • C:\Program Files (x86)\StormII\mssta.exe
  • C:\Program Files (x86)\StormII\*

Indicators of Compromise

Samples MD5 Size IP IOC
ZombieBoy [Main DLL] 842133ddc2d57fd0f78491b7ba39a34d 82.4kb
123.exe 7327ef046fe62a26e5571c36b5c2c417 782.3kb Downloaded From:


[Injector123] 785a7f6e1cd40b50ad788e5d7d3c8465 437.9kb
64.exe 79c6ead6fa4f4addd7f2f019716dd6ca 6.4MB Mining Server:

Downloaded From





Necessary files for exploits and WinEggDrop into C:\windows\IIS

74.exe 38d7d4f6a712bff4ab212848802f5f9c 9.7kb C2 server:


C:\Program Files (x86)\svchost.exe


Netsyst96.dll 6de21f2fd11d68b305b5e10d97b3f27e 1.0MB Downloaded From


C2 server:


C:\Program Files\AppPatch\mysqld.dll,
84.exe 91ebe2de7fcb922c794a891ff8987124 334.7kb C2 Server:


C:\Program Files (x86)\StormII\mssta.exe

SYSTEM/CurrentControlSet/Services/Dazsks Fsdgsdf

C:\Program Files (x86)\StormII\*

Loader.dll 9a46a3ae2c3762964c5cbb63b62d7dee 135.2kb C2 Server:



Files Queried:

Hardware\Description\System\CentralProcessor; SYSTEM/CurrentControlSet/Services/BITS;


Credit Card Issuer TCM Bank Leaked Applicant Data for 16 Months

TCM Bank, a company that helps more than 750 small and community U.S. banks issue credit cards to their account holders, said a Web site misconfiguration exposed the names, addresses, dates of birth and Social Security numbers of thousands of people who applied for cards between early March 2017 and mid-July 2018.

TCM is a subsidiary of Washington, D.C.-based ICBA Bancard Inc., which helps community banks provide a credit card option to their customers using bank-branded cards.

In a letter being mailed to affected customers today, TCM said the information exposed was data that card applicants uploaded to a Web site managed by a third party vendor. TCM said it learned of the issue on July 16, 2018, and had the problem fixed by the following day.

Bruce Radke, an attorney working with TCM on its breach outreach efforts to customers, said fewer than 10,000 consumers who applied for cards were affected. Radke declined to name the third-party vendor, saying TCM was contractually prohibited from doing so.

“It was less than 25 percent of the applications we processed during the relevant time period that were potentially affected, and less than one percent of our cardholder base was affected here,” Radke said. “We’ve since confirmed the issue has been corrected, and we’re requiring the vendor to look at their technologies and procedures to detect and prevent similar issues going forward.”

ICBA Bancard is the payments subsidiary of the Independent Community Bankers of America, an organization representing more than 5,700 financial institutions that has been fairly vocal about holding retailers accountable for credit card breaches over the years. Last year, the ICBA sued Equifax over the big-three credit bureau’s massive data breach that exposed the Social Security numbers and other sensitive data on nearly 150 million Americans.

Many companies that experience a data breach or data leak are quick to place blame for the incident on a third-party that mishandled sensitive information. Sometimes this blame is entirely warranted, but more often such claims ring hollow in the ears of those affected — particularly when they come from banks and security providers. For example, identity theft protection provider LifeLock recently addressed a Web site misconfiguration that exposed the email addresses of millions of customers. LifeLock’s owner Symantec later said it fixed the flaw, which it blamed on a mistake by an unnamed third-party marketing partner.

Managing third-party risk can be challenging, especially for organizations with hundreds or thousands of partners (consider the Target breach, which began with an opportunistic malware compromise at a heating and air conditioning vendor). Nevertheless, organizations of all shapes and sizes need to be vigilant about making sure their partners are doing their part on security, lest third-party risk devolve into a first-party breach of customer trust.

Author: BrianKrebs