Wednesday, September 6, 2017


September 05, 2017

Key-logging malware, dubbed EHDevel, found intelligence gathering

This article originally appeared on SC Media UK.
Security researchers have found a sophisticated malware framework, EHDevel, which started with more vulnerable individuals in a bid to reach its ultimate objective, targeting several Pakistani individuals.
The malware, dubbed EHDevel, has been used by attackers, thought to be nation-state hackers, to gather intelligence. According to a Reuters report, a cyber-spying campaign is currently being waged against Indian and Pakistani entities.
The malware allows hackers to log keystrokes, identify a victim's location and steal personal data. It also features a complex mix of transitions from one programming language to another, code under active development, and bugs that were not spotted during the QA process.
In a white paper, security researchers from Bitdefender said that a year ago they came across a suspicious document called News.doc. However, unlike most potentially malicious documents processed in the company's labs, this file displayed similarities with a set of files known to have been used in separate attacks targeting different institutions.
Further investigation found that it used a malware framework that relies on a handful of novel techniques for command-and-control identification and communication, as well as a plugin-based architecture, a design choice increasingly adopted by threat actor groups in the past few years.
According to Bitdefender, the current campaign follows the same mode of operation.
“Another important discovery lies in the fact that this specialised framework has been used to gather field intelligence for years in different shapes and forms, and our threat intelligence suggests a connection with the 2013 Operation Hangover APT as well,” said the researchers.
The researchers said that the payload is embedded at the end of the RTF file, together with the decoy document. Once the RTF file is open, the payload is decrypted and dropped on the disk in the %LOCALAPPDATA% folder. The executable file contains all the tools required to carry out its mission.
Chris Doman, security researcher at AlienVault, told SC Media UK that plugin-based malware is typically seen in attackers employing a group of people that are active against many targets.
“BitDefender points to potential links to a set of attacks previously exposed as Operation Hangover. In that case the attackers were shown to be towards the bottom-end of APT groups - the operators mistakenly registered domains under their own names, and even used one of their company file shares as an open command and control server. That meant the attackers were exposing their own company documents whilst they were attacking other people,” he said.
“It's possible these attacks continue to be executed by the same organisation, or some of their former employees. Whilst they have been known to attack western companies most of the attacks seem to be in the context of the India-Pakistan relations. They are an interesting example of how attackers, even if lowly skilled, can compromise networks if they are persistent enough. It's key to be able to detect such attackers once they've got past perimeter defences, and to be aware if you are a target.”
Josh Mayfield, platform specialist, Immediate Insight at FireMon, told SC Media UK that we are dealing with a taxonomy of malware that will not trigger any alerts.
“Organisations who adopt an assumption of compromise can protect themselves by regularly hunting for threats, using discovery methods to find previously unknown tactics specific to their environments.  It is within this mindset that we can explore the potential problems we have not modeled,” he said.
Anton Cherepanov, senior malware researcher at ESET, told SC Media UK that his firm has documented many similar cases, with BlackEnergy malware being probably one of the most prominent.
“It used a core component and modules, that allowed the attackers to take control of the targeted machines, spy on their activity or damage them. Particularly thanks to the modularity, the functionality of the malware was not always the same. In some cases – such as the attack on Ukrainian media or energy sector at the end of 2015 – a destructive component was present, while in other cases – where information extraction seemed to be the primary goal - spyware capabilities dominated,” he said.

Tuesday, September 5, 2017

7 Cloud Security Best Practices for Amazon Web Services

Temporary and permanent storage of data in the cloud has grown in popularity over the years. Companies like Land O’Lakes and Boeing moved their information to the cloud last year to simplify the technology they used. Video-streaming behemoth Netflix finished their journey to the cloud in early 2016 after seven years of moving systems and customer services to Amazon Web Services (AWS).
What inspired this change from on-premises storage to the cloud? Ease of use and implementation, the cost-effectiveness of the cloud over having to maintain physical servers, and worldwide access to cloud storage without being dependent on a single network or location are just a few of the positives that encourage companies to migrate. Some cloud providers, like AWS, can even scale in either direction to support growing business needs—meaning you only pay for what you use.
This transition to the cloud brings a new set of security risks to the table, though. According to Digital Guardian, you lose some control over sensitive company data once you put it in the cloud, since that data is transferred to the cloud provider, versus stored on-premises. To prevent interception of data while stored or transferred within the cloud, companies should ensure they are encrypting files during storage and transit using a managed file transfer solution like GoAnywhere MFT. The cloud also allows personal devices to connect to and interact with data, and this has its own positives (flexibility in cloud use) and negatives (compromised information if a connected device is stolen or hacked).
Amazon Web Services markets itself as a “secure cloud services platform, offering compute power, database storage, content delivery and other functionality to help businesses scale and grow.” As companies move to AWS for their cloud storage needs, they’ll have the opportunity to increase their productivity and reliability as long as they maintain best practices for cloud security.
If you’re getting ready to move your data to Amazon Web Services or already have, here are seven best practices for AWS we recommend to get the most out of your cloud security.

1. Document your AWS processes and procedures, then keep them updated

Imagine you have a very specific file structure set up in the cloud, complete with categorical folders that are protected by different levels of permission. You know that all company sales data should go in a specific folder, but a coworker, though meaning well, doesn’t know and decides to transfer sales data to a different, unprotected folder. Chaos ensues.
To avoid this type of confusion, create consistent cloud practices everyone can follow. Document your AWS processes and procedures. Store them in a common space that the organization can access, like a shared drive on the internal network. And update the document every time something changes in your cloud approach to help coworkers, stakeholders, third party vendors, and trading partners remain on the same page.

2. Use AWS CloudTrail to track your AWS usage

Understanding what actions users take in the cloud is an important step toward keeping your data secure and in the hands of those you trust. Use a service like AWS CloudTrail to anticipate and prevent security vulnerabilities in the cloud through “governance, compliance, operational auditing, and risk auditing of your AWS account”.
AWS CloudTrail can do the following tasks, and more:
  • Create API call history logs
  • Record when objects or data are created, read, or modified
  • Calculate and give you risk reports on your cloud storage account
  • Determine who makes changes to your cloud storage infrastructure
  • Track who logs in to your accounts (including successful and failed login attempts)
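The last bullet can be sketched offline. The snippet below scans a handful of CloudTrail-style records for failed console sign-ins; the field names mirror the real ConsoleLogin event shape, but the sample records themselves are invented, and in production you would read the real logs from CloudTrail's S3 bucket or its LookupEvents API.

```python
import json

# Simplified CloudTrail records (invented sample data; real logs arrive as
# JSON files in an S3 bucket, with the same Records/eventName structure).
SAMPLE_LOG = json.dumps({
    "Records": [
        {"eventName": "ConsoleLogin", "userIdentity": {"userName": "alice"},
         "responseElements": {"ConsoleLogin": "Success"}},
        {"eventName": "ConsoleLogin", "userIdentity": {"userName": "mallory"},
         "responseElements": {"ConsoleLogin": "Failure"}},
        {"eventName": "PutObject", "userIdentity": {"userName": "bob"},
         "responseElements": None},
    ]
})

def failed_console_logins(raw_log: str) -> list:
    """Return the user names behind failed console sign-in attempts."""
    records = json.loads(raw_log)["Records"]
    return [
        r["userIdentity"].get("userName", "<unknown>")
        for r in records
        if r["eventName"] == "ConsoleLogin"
        and (r.get("responseElements") or {}).get("ConsoleLogin") == "Failure"
    ]

print(failed_console_logins(SAMPLE_LOG))  # → ['mallory']
```

A scheduled job that runs a check like this and alerts on repeated failures for the same user is a cheap early-warning signal for credential-guessing attempts.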

3. Complete risk assessments as often as possible

Even though the cloud is run by Amazon Web Services, both AWS and your organization are responsible for making sure nothing falls through the cracks. This includes maintaining “adequate governance over the entire IT control environment regardless of how IT is deployed” and having “an understanding of required compliance objectives and requirements,” among other things.
AWS completes and publishes risk assessments for their services, and you should do the same for the data you’ve stored in the cloud. Each time you give a new key player (including third party vendors and trading partners) access to your AWS cloud storage, walk through the following steps:
  1. Review the risks you currently know about and ensure they’re still being addressed
  2. Identify and add new risk scenarios to your list. Plan for how to tackle them
  3. Identify the key players who have access to AWS and ensure they’re following standard security hygiene
  4. Assess your AWS account. Make sure your settings, policies, and security are still relevant
  5. Consider the steps you should take next to manage your data and prevent future risk
Remember, risk assessment is an ongoing process that allows you to find and address security concerns in your infrastructure. Since storing data in the cloud takes away some of your control over sensitive company information by not being on-premises, it’s vital you complete assessments often to keep on top of potential security gaps and vulnerabilities.

4. Follow standard security hygiene for host and guest systems

Practicing standard security hygiene is one of the easiest ways to keep your data protected. These habits should become second nature, just like washing your hands or brushing your teeth, and will benefit you immensely without requiring much time or resources.
Enable multi-factor authentication for all accounts
Amazon Web Services' MFA requires a user to provide two pieces of information to prove they're authentic. The first piece is knowledge (something you know: your login credentials); the second is possession (something you have: an authentication code sent to an AWS MFA-enabled device). Simply enabling multi-factor authentication for your AWS accounts gives you an immediate boost in security.
Remove privileges from defunct accounts
When an employee, trading partner, or third party vendor leaves the relationship, clean out their account and delete any privileges they were given. This removes the temptation for a renegade player—or a hacker guessing at passwords and emails—to return at a later date and compromise sensitive company information.
Disable password-only access for guests
Even guest accounts should use multi-factor authentication wherever possible, even if they have limited authorities and privileges.
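One common way to enforce the MFA habit is an IAM policy that denies requests made without MFA. The sketch below builds such a policy as a Python dict; `aws:MultiFactorAuthPresent` is a real IAM condition key, but the policy is deliberately simplified. In practice you would carve out the handful of IAM actions a user needs in order to enrol an MFA device in the first place.

```python
import json

# A simplified IAM policy that denies every action unless the request was
# made with MFA. "BoolIfExists" also catches requests where the MFA context
# key is absent entirely (e.g. plain access-key calls).
deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllWithoutMFA",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
        },
    }],
}

print(json.dumps(deny_without_mfa, indent=2))
```

Because IAM evaluates explicit denies before allows, attaching a policy like this to a group overrides whatever permissions its members otherwise have until they sign in with MFA.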

5. Manage and review AWS accounts, users, groups, and roles

Every so often, we recommend you review your AWS accounts, users, groups, and roles to gain a proper overview of the privileges and permissions they have. Are any of these stagnant or similar to other setups? Consider combining them. Are any of them no longer necessary? Limit the clutter. The less overlap there is, the better.
Administrators of Amazon Web Services accounts should pay special attention to the permissions listed for their S3 buckets. Several different types of access can be given to users, including list, upload, delete, view, and edit. A bucket can also be set to viewable for AWS account holders or anonymous users, which may cause high risk depending on the files in the bucket, so make sure to review your S3 buckets and permissions to avoid potential security pitfalls.
The bottom line? Provide your accounts, users, groups, and roles with the least amount of privileges they need to function. If someone needs temporary access, it’s better to add them in as they’re required and remove them right after to avoid information falling into the wrong hands.
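As a sketch of that S3 review, the function below flags ACL grants that open a bucket to everyone. The two group URIs are the real grantee groups AWS uses for anonymous users and for any authenticated AWS account holder; the sample grants themselves are invented, and in practice the grant list would come from a call such as S3's GetBucketAcl.

```python
# The real AWS grantee-group URIs for "anyone" and "any AWS account holder".
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",           # anonymous
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers", # any AWS account
}

def public_grants(grants):
    """Return (grantee URI, permission) pairs that expose the bucket publicly."""
    return [
        (g["Grantee"]["URI"], g["Permission"])
        for g in grants
        if g["Grantee"].get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GRANTEES
    ]

# Invented sample: one private owner grant, one world-readable grant.
sample = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]
print(public_grants(sample))
```

Anything this check reports for a bucket holding non-public data is worth fixing the same day.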

6. Protect your access and encryption keys

If you’re using AWS to store your data in the cloud, you’re bound to have access keys and encryption keys. Access keys help AWS verify your identity against your login attempt and give you access to the resources you’ve been given. Users with different access keys may not be able to see the same things you do, so it’s imperative you keep your keys safe.
Similarly, encryption keys are used to encrypt and decrypt data. Since they unlock sensitive information, keep them separate from your data. This best practice is especially important for companies who need to comply with regulations like HIPAA, FISMA, and PCI DSS. “Essentially, the compliance requirements all say the same thing,” writes Luke Probasco for Pantheon, “encryption keys should never reside in the same environment or server as the encrypted data. This is a technical way of saying, don’t leave your key under the doormat a hacker walks in over.”
Here are just a few ways to keep your access and encryption keys safe:
  • Periodically delete any unused keys
  • Use temporary access keys instead of permanent ones wherever possible. This way, if an attacker compromises an account or discovers a user’s credentials, their access will be time-sensitive
  • Watch the encryption key life cycle and make sure new ones are properly saved and secured
  • Create procedures for worst case scenarios in the event a key is lost or tampered with
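The first two bullets above can be automated with a simple age check. This sketch assumes you have already pulled each key's creation date (for example, from IAM's list of access keys); the key IDs and the 90-day window are illustrative.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # a common rotation window; pick your own

def stale_keys(keys, now=None):
    """Return the IDs of access keys older than MAX_KEY_AGE.

    `keys` is a list of (key_id, create_date) pairs, as you might build
    from an IAM key inventory.
    """
    now = now or datetime.now(timezone.utc)
    return [key_id for key_id, created in keys if now - created > MAX_KEY_AGE]

# Invented example data, evaluated at a fixed point in time.
now = datetime(2017, 9, 1, tzinfo=timezone.utc)
keys = [
    ("AKIA_EXAMPLE_OLD", datetime(2017, 1, 15, tzinfo=timezone.utc)),
    ("AKIA_EXAMPLE_NEW", datetime(2017, 8, 20, tzinfo=timezone.utc)),
]
print(stale_keys(keys, now=now))  # only the January key is past 90 days
```

Run on a schedule, a report like this turns "periodically delete unused keys" from a good intention into a routine.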
An easy way to protect your keys is to use AWS Key Management Services, the service Amazon offers that “makes it easy for you to create and control the encryption keys used to encrypt your data.” AWS KMS even integrates with AWS CloudTrail, Amazon’s log auditing service, so you can view logs of your key usage.

7. Secure your data at rest and in transit

When moving data between your network and the cloud, always encrypt your files and protect your communication using SFTP, FTPS, or SCP. Furthermore, keep them encrypted even when they’re at rest, sitting in an AWS S3 bucket or on a server. You can choose to encrypt single files or entire folders depending on your needs.
A managed file transfer solution can encrypt your files both ways using modern encryption methods. Good MFT software will help you stay up-to-date as encryption standards change over time, while also making sure your data transfers are easy to manage and audit.
GoAnywhere MFT, our managed file transfer solution, integrates with Amazon Web Services in a variety of ways. To learn how GoAnywhere MFT can meet your cloud needs, check out our Amazon EC2 platform page or request a demo.

Thursday, July 13, 2017

Here's what happens inside Amazon when its massive AWS hosting service goes down

In late February 2017, a number of large websites across the internet abruptly went down.
Community-question site Quora crashed, as did project-management tool Trello, and Amazon's artificial-intelligence assistant Alexa also struggled.
The outage lasted several hours — and Amazon was to blame. This is because all the affected sites made use of Amazon Web Services (AWS), the cloud web hosting service from the Seattle-based technology giant that now underpins vast swathes of the modern web and hit $12 billion (£9.3 billion) in revenue last year.
The incident highlighted the unique vulnerabilities of our digital world: a handful of companies are responsible for maintaining huge swathes of the internet — and when there's a problem with one of them, thousands of businesses and millions of people can be left unable to work.
So what happens inside Amazon when there's a tech failure of this magnitude? Business Insider sat down with Werner Vogels, the chief technology officer of AWS at the AWS Summit in London in late June to discuss how the company handles it.
"We are so, so aware of the fact for many businesses their livelihoods are dependent on Amazon operating, on AWS really operating well, and that's a heavy responsibility," he said. "We're happy to take it."

Step 1: Find the problem — and console the customers

"[The] first thing that happens is a load of alarms start going off even before your customers are experiencing something," the Dutch-born executive explained.
The Amazon Web Services team then has two urgent tasks: Triage the problem and figure out just what's going on, while trying to calm the freaking-out customers whose businesses have just gone offline.
"You see the symptoms, but you do not necessarily see the root cause of it ... you immediately fire off a team whose task is to actually communicate with the customers ... making sure that everyone is aware of exactly what is happening."
Meanwhile, "internal teams of course immediately start going off and trying to find what's the root cause of this is, and whether we can repair or restore it, or what other kinds of actions we can start taking."
Vogels then dropped in a sly humble-brag: AWS goes down so rarely that when it does, it can be difficult to work out what's going on because there's little frame of reference. "Remember, this is a service that has not gone down in 12 years, so it's not that ... we could rely on some sort of previous experience on this."
The time of day shouldn't make a difference to repair efforts: AWS teams work "round the sun," and there are always demanding customers expecting uptime, whether it's late-night gaming in Seattle or early-morning financial services firms in Zurich.
If there's a major outage, though, Vogels said "of course" he would expect to be woken up immediately, and the senior management team will continuously track developments.

Step 2: Fix it

The issue behind the fault in February? Human error. The short version is that an engineer typed the wrong number — causing a chain reaction that ultimately led to a major failure.
Once diagnosed, Amazon's engineers have to go about fixing the problem, while also ensuring other systems do not also buckle under the sudden strain. "You have to sort of start protecting customers, start protecting system, because what happens is so many customers are still using this system, can't get access to the system, and while you're trying to repair this you're still overwhelmed with customers that are still retrying and retrying and retrying.
"And so you then start to block the traffic to make sure the system can come back online and become healthy again before you can start accepting traffic again."
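The retry storm Vogels describes is the reason client SDKs typically back off exponentially with random jitter instead of retrying immediately: thousands of clients retrying in lock-step would hammer the service at the exact moment it starts to recover. A minimal sketch of the pattern (all parameters are illustrative):

```python
import random

def backoff_delays(max_retries=5, base=0.1, cap=5.0, rng=random.random):
    """Yield 'full jitter' exponential backoff delays, in seconds.

    Each attempt waits a random amount between 0 and min(cap, base * 2^n),
    which spreads a crowd of retrying clients out over time.
    """
    for attempt in range(max_retries):
        yield rng() * min(cap, base * (2 ** attempt))

# With rng pinned to 1.0 the delays are just the capped exponential curve.
print(list(backoff_delays(rng=lambda: 1.0)))  # → [0.1, 0.2, 0.4, 0.8, 1.6]
```

A real client would sleep for each yielded delay between attempts and give up, or fail over, after the last one.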
Throughout all of this, you have anxious customers seeking guidance. "Customers don't like advice that says 'sit still, don't do anything.' No, that's not what they want, and for that you need to give them really good information, make them understand what's happening, given an expectation of when the service will be coming back online if you have such information."
Some of AWS' big customers have systems and failsafes in place to try to anticipate these kinds of failures and prepare for them. Netflix has a system called Chaos Monkey, for example: "A whole set of tools to sort of simulate these extreme failures ... they take away a whole availability zone or a whole region and see what happens, and things like that."
But why a monkey? As Netflix previously explained: "The name comes from the idea of unleashing a wild monkey with a weapon in your data center (or cloud region) to randomly shoot down instances and chew through cables—all the while we continue serving our customers without interruption."

Step 3: Learn from it

Vogels places the blame not on the engineer directly responsible, but Amazon itself, for not having failsafes that could have protected its systems or prevented the incorrect input. "I think we can blame ourselves, in terms of not having turned this into sort of a procedure or something that was automated, where we could've had total good control over what the number could be."
This is a key point for Vogels: as you grow and develop, introducing too many points that require human intervention creates points of possible failure. Where possible, automate.
"Internally it triggers a whole set of new operational procedures. The minimum thing you have to do from this is learn from it understand really what are the things ... realising there may be still organically growing operational procedures where there is too much human decision-making in the path which could be automated, and so you then go do a review of your overall business to see if there are other places in your organisation ... where there might be operational vulnerabilities."


The stakes are far higher for AWS and other cloud providers — Microsoft, Google, IBM, and so on — than for ordinary businesses, and the tolerance for major failure is much lower.
"I will never be satisfied until our services are what I call 'indistinguishable from perfect,'" Vogels said. "Even though stuff happens and in this case it's human, other things can happen, major natural disasters can happen, things like that. So we see we're prepared for most of these kind of things and we help customers build architectures that can protect themselves from this as well."

P.S. Here's precisely what caused the February outage

In the aftermath of the outage in February, Amazon Web Services published a public postmortem explaining what went wrong, and some of the changes it was making as a result. You can read the full thing here, and an extract is below:
"We’d like to give you some additional information about the service disruption that occurred in the Northern Virginia (US-EAST-1) Region on the morning of February 28th. The Amazon Simple Storage Service (S3) team was debugging an issue causing the S3 billing system to progress more slowly than expected. At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended. The servers that were inadvertently removed supported two other S3 subsystems. One of these subsystems, the index subsystem, manages the metadata and location information of all S3 objects in the region. This subsystem is necessary to serve all GET, LIST, PUT, and DELETE requests. The second subsystem, the placement subsystem, manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate. The placement subsystem is used during PUT requests to allocate storage for new objects. Removing a significant portion of the capacity caused each of these systems to require a full restart. While these subsystems were being restarted, S3 was unable to service requests. Other AWS services in the US-EAST-1 Region that rely on S3 for storage, including the S3 console, Amazon Elastic Compute Cloud (EC2) new instance launches, Amazon Elastic Block Store (EBS) volumes (when data was needed from a S3 snapshot), and AWS Lambda were also impacted while the S3 APIs were unavailable."
Disclosure: Jeff Bezos is an investor in Business Insider through his personal investment company Bezos Expeditions.
The original article is available on Business Insider UK. Copyright 2017. You can follow Business Insider UK on Twitter.