
                                                                PAPER DSE 603(B): CYBER SECURITY

UNIT-I: INTRODUCTION TO CYBER SECURITY, CYBER SECURITY VULNERABILITIES AND CYBER SECURITY SAFEGUARDS: INTRODUCTION TO CYBER SECURITY: OVERVIEW OF CYBER SECURITY, INTERNET GOVERNANCE – CHALLENGES AND CONSTRAINTS, CYBER THREATS: CYBER WARFARE, CYBER CRIME, CYBER TERRORISM, CYBER ESPIONAGE, NEED FOR A COMPREHENSIVE CYBER SECURITY POLICY, NEED FOR A NODAL AUTHORITY, NEED FOR AN INTERNATIONAL CONVENTION ON CYBERSPACE. CYBER SECURITY VULNERABILITIES: OVERVIEW, VULNERABILITIES IN SOFTWARE, SYSTEM ADMINISTRATION, COMPLEX NETWORK ARCHITECTURES, OPEN ACCESS TO ORGANIZATIONAL DATA, WEAK AUTHENTICATION, UNPROTECTED BROADBAND COMMUNICATIONS, POOR CYBER SECURITY AWARENESS. CYBER SECURITY SAFEGUARDS: OVERVIEW, ACCESS CONTROL, AUDIT, AUTHENTICATION, BIOMETRICS, CRYPTOGRAPHY, DECEPTION, DENIAL OF SERVICE FILTERS, ETHICAL HACKING, FIREWALLS, INTRUSION DETECTION SYSTEMS, RESPONSE, SCANNING, SECURITY POLICY, THREAT MANAGEMENT.

UNIT-II: SECURING WEB APPLICATIONS, SERVICES AND SERVERS: INTRODUCTION, BASIC SECURITY FOR HTTP APPLICATIONS AND SERVICES, BASIC SECURITY FOR SOAP SERVICES, IDENTITY MANAGEMENT AND WEB SERVICES, AUTHORIZATION PATTERNS, SECURITY CONSIDERATIONS, CHALLENGES.

UNIT-III: INTRUSION DETECTION AND PREVENTION: INTRUSION, PHYSICAL THEFT, ABUSE OF PRIVILEGES, UNAUTHORIZED ACCESS BY OUTSIDER, MALWARE INFECTION, INTRUSION DETECTION AND PREVENTION TECHNIQUES, ANTI-MALWARE SOFTWARE, NETWORK BASED INTRUSION DETECTION SYSTEMS, NETWORK BASED INTRUSION PREVENTION SYSTEMS, HOST BASED INTRUSION PREVENTION SYSTEMS, SECURITY INFORMATION MANAGEMENT, NETWORK SESSION ANALYSIS, SYSTEM INTEGRITY VALIDATION.

UNIT-IV: CRYPTOGRAPHY AND NETWORK SECURITY: INTRODUCTION TO CRYPTOGRAPHY, SYMMETRIC KEY CRYPTOGRAPHY, ASYMMETRIC KEY CRYPTOGRAPHY, MESSAGE AUTHENTICATION, DIGITAL SIGNATURES, APPLICATIONS OF CRYPTOGRAPHY. OVERVIEW OF FIREWALLS - TYPES OF FIREWALLS, USER MANAGEMENT, VPN SECURITY. SECURITY PROTOCOLS: SECURITY AT THE APPLICATION LAYER - PGP AND S/MIME, SECURITY AT THE TRANSPORT LAYER - SSL AND TLS, SECURITY AT THE NETWORK LAYER - IPSEC.

UNIT-V: CYBERSPACE AND THE LAW, CYBER FORENSICS: CYBERSPACE AND THE LAW: INTRODUCTION, CYBER SECURITY REGULATIONS, ROLES OF INTERNATIONAL LAW, THE STATE AND PRIVATE SECTOR IN CYBERSPACE, CYBER SECURITY STANDARDS, THE INDIAN CYBERSPACE, NATIONAL CYBER SECURITY POLICY 2013. CYBER FORENSICS: INTRODUCTION TO CYBER FORENSICS, HANDLING PRELIMINARY INVESTIGATIONS, CONTROLLING AN INVESTIGATION, CONDUCTING DISK-BASED ANALYSIS, INVESTIGATING INFORMATION-HIDING, SCRUTINIZING E-MAIL, VALIDATING E-MAIL HEADER INFORMATION, TRACING INTERNET ACCESS, TRACING MEMORY IN REAL-TIME.

Unit-V

CYBERSPACE AND THE LAW

CYBERSPACE

Cyberspace can be defined as an intricate environment that involves interactions between people, software, and services. It is maintained by the worldwide distribution of information and communication technology devices and networks. With the benefits carried by technological advancements, cyberspace today has become a common pool used by citizens, businesses, critical information infrastructure, the military, and governments in a fashion that makes it hard to draw clear boundaries among these different groups. Cyberspace is anticipated to become even more complex in the upcoming years, with the increase in networks and devices connected to it.

 

Cyber Security Regulations

The following four laws or types of laws are useful for understanding cyber security regulations.

Federal Cyber Security Laws

Introduced on December 31, 2017, this law sets out rules for contractors working with the Department of Defense (DoD): all contractors working for the DoD must abide by the requirements set by the organization. Failing to do so could mean losing a contract or having to cease the fulfillment of work orders until the contractor is verifiably in compliance.

In January 2018, the General Services Administration (GSA) also announced planned new regulations for contractors, including requirements for handling data and for reporting breaches quickly.

State-Specific Security Regulations

Businesses are also responsible for knowing the applicable state-specific cyber security laws. Many of them relate to data collection practices and the need to notify customers within strict timeframes and through specified methods if data gets compromised.

The General Data Protection Regulation (GDPR)

The General Data Protection Regulation (GDPR) applies to all European Union member states, as well as any companies operating elsewhere that market or provide services to people in the European Union. Many items in the GDPR are part of California's law, too.

California's SB-327 Bill for IoT Security

The Internet of Things (IoT) encompasses internet-connected devices, and some people have rightly criticized the manufacturers of those gadgets for not being sufficiently concerned about cyber security. However, California recently passed a bill to change things. California's SB-327 IoT bill went into effect on January 1, 2020, the same day as the state's data privacy law (the California Consumer Privacy Act).

 

Roles of International Law

In various countries, areas of the computing and communication industries are regulated by governmental bodies:

• There are specific rules on the uses to which computers and computer networks may be put; in particular, there are rules on unauthorized access, data privacy, and spamming.

• There are also limits on the use of encryption and of equipment that may be used to defeat copy-protection schemes.

• There are laws governing trade on the Internet, taxation, consumer protection, and advertising.

• There are laws on censorship versus freedom of expression, rules on public access to government information, and individual access to information held on individuals by private bodies.

• Some states limit access to the Internet, by law as well as by technical means.

International Law for Cybercrime

Cybercrime is "international" in the sense that there are no cyber-borders between countries:

• The complexity in types and forms of cybercrime increases the difficulty of fighting back.

• Fighting cybercrime calls for international cooperation.

• Various organizations and governments have already made joint efforts in establishing global standards of legislation and law enforcement, both on a regional and on an international scale.

The state and Private Sector in Cyberspace

Protecting the private sector has drawn less attention, and even some resistance. Yet protecting the private sector is increasingly critical, because the United States, more than most if not all other nations, depends heavily on private corporations for ensuring national security. Corporations manufacture most of the nation’s arms. Corporations produce most of the software and hardware for the computers the government uses. And corporations, under contract with the government, carry out many critical security functions, including the collection and processing of intelligence and the conduct of covert operations.

Many of the crimes committed in cyberspace, such as electronic monetary theft, impose considerable costs on private companies. The same holds for industrial espionage, especially from other countries, which deprives U.S. corporations of the fruits of long investments in R&D and grants major advantages to unfair competitors. In addition, if cyber warfare were to break out, many of the assets that would probably be damaged belong to private corporations.

 

Cyber Security Standards

Cyber security standards are steps taken to protect the cyber environment of an organization. The following are 10 steps to an effective approach to cyber security.

1. Risk management regime

Assess the risks to your organization’s information and systems by embedding an appropriate risk management regime. This should be supported by the Board and senior managers. Ensure that all employees, contractors and suppliers are aware of the approach and any applicable risk boundaries.

2. Secure configuration

Having an approach to identify baseline technology builds and processes for ensuring configuration management can greatly improve the security of systems. 

You should develop a strategy to remove or disable unnecessary functionality from systems, and to quickly fix known vulnerabilities, usually via patching. Failure to do so is likely to result in increased risk of compromise of systems and information.

3. Network security

Reduce the chances of your systems and technologies being attacked by creating and implementing simple policies and appropriate architectural and technical responses.

4. Managing user privileges

All users should be provided with a reasonable (but minimal) level of system privileges and rights needed for their role. The granting of highly elevated system privileges should be carefully controlled and managed.

5. User education and awareness

Users have a critical role to play in their organization’s security. It is important to educate staff on the potential cyber risks, to ensure users can do their job as well as help keep the organization secure.

6. Incident management

All organizations will experience security incidents at some point. Investment in creating effective incident management policies and processes will help to improve resilience, support business continuity, improve customer and stakeholder confidence and potentially reduce any impact. You should identify recognized sources (internal or external) of specialist incident management expertise.

7. Malware prevention

Any exchange of information carries with it a degree of risk that malware might be exchanged, which could seriously impact your systems and services. The risk may be reduced by developing and implementing appropriate anti-malware policies.

8. Monitoring

System monitoring aims to detect actual or attempted attacks on systems and business services. Good monitoring is essential in order to effectively respond to attacks.

9. Removable media controls

Produce a policy to control all access to removable media. Limit media types and use. Scan all media for malware before importing onto the corporate system.

10. Home and mobile working

Mobile working and remote system access offer great benefits, but they expose new risks that need to be managed. Train users on the secure use of their mobile devices in the environments they are likely to be working in.

Various Cyber Security Standards

Cyber security standards are generally applicable to all organizations regardless of their size or the industry and sector in which they operate. Following are some standards currently in use in the industry:

PAS 555: PAS 555 takes the approach of describing the appearance of effective cyber security. That is, rather than specifying how to approach a problem, it describes what the solution should look like.

ISO/IEC 27001: A rigorous and comprehensive specification for protecting and preserving your information under the principles of confidentiality, integrity and availability.

ISO/IEC 27032: This standard recognizes the vectors that cyber attacks rely upon, including those that originate outside cyber space itself. Further, it includes guidelines for protecting your information beyond the borders of your organization, such as in partnerships, collaborations or other information-sharing arrangements with clients and suppliers.

CCM: The Cloud Security Alliance’s Cloud Controls Matrix (CCM) is a set of controls designed to maximize the security of information for organizations that take advantage of Cloud technologies. The benefits of Cloud technologies are well known, but there has been resistance to the uptake from some organizations due to the perceived risks of storing and processing data beyond their own physical and logical perimeter.

ISO/IEC 27035: Incident management forms the crucial first stage of cyber resilience. While cyber security management systems are designed to protect your organization, it is essential to be prepared to respond quickly and effectively when something does go wrong.

ISO/IEC 27031: This standard bridges the gap between the incident itself and general business continuity, and forms a key link in the chain of cyber resilience.

ISO/IEC 22301: This standard not only focuses on the recovery from disasters, but also on maintaining access to, and security of, information, which is crucial when attempting to return to full and secure functionality.

THE INDIAN CYBERSPACE

Indian cyberspace was born in 1975 with the establishment of the National Informatics Centre (NIC), with the aim of providing the government with IT solutions. Three networks (NWs) were set up between 1986 and 1988 to connect various agencies of the government: INDONET, which connected the IBM mainframe installations that made up India’s computer infrastructure; NICNET (the NIC NW), a nationwide very small aperture terminal (VSAT) NW for public sector organizations as well as for connecting the central government with the state governments and district administrations; and ERNET (the Education and Research Network), set up to serve the academic and research communities.

The New Internet Policy of 1998 paved the way for services from multiple Internet service providers (ISPs) and helped the Internet user base grow from 1.4 million in 1999 to over 150 million by December 2012. This exponential growth rate is attributed to increasing Internet access through mobile phones and tablets. The government is making a determined push to increase broadband penetration from its present level of about 6%. The target for broadband is 160 million households by 2016 under the National Broadband Plan.

Various Cyber acts in India

·         Information Technology Act, 2000 

·         Information Technology (Amendment) Act, 2008

·         Indian Penal Code, 1860

·         Indian Evidence Act, 1872

·         Companies Act of 2013

·         The Bankers’ Books Evidence Act, 1891

Information Technology Act, 2000 

The Indian cyber laws are governed by the Information Technology Act, enacted in 2000. The principal impetus of this Act is to offer reliable legal recognition to e-commerce, facilitating registration of real-time records with the Government. But with cyber attackers getting sneakier, topped by the human tendency to misuse technology, a series of amendments followed. The ITA, enacted by the Parliament of India, lays down the grievous punishments and penalties safeguarding the e-governance, e-banking, and e-commerce sectors. The scope of the ITA has since been enhanced to encompass all the latest communication devices. The IT Act is the salient legislation governing cyber crimes in India; its key sections include:

Section 43 - Applicable to people who damage computer systems without permission from the owner. The owner can claim full compensation for the entire damage in such cases.

Section 66 - Applicable in case a person is found to have dishonestly or fraudulently committed any act referred to in Section 43. The imprisonment term in such instances can be up to three years or a fine of up to Rs. 5 lakh.

Section 66B - Incorporates the punishments for fraudulently receiving stolen communication devices or computers, which carries a probable three-year imprisonment. This term can also be topped by a Rs. 1 lakh fine, depending upon the severity.

Section 66C - This section scrutinizes identity thefts involving imposter digital signatures, hacked passwords, or other distinctive identification features. If proven guilty, imprisonment of three years might also be backed by a Rs. 1 lakh fine.

Section 66D - This section was inserted on demand, focusing on punishing those who cheat by impersonation using computer resources.

National Cyber Security Policy 2013

It is a policy framework drawn up by the Department of Electronics and Information Technology (DeitY), which aims at protecting public and private infrastructure from cyber attacks.

VISION

To build a secure and resilient cyberspace for citizens, business and government.

MISSION

To protect information and information infrastructure in cyberspace, build capabilities to prevent and respond to cyber threat, reduce vulnerabilities and minimize damage from cyber incidents through a combination of institutional structures, people, processes, technology and cooperation.

OBJECTIVE

·         To create a secure cyber ecosystem in the country, generate adequate trust and confidence in IT systems and transactions in cyberspace, and thereby enhance adoption of IT in all sectors of the economy.

·         To create an assurance framework for the design of security policies and for promotion and enabling actions for compliance with global security standards and best practices by way of conformity assessment (product, process, technology and people).

·         To strengthen the regulatory framework for ensuring a secure cyberspace ecosystem.

·         To enhance and create national and sectoral level 24x7 mechanisms for obtaining strategic information regarding threats to ICT infrastructure.

·         To improve visibility of the integrity of ICT products and services by establishing infrastructure for testing and validation of the security of such products.

·         To provide fiscal benefits to businesses for adoption of standard security practices and processes.

·         To enable effective prevention, investigation and prosecution of cybercrime.

·         To develop a culture of cyber security and privacy.

·         To safeguard the privacy of citizens’ data and reduce economic losses due to cybercrime or data theft.

·         To create a workforce of 500,000 professionals skilled in cyber security in the next five years.

·         To develop suitable indigenous security technologies to address requirements in this field.

STRATEGIES

  •          Creating a secure ecosystem.
  •          Creating an assurance framework.
  •          Encouraging open standards.
  •          Strengthening the regulatory framework.
  •          Creating mechanisms for early warning of security threats, vulnerability management, and response to security threats.
  •          Securing e-governance services.
  •          Protection and resilience of critical information infrastructure.
  •          Promotion of research and development in cyber security.
  •          Reducing supply chain risks.
  •          Human resource development (fostering education and training programs, in both formal and informal sectors, to support the nation’s cyber security needs and build capacity).
  •          Creating cyber security awareness.
  •          Developing effective public-private partnerships.
  •          Developing bilateral and multilateral relationships in the area of cyber security with other countries.
  •          Prioritized approach for implementation.
  •          Operationalization of the policy.

CYBER FORENSICS

INTRODUCTION TO CYBER FORENSICS

Cyber forensics and incident response go hand in hand. Cyber forensics reduces the occurrence of security incidents by analyzing each incident to understand it, mitigate it, and provide feedback to the actors involved. To perform incident response and related activities, organizations should establish an incident response plan and a computer security incident response team (CSIRT) or computer emergency response team (CERT) to execute the plan and associated protocols.

Responding to Incidents

Generally, incidents are events that violate an organization’s security policies, end user agreements, or terms of use.

Example:

Denial-of-service (DoS) attacks, unauthorized probing, unauthorized entry, destruction or theft of data, and changes to firmware or operating systems (OSs).

Generally, incident response handling is composed of incident reporting, incident analysis, and incident response. This is followed by the creation of a detailed report about the incident.

Applying Forensic Analysis Skills

Forensic analysis is usually applied to determine who, what, when, where, how, and why an incident took place. The analysis may include investigating crimes and inappropriate behavior, reconstructing computer security incidents, troubleshooting operational problems, supporting due diligence for audit record maintenance, and recovering from accidental system damage.

The incident response team should be trained and prepared to be able to collect and analyze the related evidence to answer these questions.

The incident responder needs to have the necessary skills and experience to be able to meet the collection requirements.

Forensic analysis is the process where the collected data is reviewed. It may involve extracting email attachments, building timelines based on file times, review of browser history, in-memory artifact review, decryption of encrypted data, and malware reverse engineering. Once the analysis is complete, the incident responder will produce a report describing all the steps taken starting from the initial incident report until the end of the analysis.

One of the most important skills a forensic analyst can have is note-taking and logging, which becomes very important during the reporting phase and, if it ever comes to it, in court. These considerations related to forensics should be addressed in organizational policies. The forensic policy should clearly define the responsibilities and roles of the actors involved. The policies should also address the types of activities that should be undertaken under certain circumstances and the handling of sensitive information.

Distinguishing Between Unpermitted Corporate and Criminal Activity

The incident response team should also be aware of several federal laws that can help them to identify criminal activity to ensure that the team does not commit a crime while responding to the incident. Some of these federal laws include:

  •          The Foreign Intelligence Surveillance Act of 1978
  •          The Privacy Protection Act of 1980
  •          The Computer Fraud and Abuse Act of 1984
  •          The Electronic Communications Privacy Act of 1986
  •          Health Insurance Portability and Accountability Act of 1996 (HIPAA)
  •          Identity Theft and Assumption Deterrence Act of 1998
  •          The USA PATRIOT Act of 2001

When an incident response team comes across incidents relevant to these laws, they should consult with their legal team. They should also contact appropriate law enforcement agencies.

HANDLING PRELIMINARY INVESTIGATIONS

An organization should be prepared beforehand to properly respond to incidents and mitigate them in the shortest time possible. An incident response plan should be developed by the organization and tested on a regular basis.

Planning for Incident Response

Planning for incident response involves the following activities:

Communicating with Site Personnel

All departments and staff that have a part in an incident response should be aware of the incident response plan and should be regularly trained on its content and implementation.

The plan should include the mode of communication with the site personnel. The site personnel should clearly log all activity and communication, including the date and time, in a central repository that is backed up regularly. This information should be reviewed by all of the incident response team members to assure all players are on the same page. Continuity and the distribution of information within the team are critical to the swift mitigation of an incident.

Knowing Your Organization’s Policies

An organization’s policies will have an impact on how incidents are handled. Generally, policies should allow the incident response team to monitor systems and networks and perform investigations for the reasons described in the policies. Policies may be updated frequently to keep up with changes to laws and regulations, court rulings, and jurisdictions.

Forensics policies define the roles and responsibilities of the staff involved, including users, incident handlers, and IT staff. The policy indicates when to contact internal teams or reach out to external organizations. It should also discuss how to handle issues arising from jurisdictional conflicts. Policies also discuss the valid use of anti-forensics tools and techniques. How to maintain the confidentiality of data, and the retention time of the data, is also governed by organizational policies.

Minimizing Impact on Your Organization

Incident response teams should minimize the downtime of business-critical systems once the evidence has been gathered and the systems have been cleared of the effects of the incident. Incident response teams should also identify an organization’s risks and work with appropriate teams to continuously test for and eliminate any vulnerability. Red team/blue team exercises, where one team plays the role of malicious actors and the other team acts as incident responders, can provide good training for the staff and expose previously unknown risks and vulnerabilities.

Identifying the Incident Life Cycle

SANS (sans.org) defines the phases of the incident life cycle as follows:

Preparation

Being prepared for an incident should be a top priority for an organization. An organization must establish security plans and controls, make sure these plans and controls are continuously reviewed and updated to keep up with evolving threats, and make sure they are enforced in case of an incident. Organizations should be prepared to act swiftly to minimize the impact of any incident and maintain business continuity. Incident response teams should continuously train, test, and update the incident response plan to keep their skills honed.

Detection, Collection, and Analysis

The detection of an incident involves the observation and reporting, by security or IT department staff members or customers, of irregularities or suspicious activities. Once an event has been reported and escalated to the incident response team, the event is evaluated to determine if it warrants classification as an incident. If the event has been classified as an incident, the incident response team should move in to perform data collection on the affected systems that will later be used for analysis. During collection, it is important to work in accordance with the organization’s policies and procedures and preserve a valid chain of custody. The person involved in collecting the data should make sure that the integrity of the data is maintained on both the original and working copies of the evidence.

Once the relevant information has been captured, the incident response team should analyze the data to determine who, what, when, where, how, and why an incident took place.

Containment, Eradication, and Recovery

Once the involved systems have been analyzed, the incident response team should move in to contain the problem and eradicate it. It is crucial to contain an incident as fast as possible to minimize its impact on the business. Containment and eradication should strive to protect service integrity, sensitive data, hardware, and software.

The recovery phase depends on the extent of the incident.

For example, an intrusion that was detected while it was affecting a single user is easier to recover from than an intrusion where the lateral movement of the intruder is extensive. Most of the time, recovery involves using backups of the unaffected data on the new systems.

Post-incident Activity

The post-incident phase involves documenting, reporting, and reviewing the incident. Documentation actually starts as soon as an event has been classified as an incident. The report should include all of the documentation compiled during the incident, the analysis methods and techniques, and all other findings. The person writing the report should keep in mind that the report might someday be used by law enforcement or in court. Finally, the incident response team should go over the report with the IT department and other involved parties to discuss how to improve the infrastructure to prevent similar incidents.

Capturing Volatile Information

Computer systems contain volatile data that is temporarily available either until a process exits or until the system is shut down. Therefore, it is important to capture this data before making any physical or logical changes to the system, to avoid tampering with evidence. Many incident responders have destroyed memory-only resident artifacts by shutting down a system in the name of containment.

Volatile data is available as system memory (including slack and free space), network configuration, network connections and sockets, running processes, open files, login sessions, and OS time. System memory can be captured as a file by using sampling tools (MoonSols Windows Memory Toolkit, GMG Systems’ KnTDD) and analyzed with the Volatility Framework to obtain the volatile data previously mentioned. The volatile data can also be captured individually with tools that are specific to each data type. The Microsoft Windows Sysinternals suite provides an extensive set of tools that can capture volatile data, such as login sessions, Registry contents, process information, service information, shares, and loaded dynamic-link libraries (DLLs).
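As an illustration of scripted volatile-data collection, the following minimal sketch snapshots logged-in sessions, running processes, and network connections. It assumes the third-party psutil library (an assumption; the text names Sysinternals and Volatility), and a real response would use vetted, forensically validated tooling.

    # Minimal volatile-data snapshot. Assumes the third-party psutil
    # library (pip install psutil); illustrative only, not a validated
    # forensic collection tool.
    import datetime
    import psutil

    def snapshot(path="volatile_snapshot.txt"):
        with open(path, "w") as out:
            out.write(f"Capture time (UTC): {datetime.datetime.utcnow().isoformat()}\n")
            boot = datetime.datetime.fromtimestamp(psutil.boot_time())
            out.write(f"Boot time: {boot.isoformat()}\n")
            out.write("\n[Logged-in sessions]\n")
            for u in psutil.users():
                out.write(f"{u.name} tty={u.terminal} host={u.host}\n")
            out.write("\n[Running processes]\n")
            for p in psutil.process_iter(attrs=["pid", "name", "username"]):
                out.write(f"{p.info['pid']:>6} {p.info['username']} {p.info['name']}\n")
            out.write("\n[Network connections and sockets]\n")
            for c in psutil.net_connections(kind="inet"):
                out.write(f"{c.laddr} -> {c.raddr} {c.status}\n")

    if __name__ == "__main__":
        snapshot()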

CONTROLLING AN INVESTIGATION

To control an investigation, the incident response team should have a forensics investigation plan, a forensics toolkit, and documented methods to secure the affected environment. An investigator should always keep in mind that the evidence collected and the analysis performed might be presented in court or used by law enforcement.

Related documentation should be detailed and contain dates and times for each activity performed. To avoid challenges to the authenticity of evidence, investigators should be able to secure the suspect infrastructure, log all activity, and maintain a chain of custody.

Collecting Digital Evidence

It is important for an investigator to preserve data related to an incident as soon as possible, to avoid the rapid degradation or loss of data in digital environments. Once the affected systems have been determined, volatile data should be captured immediately, followed by nonvolatile data, such as system users and groups, configuration files, password files and caches, scheduled jobs, system logs, application logs, command history, recently accessed files, executable files, data files, swap files, dump files, security software logs, hibernation files, temporary files, and a complete file listing with times.

Chain of Custody and Process Integrity

The incident response team should be committed to collecting and preserving evidence using methods that can support future legal or organizational proceedings. A clearly defined chain of custody is necessary to avoid allegations of evidence tampering. To accomplish this task, the team should keep a log of every entity who had physical custody of the evidence, document all of the actions performed on the evidence with the related date and time, make a working copy of the evidence for analysis, verify the integrity of the original and working copies, and store the evidence in a secured location when not in use. Also, before touching a physical system, the investigator should take a photograph of it. To ensure the integrity of the process, a detailed log should be kept of all the collection steps, including information about every tool used in the incident response process.
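A custody log can be kept in something as simple as an append-only file. The sketch below is a minimal illustration; the field names and CSV layout are assumptions for demonstration, not a legal standard.

    # Append-only chain-of-custody log sketch. Field names and CSV
    # layout are illustrative assumptions, not a legal standard.
    import csv
    import datetime
    import os

    LOG = "custody_log.csv"
    FIELDS = ["utc_time", "evidence_id", "custodian", "action", "location", "notes"]

    def log_custody(evidence_id, custodian, action, location, notes=""):
        """Record one custody event; every action on evidence gets a row."""
        write_header = not os.path.exists(LOG)
        with open(LOG, "a", newline="") as f:
            writer = csv.writer(f)
            if write_header:
                writer.writerow(FIELDS)
            writer.writerow([datetime.datetime.utcnow().isoformat(),
                             evidence_id, custodian, action, location, notes])

    log_custody("HDD-0001", "J. Doe", "checked out for imaging", "forensics lab safe")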

Advantages of Having a Forensics Analysis Team

It has become evident to organizations that maintaining the capability to perform forensic analysis has become a business requirement to satisfy organizational and customer needs. While it may make sense for some organizations to maintain an internal team of forensic analysts, some might find it more beneficial to hire outside parties to carry out this function. Organizations should take cost, response time, and data sensitivity into consideration before making this decision. Keeping an internal forensic analysis team might reduce cost depending on the scale of the incident, provide faster response due to familiarity with the infrastructure, and prevent sensitive data from being viewed by third parties.

Legal Aspects of Acquiring Evidence: Securing and Documenting the Scene

Securing the physical scene and documenting it (photographing the system setup, cabling, and the general area) should be among the first steps an incident responder takes. The incident response team should keep an inventory of evidence-handling supplies (chain-of-custody forms, notebooks, evidence storage bags, evidence tape), blank media, backup devices, and forensics workstations.

Processing and Logging Evidence

The goal of an investigation is to collect and preserve evidence that can be used in internal proceedings or a court of law. To properly process and log evidence, investigators should keep the evidence within a secured and controlled environment where all access is logged, and should document the collected evidence and its circulation among investigative entities. We cannot stress enough how important it is to associate each activity with a date and time.

CONDUCTING DISC-BASED ANALYSIS

To be able to process evidence in a manner that is admissible in a court of law, a lab and accompanying procedures should be established. This will ensure that the data integrity is not breached and the data remains confidential: in other words, the evidence remains forensically sound.

Forensics Lab Operations

A forensic lab becomes a necessity. The lab should be established in a physically secure building that is monitored 24/7, should have a dedicated staff, should have regularly upgraded and updated workstations dedicated to forensic analysis with related software installed, and should have a disaster recovery plan in place.

Acquiring a Bit-Stream Image

Acquiring a bit-stream image involves producing a bit-by-bit copy of a hard drive on a separate storage device. By creating an exact copy of a hard drive, an investigator preserves all data on a disc, including currently unused and partially overwritten sectors. The imaging process should not alter the original hard drive, to preserve the copy’s admissibility as evidence. Selecting a proper imaging tool is crucial to producing a forensically sound copy. The National Institute of Standards and Technology (NIST) lists the requirements for a drive imaging tool as follows:

· The tool shall make a bit-stream duplicate or an image of an original disc or a disc partition on fixed or removable media.

· The tool shall not alter the original disc.

· The tool shall be able to access both integrated drive electronics (IDE) and small computer system interface (SCSI) discs.

· The tool shall be able to verify the integrity of a disc image file.

· The tool shall log input/output (I/O) errors.

· The tool’s documentation shall be correct.

The imaging of a hard drive can be performed using specialized hardware tools or by using a combination of computers and software.

Specialized Hardware

The Image MASSter Solo series of hard drive duplicators generally supports serial advanced technology attachment (SATA), IDE, Universal Serial Bus (USB), external SATA (eSATA), and serial-attached SCSI (SAS) hard drives, as well as flash memory devices. These duplicators can hash the disc images besides providing write-blocking to ensure the integrity of the copies.

Software: Linux

The dd utility (and its forensic variant dcfldd) has been fully tested and vetted by NIST as a forensic imaging tool. It is a freeware utility available on any Linux-based system and can copy every sector of a hard drive.
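To make the behavior of such imagers concrete, here is a minimal Python sketch of what a bit-stream copy does: read the source block by block, log read errors, and pad unreadable blocks with zeros (roughly what dd's conv=noerror,sync options do). The device and file names are placeholders; real acquisitions should use the vetted tools above behind a write blocker.

    # Bit-stream imaging sketch in the spirit of dd conv=noerror,sync:
    # block-by-block copy, I/O errors logged (a NIST requirement),
    # unreadable blocks padded with zeros. Illustrative only.
    import os

    BLOCK = 512  # classic sector size

    def image_drive(src_path, dst_path, log_path="imaging_errors.log"):
        src = os.open(src_path, os.O_RDONLY)
        try:
            with open(dst_path, "wb") as dst, open(log_path, "w") as log:
                offset = 0
                while True:
                    try:
                        block = os.read(src, BLOCK)
                    except OSError as err:
                        log.write(f"read error at offset {offset}: {err}\n")
                        block = b"\x00" * BLOCK              # pad the bad block
                        os.lseek(src, offset + BLOCK, os.SEEK_SET)
                    if not block:
                        break                                # end of device
                    dst.write(block)
                    offset += BLOCK
        finally:
            os.close(src)

    # e.g. image_drive("/dev/sdb", "evidence.dd")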

Windows

The AccessData Forensic Toolkit (FTK) Imager tool is a commercial disc-imaging tool distributed by AccessData. FTK supports storage of disc images in EnCase’s file format, as well as in bit-by-bit (dd) format.

Enabling a Write Blocker

Write blockers are hardware- or software-based tools that allow the acquisition of hard drive images while preventing any data from being written to the source hard drive, thereby ensuring the integrity of the data involved. Write blockers do this by allowing only read commands to pass through, by blocking write commands, or by letting only specific commands through.

Some hardware write blockers used in the industry are as follows:

  • Tableau Forensic Bridges
  • WiebeTech WriteBlocker

Establishing a Baseline

It is important to maintain the integrity of the data being analyzed throughout the investigation. When dealing with disc drives, to maintain integrity, calculating the hashes of the analyzed images becomes crucial. Before copying or performing any analysis, the investigator should take a baseline hash of the original drives involved. The hash could be either just MD5 or a combination of MD5, SHA-1, and SHA-512. The baseline hash can be compared with hashes of any copies that are made thereafter for analysis or backup to ensure that the integrity of the evidence is maintained.
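A minimal sketch of baseline hashing with Python's standard hashlib module, computing the MD5, SHA-1, and SHA-512 combination mentioned above and comparing a working copy against the baseline (the file names are placeholders):

    # Compute baseline hashes of an image and verify a working copy.
    import hashlib

    def hash_image(path):
        digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha512")}
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):   # 1 MiB chunks
                for d in digests.values():
                    d.update(chunk)
        return {name: d.hexdigest() for name, d in digests.items()}

    baseline = hash_image("evidence.dd")        # taken before any analysis
    working = hash_image("working_copy.dd")     # copy used for analysis
    assert working == baseline, "working copy does not match the baseline!"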

Physically Protecting the Media

After making copies of the original evidence hard drives, they should be stored in a physically secure location, such as a safe in a secured storage facility. These drives could be used as evidence in the event of prosecution. The chain of custody should also be maintained by labeling the evidence and keeping logs of date, time, and persons the evidence has come in contact with. During transportation, the hard drives should be placed in antistatic bags and should not be exposed to harsh environmental conditions. If possible, photographs of the evidence should be taken whenever they are processed, starting from the original location to the image acquisition stages.

Disc Structure and Recovery Techniques

There are different kinds of storage media: hard disc drives (HDDs), solid state drives (SSDs), digital video discs (DVDs), compact discs (CDs), flash memory, and others. An investigator needs to be mindful of how each medium stores data differently. For example, while data in the unused space on an HDD is preserved as long as new data is not written, the data in the unused space of an SSD is destroyed within minutes of switching it on.

Disc Geometry Components

With regard to HDD geometry, the surface of each HDD platter is arranged in concentric magnetic tracks on each side. To make accessing data more efficient, each track is divided into addressable sectors or blocks. This organization is known as formatting. Sectors typically contain 512 bytes or 2048 bytes of data in addition to the address information; newer HDDs use 4096-byte sectors. The HDD controller uses the format and address information to locate the specific data processed by the OS.

With regard to SSD geometry, compared to HDDs, SSDs store data in 512-kilobyte sectors or blocks, which are in turn divided into 4096-byte-long pages. These structures are located in arrays of NAND (Negated AND or NOT AND) transistors.

Inspecting Windows File System Architectures

A file system provides a way of organizing a drive. File systems can be defined in six layers:

  • Physical (absolute sectors),
  • Data classification (partitions),
  • Allocation units (clusters),
  • Storage space management (File Allocation Table [FAT] or Master File Table [MFT]),
  • Information classification (folders),
  • Application-level storage (files).

Knowing these layers will guide the investigator as to what tool is needed to extract information from the file system. Windows file systems have gone through an evolution starting from FAT and continuing to New Technology File System (NTFS).

File Allocation Table (FAT)

A file allocation table (FAT) is a table maintained by the operating system on a hard disk that provides a map of the clusters in which a file has been stored. The operating system creates a FAT entry for every new file that records each cluster’s location and its sequential order. When a file is read, the operating system (OS) reassembles it from its clusters in the recorded order. FAT was designed to manage hard drives and their subdirectories.

This file system has incarnations, such as FAT12, FAT16, FAT32, and exFAT.
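The cluster map can be pictured as a simple lookup structure. The toy sketch below models following a FAT cluster chain; the cluster numbers and the FAT12-style end-of-chain marker are invented for illustration.

    # Toy model of following a file's cluster chain through a FAT.
    # Cluster values and the end-of-chain marker are illustrative.
    EOC = 0xFFF                       # FAT12-style end-of-chain marker

    fat = {2: 5, 5: 6, 6: 9, 9: EOC}  # fat[cluster] -> next cluster

    def cluster_chain(start):
        """Collect the clusters of a file in their sequential order."""
        chain = []
        cluster = start
        while cluster != EOC:
            chain.append(cluster)
            cluster = fat[cluster]
        return chain

    print(cluster_chain(2))           # [2, 5, 6, 9]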

New Technology File System (NTFS)

As the name suggests, NTFS was developed to overcome the limitations inherent in the FAT file system: the lack of access control lists (ACLs) on file system objects, journaling, compression, encryption, named streams, rich metadata, and many other features.

The journaling feature of NTFS makes it capable of recovering itself by automatically restoring the consistency of the file system when an error takes place. It should also be noted that NTFS file times are stored in Coordinated Universal Time (UTC), whereas FAT uses the OS’s local time.

There are mainly two artifacts in NTFS that interest a forensics investigator: the MFT and alternate data streams (ADS).

Master File Table (MFT)

The NTFS file system contains a file called the Master File Table, or MFT. There is at least one entry in the MFT for every file on an NTFS file system volume, including the MFT itself. All information about a file, including its size, time and date stamps, permissions, and data content, is stored either in MFT entries, or in space outside the MFT that is described by MFT entries.

As files are added to an NTFS file system volume, more entries are added to the MFT and the MFT increases in size. When files are deleted from an NTFS file system volume, their MFT entries are marked as free and may be reused. However, disk space that has been allocated for these entries is not reallocated, and the size of the MFT does not decrease.

Alternate Data Streams (ADS)

Alternate Data Streams are used to store additional information with a file, such as file access/modification times. Some applications legitimately use them to store metadata about a file.

NTFS supports multiple data streams for files and folders. A file is composed of an unnamed stream that contains the actual file data, plus any additional named streams (mainfile.txt:one-stream).

Since the file size does not change with the addition of ADSs, it becomes difficult to detect their existence. Open source forensics tools, such as The Sleuth Kit (TSK), can be used to parse MFT entries and reveal the existence of ADSs. Specifically, the TSK command fls can be used to list the files and the associated ADSs.

Locating and Restoring Deleted Content

Files can be fully or partially recovered depending on the method of deletion, the time elapsed since the deletion, and drive fragmentation. A deleted or unlinked file is one whose MFT entry has been marked as unallocated and that is no longer present in the user’s view. The file can be recovered based on the metadata still present in the MFT entry, provided that too much time has not passed since the deletion.

TSK can be used to parse the MFT to locate and recover these files. The investigator would need to execute the command fls to get a listing of the deleted file’s inode and use that inode to extract the file data with the command icat.
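A sketch of that workflow, driving the TSK command-line tools from Python (assuming TSK is installed; the image name and inode are placeholders):

    # Deleted-file recovery sketch using The Sleuth Kit's fls and icat.
    import subprocess

    image = "evidence.dd"            # placeholder image name

    # List deleted entries recursively; fls flags deleted files with '*'.
    listing = subprocess.run(["fls", "-r", "-d", image],
                             capture_output=True, text=True, check=True)
    print(listing.stdout)

    # Extract the content of a chosen entry by its inode from the listing.
    inode = "1234"                   # placeholder taken from the fls output
    data = subprocess.run(["icat", image, inode],
                          capture_output=True, check=True)
    with open("recovered_file.bin", "wb") as out:
        out.write(data.stdout)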

For overwritten files, the MFT entries and content have been reallocated or reused, and complete recovery is not possible.

INVESTIGATING INFORMATION HIDING TECHNIQUES

Hidden data can exist due to regular OS activities or deliberate user activities. This type of data includes ADS, information obscured by malicious software, data encoded in media (steganography), hidden system files, and many others.

Uncovering Hidden Information

Collection of hidden data can be a challenge for an investigator. The investigator needs to be aware of the different data hiding techniques to employ the proper tools.

Scanning and Evaluating Alternate Data Streams

Open source forensics tools, such as TSK, can be used to parse MFT entries and reveal the existence of ADSs. Specifically, the TSK command fls can be used to list the files and the associated ADSs, as seen in Table A. In this example, we can see that the file ads-file.txt contains two streams, named suspicious.exe and anotherstream.

The number seen at the beginning of each listing is the inode. This value identifies each file and folder in the file system. We should note that 63 bytes were skipped starting from the beginning of the drive, since that data belongs to the master boot record (MBR). To extract the file data from the file system, the TSK command icat can be used in combination with the inode values, as seen in Table B.

Executing Code From a Stream

Malicious software can attempt to hide its components in ADSs to obscure them from investigators. Such components could be executable files. Executable ADSs can be launched with the Windows start command or by other scripting languages, such as VBScript or Perl, by referring to the ADS file directly: start ads-file.jpg:suspicious.exe.

An executable hidden in an ADS can be automatically launched on system startup by creating, under the Windows registry key “HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run”, a string value containing the full path of the ADS file.

Steganography Tools and Concepts

Steganography is the science of hiding secret messages in nonsecret messages or media in a manner that only the person who is aware of the mechanism can successfully find and decode. Messages can be hidden in images, audio files, videos, or other computer files without altering the actual presentation or functionality. While steganography is about hiding the message and its transmission, cryptography only aims to obscure the message content itself through various algorithms. Steganography can be performed by using the least significant bits in image files, placing comments in the source code, altering the file header, spreading data over a sound file’s frequency spectrum, or hiding encrypted data in pseudorandom locations in a file.
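The least-significant-bit technique mentioned above can be sketched in a few lines of Python. This minimal example assumes the Pillow imaging library and a lossless cover format such as PNG (both assumptions, not tools named in the text); each message bit replaces the lowest bit of one pixel channel value.

    # LSB steganography sketch (assumes Pillow: pip install Pillow).
    from PIL import Image

    def embed(cover_path, out_path, message):
        img = Image.open(cover_path).convert("RGB")
        payload = message.encode() + b"\x00"       # NUL terminates the message
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        flat = [v for px in img.getdata() for v in px]   # channel values
        if len(bits) > len(flat):
            raise ValueError("message too long for this cover image")
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & ~1) | bit         # overwrite the lowest bit
        img.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
        img.save(out_path)                         # keep it lossless (PNG/BMP)

    def extract(stego_path):
        flat = [v for px in Image.open(stego_path).convert("RGB").getdata() for v in px]
        out = bytearray()
        for i in range(0, len(flat) - 7, 8):
            byte = sum((flat[i + j] & 1) << j for j in range(8))
            if byte == 0:                          # NUL: end of message
                break
            out.append(byte)
        return out.decode()

    embed("cover.png", "stego.png", "meet at dawn")
    print(extract("stego.png"))                    # -> meet at dawn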

There are several tools that perform steganography:

  •  S-Tools is a freeware steganography tool that hides files in BMP, GIF, and WAV files.
  • Spam Mimic is a freeware steganography tool that embeds messages in spam email content.
  • Snow is a freeware steganography tool that encodes message text by appending white space characters to the end of lines.
  • OutGuess is an open source tool that hides messages in the redundant bits of data sources.

Detecting Steganography

During an incident, an investigator might suspect that steganography has been used by the suspect due to an admission, a discovery of specific tools, or other indicators.

Traces of the use of steganography tools can be found in the most recently used (MRU) key, the UserAssist key, and the MUICache key in the Windows registry; in prefetch files, web browser history, and deleted file information in the file system; and in the Windows Search Assistant utility. File artifacts generated by these tools can also be a good indicator of the tools’ use.

Steganalysis tools can also be used to detect the presence of steganography:

·     Stegdetect is an open source steganalysis tool that is capable of detecting steganographic content in images.

·     StegSpy is a freeware steganalysis tool.

Scavenging Slack Space

File slack, or slack space, is the leftover storage that exists on a computer's hard disk drive when a computer file does not need all the space it has been allocated by the operating system. The examination of slack space is an important aspect of computer forensics.

Slack space is a source of information leak, which can result in password, email, registry, event log, database entries, and word processing document disclosures.

File slack space can also be used to hide information by malicious users or software, which can get challenging if the investigator is not specifically looking for such behavior.

Volume slack is the space that remains on a drive when it’s not used by any partition.

Inspecting Header Signatures and File Mangling

Users or malware with malicious intent can alter file names or the files themselves to hide files that are used to compromise systems or that contain data gathered as a result of their malicious actions. These techniques include, but are not limited to, renaming files, embedding malicious files in regular files (PDF, DOC, Flash), binding multiple executables into a single executable, and changing file times to avoid event time-based analysis.

For example, a malicious Windows executable “bad.exe” can be renamed to “interesting.pdf” and be served by a web page to an unsuspecting user. Depending on the web browser, the user will be prompted with a dialog asking whether they would like to run the program, and most of the time the user will dismiss the dialog by clicking the OK button.

To analyze a file disguised with a different file extension, a header-based file type checker, such as the Linux “file” command or the tool TrID (also available on Windows), can be used. Table A provides a sample of the malware Trojan.Spyeye, hidden in a file with an Acrobat PDF document extension, being detected by the file tool.
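A header-based check boils down to comparing a file's first bytes against known signatures. A minimal sketch (with an intentionally abbreviated signature table and a placeholder file name):

    # Compare a file's magic bytes against its claimed extension.
    SIGNATURES = {
        b"%PDF": "pdf",
        b"MZ": "exe",                    # Windows PE executable
        b"\x89PNG\r\n\x1a\n": "png",
        b"PK\x03\x04": "zip",            # also Office (docx/xlsx) containers
    }

    def detect_type(path):
        with open(path, "rb") as f:
            header = f.read(16)
        for magic, ftype in SIGNATURES.items():
            if header.startswith(magic):
                return ftype
        return "unknown"

    path = "interesting.pdf"             # placeholder file name
    actual = detect_type(path)
    claimed = path.rsplit(".", 1)[-1].lower()
    if actual != claimed:
        print(f"{path}: extension says {claimed!r} but header says {actual!r}")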

Combining Files

Combining files is a very popular method among malware creators. Common file formats, such as Microsoft Office files, Adobe PDF, and Flash files can be used as containers to hide malicious executables. One example is a technique where a Windows executable is embedded in a PDF file as an object stream and marked with a compression filter. The Metasploit Framework provides several plugins to generate such files for security professionals to conduct social engineering in the form of phishing attacks.

To discover such embedding, an investigator can use Didier Stevens’s tool PDF-parser to view the objects in a PDF file.

Binding Multiple Executable Files

Binding multiple executable files provides the means to pack all dependencies and resource files a program might need while running into a single file. This is advantageous since it permits a malicious user to leave a smaller footprint on a target system and makes it harder for an investigator to locate the malicious file.

Certain tools, such as the WinZip Self-Extractor, nBinder, or File Joiner, can create one executable file by archiving all related files, whose execution will be controlled by a stub executable. When executed, the files will be extracted and the contained program will be launched automatically. Some of these file binders can produce files that can’t be detected by some antivirus products, and if such a file is downloaded and run by an unsuspecting user, it can result in a system compromise.

File Time Analysis

File time analysis is one of the techniques most used by investigators. File times are used to build a story line that could potentially reveal how and when an event on a system caused a compromise. The file time of a malicious executable could be linked to a user’s browser history to find out which sites were visited before the compromise occurred.

The problem with this type of analysis is that sometimes the file times can be tampered with and can’t be relied upon as evidence.
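A file-time timeline can be sketched with nothing more than the standard library: walk a directory tree, collect modification times, and sort. The root path is a placeholder, and the caveat above applies: timestamps are leads, not proof.

    # Build a simple modification-time timeline of a directory tree.
    import datetime
    import os

    def timeline(root):
        events = []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                events.append((os.stat(path).st_mtime, path))
        for mtime, path in sorted(events):
            print(datetime.datetime.fromtimestamp(mtime).isoformat(), path)

    timeline("C:/Users/suspect")         # placeholder root path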

SCRUTINIZING EMAIL

While many noncommercial users favor webmail nowadays, most corporate users are still using local email clients, such as Microsoft Outlook or Mozilla Thunderbird. Therefore, we should still look at extracting and analyzing email content from local email stores. Email message analysis might reveal information about the sender and recipient, such as email addresses, IP addresses, date and time, attachments, and content.

Investigating the Mail Client

An email user will generally utilize a local client to compose and send their messages. Depending on the user’s configuration, the sent and received messages will exist in the local email database. Deleted e-mails can also be stored locally for some time, depending on the user’s preferences.

Most corporate environments utilize Microsoft Outlook. Outlook stores mail in the personal storage table (PST) or offline storage table (OST) format. Multiple PST files can exist in various locations on the user’s file system and can provide valuable information to an investigator about the user’s email-specific activity.

Interpreting Email Headers

Generally speaking, email messages are composed of three sections: header, body, and attachments. The header contains source and destination information (email and IP addresses), date and time, email subject, and the route the email takes during its transmission. Information stored in a header can either be viewed through the email client or through an email forensics tool such as libpff (an open source library to access email databases), FTK, or EnCase.

The “Received” line in the Table A email header shows that the email was sent from IP address 1.1.1.1. An investigator should not rely on this information as concrete evidence because it can be easily changed by a malicious sender (email spoofing). The time information in the header might also be incorrect due to time zones, user system inaccuracies, and tampering.
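Header fields can be pulled out programmatically with Python's standard email package. A minimal sketch that prints the sender information and the full chain of "Received" hops from a saved message (the .eml file name is a placeholder):

    # Print key header fields and the Received relay chain of a message.
    from email import policy
    from email.parser import BytesParser

    with open("message.eml", "rb") as f:         # placeholder file name
        msg = BytesParser(policy=policy.default).parse(f)

    print("From:      ", msg["From"])
    print("To:        ", msg["To"])
    print("Date:      ", msg["Date"])
    print("Message-ID:", msg["Message-ID"])
    for i, hop in enumerate(msg.get_all("Received", [])):
        print(f"hop {i}: {hop}")                 # newest hop is listed first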

Recovering Deleted E-mails

While most users treat e-mails as transient, the companies they work for have strict data retention policies that can enforce the storage of email, sometimes indefinitely. User emails are usually stored in backup archives or electronic discovery systems to provide a means for analysis in case there is an investigation. Email servers can also keep messages in storage even after users remove them from their local systems. Therefore, it has become somewhat difficult for a corporate user to delete an email permanently.

Recovery is usually possible from various backup systems. In cases where there is no backup source and users delete an email from their local system, we need to perform several steps on the user’s storage drive depending on the level of deletion:

·        If the user deletes the message, but does not empty the deleted messages folder, the user can move the messages from the deleted folder to the original folder quite easily.

·     If the user deletes the email message and removes it from the deleted messages folder, then the investigator needs to apply disc forensics techniques to recover the email.

In case of a Microsoft Outlook PST file, when a message is deleted it is marked as deleted by Outlook and the data remains on the disc unless the location on the drive is overwritten by new data. Commercial tools, such as AccessData’s FTK or Guidance’s EnCase can be used to recover deleted messages.

·    Another approach would be to use Microsoft’s “Scanpst.exe” tool. To apply this technique, the investigator should first back up the PST and then deliberately corrupt the PST file with the command “DEBUG <FILE.pst> -f 107 113 20 -q.” If the file is too large and there is insufficient system memory, then the investigator should use a hex editor to make the changes marked in red in the PST file shown in Fig. B.

VALIDATING EMAIL HEADER INFORMATION

Email header information can be tampered with by users who wish not to disclose their source information or by malicious users who would like to fake the origin of the message to avoid detection and being blocked by spam filters. Email header information can be altered by spoofing, by using an anonymizer (which removes identifying information), or by using a mail relay server.

Detecting Spoofed Email

·         A spoofed email message is a message that appears to be from an entity other than the actual sender entity. This can be accomplished by altering the sender’s name, email address, email client type, and/or the source IP address in the email header.

·         Spoofing can be detected by looking at the “Received” and “Message-ID” lines of the header. The “Received” field records each email server hop the message took before it was received by the email client.

·         An investigator can use the email server IP addresses in the header to get their host names from their Domain Name System (DNS) records and verify them by comparing to the actual outgoing and incoming email servers’ information.

·         The “Message-ID” field uniquely identifies a message and is used to prevent multiple deliveries. The domain information in the “Message-ID” field should match the domain information of the sender’s email address.

·         If this is not the case, the email is most probably spoofed. An investigator should also look out for different “From” and “Reply-To” email addresses, and unusual email clients displayed in the “X-Mailer” field.
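The checks above translate directly into code. A minimal sketch of these spoofing heuristics, again using the standard email package (each mismatch can also have benign causes, so treat the output as leads, not proof):

    # Heuristic spoofing checks: From vs. Reply-To and Message-ID domains.
    from email import policy
    from email.parser import BytesParser

    def domain(value):
        """Extract a lowercased domain from an address or Message-ID."""
        value = str(value or "")
        return value.rsplit("@", 1)[-1].strip("<> ").lower() if "@" in value else ""

    with open("message.eml", "rb") as f:         # placeholder file name
        msg = BytesParser(policy=policy.default).parse(f)

    sender = domain(msg["From"])
    if msg["Reply-To"] and domain(msg["Reply-To"]) != sender:
        print("warning: Reply-To domain differs from From domain")
    if msg["Message-ID"] and domain(msg["Message-ID"]) != sender:
        print("warning: Message-ID domain differs from From domain")
    if msg["X-Mailer"]:
        print("X-Mailer:", msg["X-Mailer"])      # unusual clients merit a look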

Verifying Email Routing

Email routing can be verified by tracing the hops an email message has taken. This can be accomplished by verifying the “Received” field information through DNS records and if possible obtaining email transaction logs from the email servers involved. The “Message-ID” information can be searched for in the logs to make sure that the message has actually traveled the route declared in the “Received” field.

TRACING INTERNET ACCESS

Knowing the path a perpetrator has taken becomes very valuable when an investigator is building a case to present in court. It adds credibility to the claim and solidifies the storyline by connecting the events. For example, knowing the path an attacker took to steal a company's source code can reveal the extent of the compromise (loss of domain credentials, customer-information leakage, and intellectual-property loss), show intent, and help prevent the same attack from happening again. Tracing Internet access can also be valuable in cases where employees view content that does not comply with workplace rules.

Inspecting Browser Cache and History Files

An investigator can use various data points to trace a perpetrator’s activity by analyzing the browser cache and web history files in the gathered evidence. Every action of a user on the Internet can generate artifacts. The browser cache contains files that are saved locally as a result of a user’s web browsing activity. The history files contain a list of visited URLs, web searches, cookies, and bookmarked websites. These files can be located in different folders depending on the OS, OS version, and browser type.

Exploring Temporary Internet Files

A browser cache stores multimedia content (images, videos), and web pages (HTML, JavaScript, CSS) to increase the load speed of a page when viewed the next time.

·         For the Internet Explorer web browser on Windows XP and 2003, the cache files are located in the folder "Documents and Settings\%username%\Local Settings\Temporary Internet Files"; in Windows Vista/7/2008 they are located in the folder "Users\%username%\AppData\Local\Microsoft\Windows\Temporary Internet Files."

·         On Windows XP/2003, Mozilla Firefox stores the cached files in the folder "C:\Documents and Settings\%username%\Local Settings\Application Data\Mozilla\Firefox\Profiles," and for Windows Vista/7/2008 in "C:\Users\%username%\AppData\Roaming\Mozilla\Firefox\Profiles."

·         On Windows XP/2003, the Google Chrome web browser stores the cached files in the folder "C:\Documents and Settings\%username%\Application Data\Google\Chrome\Default\Cache," and for Windows Vista/7/2008 in the corresponding location under "C:\Users\%username%\AppData\Local."

Visited URLs, Search Queries, Recently Opened Files

The Internet Explorer web browser stores visited URLs, search queries, and opened-file information in the file "index.dat," along with last-modified, last-accessed, and expiration times.

On Windows XP/2003 systems this file is located in the folder "Documents and Settings\%username%\Local Settings\Temporary Internet Files\Content.IE5," and on Windows Vista/7/2008 systems in the folder "Users\%username%\AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.IE5." The "index.dat" file can contain a LEAK record, which is a record that remains after being marked as deleted because a related temporary Internet file (TIF) is still in use.

Google Chrome also stores its user-activity data in SQLite 3 database files.

Mozilla Firefox stores the URL, search, and opened-file history in a SQLite 3 database file, "Places.sqlite." A sample query is sketched below.
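As an illustration, the Python sketch below (standard-library sqlite3; table names follow the published Firefox schema, and the evidence file should always be copied before opening) pulls visited URLs and visit times from "Places.sqlite":

    import sqlite3

    def dump_firefox_history(db_path):
        con = sqlite3.connect(db_path)
        rows = con.execute(
            "SELECT p.url, v.visit_date "
            "FROM moz_places p "
            "JOIN moz_historyvisits v ON v.place_id = p.id "
            "ORDER BY v.visit_date")
        for url, visit_date in rows:
            # visit_date is stored in microseconds since the Unix epoch.
            print(visit_date, url)
        con.close()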

Reconstructing Cleared Browser History

It is possible to come across cleared browser histories during an investigation. The user may have deliberately deleted the files to hide their web-browsing activity, or malware may have removed its traces to avoid detection and analysis. In either case, an investigator will look in various locations on the suspect system to find the deleted browser history files.

The possible locations are unallocated clusters, cluster slack, page files, system files, hibernation files, and system restore points. Using AccessData's FTK Imager on the suspect drive or a drive image, an investigator can promptly locate orphaned files and check whether browser files are present there.

Auditing Internet Surfing

Knowing what employees browse on the web while at work has become necessary to prevent them from visiting sites that host malicious content (sites with exploits and malware), content that does not comply with workplace rules, and content that is illegal.

Employees can use the web to upload confidential corporate information, which can cause serious problems for the employer.

Tracking User Activity

User activity can be tracked by using tools that monitor network activity, DNS requests, local user system activity, and proxy logs. Network activity, in particular, can be monitored by looking at NetFlow records. NetFlow is a network protocol developed by Cisco Systems for monitoring IP traffic; each flow record captures the source and destination IP addresses, IP protocol, source and destination ports, and IP type of service. A decoding sketch follows below.
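For a concrete view of those fields, the Python sketch below decodes one NetFlow v5 export packet, assuming Cisco's documented v5 layout (a 24-byte header followed by 48-byte flow records); it is a minimal sketch with no error handling:

    import socket
    import struct

    def parse_netflow_v5(packet):
        # The header's first two 16-bit fields are version and record count.
        version, count = struct.unpack("!HH", packet[:4])
        if version != 5:
            raise ValueError("not a NetFlow v5 packet")
        flows = []
        for i in range(count):
            rec = packet[24 + i * 48 : 24 + (i + 1) * 48]
            (src, dst, _nexthop, _in_if, _out_if, pkts, octets,
             _first, _last, sport, dport, _pad, _flags, proto, tos,
             *_rest) = struct.unpack("!4s4s4sHHIIIIHHBBBBHHBBH", rec)
            flows.append({
                "src": socket.inet_ntoa(src), "dst": socket.inet_ntoa(dst),
                "sport": sport, "dport": dport, "proto": proto, "tos": tos,
                "packets": pkts, "bytes": octets,
            })
        return flows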

Local user system activity can be monitored by installing agents on the users' systems that report their activity back to a centralized server. SpectorSoft offers a product called Spector 360 that is installed on the user systems and a central server. The agents on the user systems track browser activity by hooking into system application programming interfaces (APIs) and enforce rules set by the employer.

Uncovering Unauthorized Usage

Unauthorized web usage can take multiple forms, such as downloading or viewing noncompliant or illegal content, uploading confidential information, launching attacks on other systems, and more. Once the unauthorized usage has been detected by the previously mentioned means, an investigator can focus on the user’s system to corroborate the unauthorized activity. This can be done by analyzing browser history files and related file system activities.

Building a "super" timeline with the tool log2timeline can be very useful for finding the created cache and cookie files and the browser history entries around the time the unauthorized activity was detected.

TRACING MEMORY IN REAL TIME

Analyzing memory in real time can provide crucial information about the activities of malware or a hacker that would otherwise be unavailable from the system's drives alone. This information can include network connections and sockets, system configuration settings, collected private information (user names, passwords, credit card numbers), memory-only resident executables, and much more. Real-time analysis involves volatile content and therefore requires swift action: the investigator has to act quickly to capture an image of memory using tools such as MoonSols Windows Memory Toolkit, GMG Systems' KnTDD, or F-Response.

Comparing the Architecture of Processes

Generally speaking, the Windows architecture uses two access modes: user mode and kernel mode. User mode includes application processes, such as programs and protected subsystems.

Kernel mode is a privileged mode of operation in which code has direct access to virtual memory, including the address spaces of all user-mode processes and applications, and to the associated hardware. Kernel mode is also called protected mode, or Ring 0.

Employing Advanced Process Analysis Methods

Processes can be analyzed using tools such as Windows Management Instrumentation (WMI) and by walking dependency trees.

Evaluating Processes with Windows Management Instrumentation (WMI)

WMI is a set of extensions to the Windows Driver Model that provides an OS interface through which components can supply information and notifications. The WMI class Win32_Process can help collect useful information about processes. The Windows command wmic exposes WMI from the command line and through batch scripts without relying on any other programming language. The wmic command uses class aliases to query related information, and it can be executed locally or remotely by specifying a target node or host name and credentials. Various commands for extracting process-related information through wmic are shown in Table A; one way of driving such a query from a script is sketched below.
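As an illustration, the Python sketch below shells out to wmic to list processes (run locally here; wmic also accepts /node: and /user: switches for remote hosts). The selected attributes are an illustrative subset of Win32_Process properties:

    import subprocess

    def list_processes():
        # Query the "process" alias for a few Win32_Process properties.
        # Note: wmic may emit UTF-16 output on some systems; adjust the
        # encoding if the lines look garbled.
        out = subprocess.check_output(
            ["wmic", "process", "get",
             "Name,ExecutablePath,ProcessId,ParentProcessId",
             "/format:csv"],
            text=True)
        for line in out.splitlines():
            if line.strip():
                print(line)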

WMI output can be used to capture a clean baseline of a system against which periodic comparisons are run. The comparisons reveal any new process that has appeared on the system, and the baseline can be updated when a new process turns out to be a known one. A minimal baseline diff is sketched below.
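The sketch below assumes both snapshots were saved from the /format:csv query above (wmic CSV output begins with a header row naming the columns):

    import csv

    def new_processes(baseline_csv, current_csv):
        def names(path):
            with open(path, newline="") as f:
                rows = [r for r in csv.reader(f) if r]
            name_col = rows[0].index("Name")
            return {r[name_col] for r in rows[1:] if len(r) > name_col}
        # Processes present now but absent from the clean baseline.
        return names(current_csv) - names(baseline_csv)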

Walking Dependency Trees

Viewing the dependencies of a process can provide valuable information about the functionality the process contains. A process's dependencies may be composed of various Windows modules, such as executables, DLLs, object linking and embedding control extension (OCX) files, and SYS files (mostly real-mode device drivers). Walking a dependency tree means exploring a process's dependencies in a hierarchical view, such as a tree. The free tool Dependency Walker provides an interface that presents such a view, shown in Fig. B.

It lists all of the functions that are exported by a given Windows module and the functions that are actually called by other modules. Another view displays the minimum set of required files, along with detailed information about each file. One level of such a walk is sketched below.
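For illustration, the Python sketch below uses the third-party pefile library (pip install pefile) to list one level of a module's import dependencies; recursing into each imported DLL, resolved from the system path, would produce the full tree that Dependency Walker displays:

    import pefile  # third-party: pip install pefile

    def list_imports(path):
        pe = pefile.PE(path)
        # Each import-directory entry names a DLL and the functions used.
        for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
            print(entry.dll.decode())
            for imp in entry.imports:
                if imp.name:  # some imports are by ordinal only
                    print("    " + imp.name.decode())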
