Friday, September 4, 2009

Securing Application Infrastructure: The analysis of Application Security Methodologies

Security threats have recently gained prominent attention in media and industry reports. This article briefly examines the methodologies and approaches most organizations follow to address security issues, with examples, test cases, strengths, and weaknesses. Today's widely known approaches include vulnerability scanning, static code analysis, penetration testing, binary analysis, and fuzzing. The main questions discussed here are which of them are more or less reliable, and which can address specific types of application problems.

Many software vendors assume that security issues will never put them out of business, but in reality they affect both sales and market reputation. Deploying proper application security not only reassures clients but also increases productivity. Consider a simple cost equation:

X = applications developed
Y = vulnerabilities present in those applications
Z = cost of repair (patches and fixes)

Now: A = X * Y * Z

If A is less than the combined cost of a third-party QA audit, developer training, and additional security audits, then from a purely economic standpoint it makes more sense to ship insecure code.
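The equation above can be sketched numerically. The figures below are purely illustrative assumptions, not data from the article:

```python
# Illustrative only: all figures below are hypothetical assumptions.
apps_developed = 10          # X: applications developed
vulns_per_app = 4            # Y: vulnerabilities per application
cost_per_fix = 2_000         # Z: average cost of a patch/fix

remediation_cost = apps_developed * vulns_per_app * cost_per_fix  # A = X * Y * Z

# Hypothetical cost of prevention: third-party QA audit,
# developer training, and additional security audits combined.
prevention_cost = 95_000

# If remediation is cheaper than prevention, the short-sighted
# economic incentive favors shipping insecure code.
print(remediation_cost)                    # 80000
print(remediation_cost < prevention_cost)  # True
```

With these numbers the incentive is perverse; the equation ignores the harder-to-quantify costs the article mentions, namely lost sales and market reputation.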

Application vulnerabilities can broadly be divided into the following categories (but are not limited to them):

Operation/Platform Vulnerabilities
-Asset information disclosure
-Buffer Overflows
-Misconfigurations
-Error Handling
-Resource specific threats

Design Vulnerabilities
-Logic Flaws
-Access Control (Authentication/Authorization)

Implementation Vulnerabilities
-Code Injection
-Information Disclosure
-Command Execution
-Functionality Abuse
-Input Validation
-Time and State
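Two of the implementation classes above, code injection and input validation, can be shown in one small sketch. This is a generic SQL injection illustration using Python's built-in sqlite3 module, not an example from the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def lookup_unsafe(name):
    # Implementation flaw: user input is concatenated into the SQL
    # string, so crafted input is executed as code (code injection).
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name
    ).fetchall()

def lookup_safe(name):
    # Parameterized query: input is treated as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # leaks every secret in the table
print(lookup_safe(payload))    # [] -- no user is literally named that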

Now, to test the security of an application, one may apply any of these methodologies:

Automated
-Automated Dynamic Tests (Fuzz Testing, Vulnerability Scanning)
-Automated Static Tests (Source or Binary Code Scanning)

Manual
-Manual Dynamic Tests (Parameter Tampering and Social Engineering)
-Manual Static Tests (Source or Binary Code Auditing)

Each of these methods has its own strengths and weaknesses. Thus we assume that the most efficient and reliable method, if not the single best one, can only be judged by looking at its specific testing process.


Automated Dynamic Testing
Under this method, the complexity of disclosing application vulnerabilities increases as one moves from vulnerability scanning to fuzz testing.

Strengths
-Fewer false positives (an inherent benefit of run-time analysis)
-Programmatic approach to ensure reliable and consistent tests output

Weaknesses
-Threat assurance, No Fault != No Flaw
-Only the portion of code actually exercised provides a baseline for measurement.
-Unexpected conditions cannot be tested without additional programming.

Use Cases
-Fuzz Testing (complex input, informal SDLC, observable indicators)
-Application Scanning (strongly typed flaw classes, deterministic and observable behavior, known inputs only)
-Vulnerability Scanning (known transaction sequences, one to one mapping of triggers to specific conditions)
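The fuzz testing use case above can be sketched in miniature: mutate a known-good input and watch for observable fault indicators. The parser and its bug are invented for illustration:

```python
import random

def parse_record(data: bytes):
    # Toy target: a length-prefixed record parser with a deliberate
    # bug -- it trusts the declared length field in the first byte.
    declared_len = data[0]
    payload = data[1:1 + declared_len]
    if len(payload) != declared_len:
        raise ValueError("truncated record")  # observable fault indicator
    return payload

def fuzz(target, seed_input: bytes, iterations=1000):
    # Mutation-based fuzzing: randomly overwrite one byte of a
    # known-good seed and record inputs that trigger a fault.
    rng = random.Random(0)  # fixed seed for reproducible runs
    crashes = []
    for _ in range(iterations):
        mutated = bytearray(seed_input)
        pos = rng.randrange(len(mutated))
        mutated[pos] = rng.randrange(256)
        try:
            target(bytes(mutated))
        except ValueError as exc:
            crashes.append((bytes(mutated), exc))
    return crashes

seed = bytes([4]) + b"abcd"  # valid record: length 4, payload "abcd"
faults = fuzz(parse_record, seed)
print(f"{len(faults)} fault-triggering inputs found")
```

This also illustrates the "No Fault != No Flaw" weakness: the fuzzer only finds faults it can observe, and only along the paths its mutations happen to reach.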


Automated Static Testing
This method discloses vulnerabilities present in an application by examining the code (source or binary) without user interaction. Several commercial and open-source tools are available to perform automated static analysis. The complexity of such tools increases from simple flaw identification up to formal verification.

Strengths
-Assessment of low-context flaws (parameters, DB query statements, etc)
-Automated scans require little or no human interaction
-Can get good placement during development lifecycle

Weaknesses
-Cannot assess applications whose source code is unavailable.
-High ratio of false positives or negatives; tuning is harder.
-Critical issues with formal verification
  1. Developing and correctly expressing a set of security invariants.
  2. Developing an interpretation of the application that lends itself to proving/disproving invariants.

Use Cases
-Timely and resource-specific detection of simple flaws
-Detection of regression as a part of development lifecycle
-False assumption of strong assurance for a critical application
-In the hands of a developer who cannot interpret or filter the results correctly
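A minimal automated static test might look like the sketch below: walk a program's syntax tree and flag calls to risky functions. It uses Python's built-in ast module; real tools add data-flow analysis, and a purely lexical match like this is exactly the kind of low-context check that yields the false positives mentioned above:

```python
import ast

DANGEROUS_CALLS = {"eval", "exec", "system", "popen"}

def scan_source(source: str):
    # Minimal static analyzer: parse the source, walk the AST, and
    # flag any call whose function name is on the dangerous list.
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle plain names (eval) and attributes (os.system).
            name = getattr(func, "id", getattr(func, "attr", None))
            if name in DANGEROUS_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = """
import os
cmd = input()
os.system(cmd)      # command execution flaw
result = eval(cmd)  # code injection flaw
"""
print(scan_source(sample))  # [(4, 'system'), (5, 'eval')]
```

Note that the scanner cannot tell whether `cmd` is actually attacker-controlled; that judgment still requires a human or a far more sophisticated analysis.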


Manual Dynamic Testing
Manual dynamic assessment is achieved through human-navigated application usage, followed by an assurance validation process and fuzz testing. Developers can provide critical background information on the application's design. The complexity of the manual dynamic testing process increases as it moves from Common Criteria assurance validation to parameter tampering.

Strengths
-Parallel capacity in execution of tests
-Pattern recognition
-Testing the live implementation may reduce false positives
-Capable of emulating the malicious attack process

Weaknesses
-Time consuming for large and complex applications
-May require the tester to climb a steep learning curve
-Test environment may not mirror production

Use Cases
-High-risk applications require a highly experienced security auditor to understand and scope the attack surface
-The wrong application type or the wrong tester background
-A case where the requirements of the assessment do not match the expected risk profile of the application
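Parameter tampering, the most complex step named above, amounts to replaying a request with one parameter substituted for each candidate payload. The sketch below only builds the tampered URLs offline; the endpoint and parameters are hypothetical and nothing is actually sent:

```python
from urllib.parse import urlencode, urlsplit, parse_qsl

# Hypothetical target; no requests are sent anywhere.
BASE_URL = "https://app.example.com/account?user_id=1042&role=member"

# Typical tampering payloads: adjacent IDs, boundary values, injection.
TAMPER_PAYLOADS = ["1041", "0", "-1", "' OR '1'='1", "admin"]

def tamper(url: str, param: str, payloads):
    # Rebuild the query string once per payload, substituting the
    # target parameter -- the core loop of a tampering session.
    parts = urlsplit(url)
    original = dict(parse_qsl(parts.query))
    for payload in payloads:
        modified = dict(original, **{param: payload})
        yield f"{parts.scheme}://{parts.netloc}{parts.path}?{urlencode(modified)}"

for test_url in tamper(BASE_URL, "user_id", TAMPER_PAYLOADS):
    print(test_url)  # each candidate would be replayed manually and
                     # its response compared against the original
```

The human tester's value is in the comparison step: recognizing that a response for `user_id=1041` leaking another user's data signals an access control flaw, which is the pattern recognition strength listed above.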


Manual Static Testing
This process involves human review, an understanding of application design and architecture documentation, and the use of an offline toolset (disassemblers, code browsers, etc.).

Strengths
-Known data and code points
-No resource-specific considerations (no live environment needed)
-Adaptability with skills and toolset

Weaknesses
-Accuracy issues (false positives, human mistakes)
-High resource requirements
-Inconsistency: different reviewers may interpret the same flaw in different ways

Use Cases
-Manual code audit (skilled resources, minor findings before automated tests, custom-coded scripts)
-Configuration review (low risk in changing values at runtime, known data sources and formats)
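The configuration review use case can be sketched as a comparison of each setting against a known-good baseline. The config keys and "secure" values below are illustrative assumptions, not a real product's settings:

```python
import configparser

# Hypothetical application config under review.
SAMPLE_CONFIG = """
[server]
debug = true
tls_enabled = false
session_timeout = 86400
"""

# Illustrative secure baseline for the review.
BASELINE = {
    "debug": "false",        # debug mode leaks stack traces (info disclosure)
    "tls_enabled": "true",   # plaintext transport exposes credentials
}

def review(config_text: str):
    # Static configuration review: flag every setting that deviates
    # from the baseline, with no need to touch a running system.
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    findings = []
    for key, expected in BASELINE.items():
        actual = parser.get("server", key, fallback=None)
        if actual != expected:
            findings.append(f"[server] {key} = {actual} (expected {expected})")
    return findings

for finding in review(SAMPLE_CONFIG):
    print(finding)
```

Because the review works on a static file with known data sources and formats, it carries the low runtime risk the use case above describes.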


Thus, the application security assessment methods described above, together with statistics from the WASC Statistics Project, show that the probability of detecting high-risk vulnerabilities is higher when a combined set of methodologies is used. For web applications, this combined approach detects almost 12.5% more than automated scanning alone.

Source: EthicalHacker.
