Teatime: Intro to Security Testing

2016-05-01 8 min read Teatime

Welcome back to Teatime! This is a weekly feature in which we sip tea and discuss some topic related to quality. Feel free to bring your tea and join in with questions in the comments section.

Tea of the week: It was my birthday this past Sunday; for my birthday some years ago, my grandmother bought me some Dragon Pearls from Teavana, which instantly made my all-time favorites list. They’re expensive, but soooo worth it.

Today’s topic: Intro to Security Testing

This information was mostly adapted from the OWASP Testing Guide version 4, freely available at owasp.org. OWASP (which stands for the Open Web Application Security Project) is the go-to resource for all things web security, and they provide a ton of free information. I highly recommend browsing their site.

Principles

The first section of their guide talks about some basic principles of security testing. Some of these are familiar to me as a QA professional, but some of them are a little more specialized. Here are the ones I felt were important to discuss when I gave this talk in person:

  • There is no silver bullet. No firewall setting or security scanner is foolproof; security testing is testing like anything else, and it requires a customized test plan for your needs and your application, just like functional testing does. Security assessment software makes a great first pass, but it is neither as in-depth nor as effective as a multi-part strategy. Security is a process, not a product.
  • Think strategically, not tactically. The patch-and-penetrate model generally results in failure; there's always a window of vulnerability while you develop a patch for what you've found. Users may not be aware of the patch and may not apply it even if they are; often they feel patching may break functionality, especially in heavily customizable applications like most ERP systems. We need to be more proactive about security, finding and patching issues before release instead of after.
  • Test early, test often. This applies to security as much as it does to functional testing: integrating testing into the entire software life cycle is better than trying to squish it in near the end. Plan software with security in mind and you'll end up with a more secure end product.
  • Think outside the box. This is where testing skill comes in handy. A good functional tester will find alternate paths and error paths that a developer didn't consider when they only tested the happy path; a good security tester will look at the application like a hacker rather than like an end user, finding odd sequences that may lead to exploits. Try to figure out what the developers may not have thought of or what assumptions they may have made.
  • Use The Source, Luke! Black-box testing is only ever going to be so effective. This is one place where white-box testing really shines: you can find potential exploit sources much more easily if you have some understanding of the source code, because exploits are usually very technical in nature. Where possible, use static analysis on the code itself to search for vulnerabilities (there's a toy illustration of the idea after this list).
  • Develop Metrics. Metrics can be a very controversial topic, I know. Good metrics will show you whether more training and education about security is required, whether there is a security mechanism that is not well understood by the team, and whether the number of problems is going up or down over time; they shouldn't be directly tied to individual compensation or promotion decisions. OWASP has a project on developing good security metrics that can be helpful here.
  • Document test results. A formal record of the results is crucial. The record must make it clear to the business owner where the risks exist and what was done to mitigate them; it should also make clear to the developer exactly where the vulnerability is and what the recommendation was. Finally, the record should make it easy for another tester to replicate the results, much like a good scientific paper.
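
Since "use the source" can feel abstract, here is a toy illustration of what static analysis means in practice. It's my own sketch, not an OWASP tool, and it assumes a Python codebase; real analyzers go far deeper, but the idea of mechanically walking the code looking for dangerous patterns is the same.

```python
import ast

# Hypothetical snippet of application code to scan.
SOURCE = '''
def load_config(text):
    return eval(text)  # evaluates arbitrary user-supplied text
'''

# Call names that commonly lead to injection-style bugs.
RISKY_CALLS = {"eval", "exec"}

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        if node.func.id in RISKY_CALLS:
            print(f"line {node.lineno}: call to {node.func.id}() - review for injection risk")
```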

Techniques

Here are some techniques that OWASP recommends for security testing:

  • Human review. This can be done as part of a code review, or on its own. It can be simple and informal: simply ask the code author how something works and why it was done that way. This should follow the principle of "trust but verify": don't assume that what you're told is correct, but don't be antagonistic either. This is the best way to find out whether people understand the security implications of what they're writing. OWASP has a good code review guide for this sort of thing.
  • Threat Modelling is one of my favorites from the guide, but I'm an old-school "model first" kind of tester. The idea is that while creating a tech design, you can build a threat model in three steps. First, decompose the application: create use cases, identify entry points, define and classify the assets at risk, and identify the trust levels that represent the access rights external entities should be granted. Second, determine and rank the threats: rank the assets from most vulnerable to least vulnerable, identify possible threats and vulnerabilities, and rank the threats using a security risk model (a small scoring sketch follows this list). Finally, create mitigation strategies and countermeasures for the threats you identified.
  • Penetration testing, or “Hack yourself first” as they say. Generally, you will try to hack your own application before release using black-box strategies. This is very useful for general networking threats, like firewall openings, but less so for web applications. It can be very cheap and fast, but it’s hard to customize this sort of thing enough to catch serious issues.
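
To make the "determine and rank threats" step a little more concrete, here is a minimal sketch of ranking threats with a DREAD-style score. The threat names and ratings are invented for the example; any security risk model your team agrees on will do.

```python
# Hypothetical threats rated 1-10 on the five DREAD factors:
# Damage, Reproducibility, Exploitability, Affected users, Discoverability.
threats = {
    "SQL injection on login form":        (9, 8, 7, 9, 8),
    "Session ID exposed in URL":          (6, 9, 8, 7, 9),
    "Verbose stack trace on error page":  (3, 9, 5, 4, 9),
}

def dread_score(ratings):
    # The classic DREAD model simply averages the five factors.
    return sum(ratings) / len(ratings)

# Rank from most to least severe so mitigation work gets prioritized.
for name, ratings in sorted(threats.items(), key=lambda t: dread_score(t[1]), reverse=True):
    print(f"{dread_score(ratings):4.1f}  {name}")
```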

OWASP Top 10

Finally, I wanted to run down the 2013 Top Ten list; it provides a guide to the low-hanging fruit. Every developer in your organization should be familiar with what these vulnerabilities are and how to protect against them, as they are the most common exploits.

  1. Injection. This covers SQL injection, but also LDAP, NoSQL, and XML injection, as well as similar attacks; the SQL sketch after this list shows the classic fix, parameterized queries.
  2. Broken authentication and session management. This covers guessable credentials, session IDs in the URL that can be guessed, session IDs that don't time out or rotate, and credentials that are sent unencrypted so anyone can read them. (The session-cookie sketch after this list shows the flags that address several of these.)
  3. Cross-Site Scripting (XSS). In its simplest form, this vulnerability occurs when user input is output to the page without being escaped first, which can lead to JavaScript execution if the user's input contains script; the escaping sketch after this list shows the difference one call makes. This is the most widespread web application security flaw.
  4. Insecure direct object references. This is where an authenticated user can reach data they should not have access to simply by requesting it directly. For example, if your user has access to widget foo but not widget bar, and they see that the URL says /edit?widget=foo, then when they change that URL to read /edit?widget=bar they should be denied access. Many applications do not check the authorization directly, instead relying on the fact that the menu never offers widget bar to protect their data. (See the authorization-check sketch after this list.)
  5. Security Misconfiguration. This is a broad category covering things like default accounts on the web server allowing remote management, unpatched flaws at any level, unprotected files or directories, et cetera. It can cover platform-level flaws, app-level flaws, database-level flaws, or even custom code that was misconfigured.
  6. Sensitive Data Exposure. This covers items like failing to encrypt sensitive data as it goes over the wire, allowing a man-in-the-middle to read the data without even having to hack the site. Always remember to use TLS/SSL when transmitting sensitive data!
  7. Missing function-level access control. This is like number 4 above, but for functions. Just because you can see widget foo at /view?widget=foo does not mean you have access to /edit?widget=foo; again, this needs to be checked at the function level, and the same authorization-check sketch after this list applies here. AJAX endpoints need to be secured as well; it's easy enough to capture a GET request and re-submit it as a POST or a DELETE to see whether the server allows editing.
  8. Cross-Site Request Forgery. An authenticated user visits a malicious site, which uses an image beacon to submit a request to your site. Because of the way browsers handle requests, the browser will check whether it has a cookie for your site (which it does) and attach it to the request. This lets the specially crafted request perform a state-changing action as the logged-in user, even though it came from another site. (See the CSRF-token sketch after this list.)
  9. Components with known vulnerabilities. This covers things like using an old version of a web framework or failing to patch it, which introduces its known vulnerabilities into your site. This is very common with Java frameworks and WordPress, as they are constantly releasing patches for security vulnerabilities.
  10. Unvalidated redirects and/or forwards. Sometimes after the user performs an action, you want to forward them to another page; for example, if they were trying to access a secured resource, you might send them to a login page and then back to the page they were trying to reach. If you do not validate the redirect target, an attacker can craft a link to your login page that redirects the user to the attacker's own site after they log in; since the login page really was hosted on your domain, they may not realize they have left your site afterward. (See the redirect-validation sketch after this list.)
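
To make a few of these concrete, here are some small Python sketches. They are my own illustrations under simplified assumptions, not code from the OWASP guide. First, injection (number 1): the table, data, and queries are made up, and sqlite3 is used only because it ships with the standard library.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name):
    # Vulnerable: user input is concatenated straight into the SQL string.
    # A name like "' OR '1'='1" matches every row in the table.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data,
    # so the injection payload simply fails to match any user.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks all rows
print(find_user_safe(payload))    # returns []
```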
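
For broken authentication and session management (number 2), a sketch of the session-cookie flags that address the most common mistakes; the cookie name and timeout are just examples.

```python
import secrets
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = secrets.token_hex(32)  # long, random, unguessable; lives in a cookie, not the URL
cookie["session_id"]["secure"] = True         # only ever sent over HTTPS
cookie["session_id"]["httponly"] = True       # not readable from JavaScript
cookie["session_id"]["max-age"] = 30 * 60     # session times out after 30 minutes

# This is the header the server would send; rotate the value after login.
print(cookie.output())
```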
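
For cross-site scripting (number 3), a tiny sketch of the difference escaping the output makes; plain string concatenation stands in for a real template engine.

```python
import html

def render_comment_unsafe(user_input):
    # Vulnerable: whatever the user typed is dropped straight into the page.
    return "<p>" + user_input + "</p>"

def render_comment_safe(user_input):
    # Escaping turns <, >, &, and quotes into entities, so the browser
    # shows the payload as text instead of executing it.
    return "<p>" + html.escape(user_input) + "</p>"

payload = "<script>alert('xss')</script>"
print(render_comment_unsafe(payload))  # script tag survives intact
print(render_comment_safe(payload))    # &lt;script&gt;... renders as harmless text
```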
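
Insecure direct object references (number 4) and missing function-level access control (number 7) both come down to the same server-side check; here is a sketch with a hypothetical widget/owner table.

```python
# Hypothetical ownership data: which user owns which widget.
WIDGET_OWNERS = {"foo": "alice", "bar": "bob"}
widgets = {"foo": "old value", "bar": "old value"}

class Forbidden(Exception):
    pass

def edit_widget(current_user, widget_id, new_value):
    # Don't rely on the menu only ever showing "foo" to alice: check the
    # authorization on every request, for every verb (view, edit, delete).
    if WIDGET_OWNERS.get(widget_id) != current_user:
        raise Forbidden(f"{current_user} may not edit {widget_id}")
    widgets[widget_id] = new_value

edit_widget("alice", "foo", "new value")      # allowed
try:
    edit_widget("alice", "bar", "new value")  # alice tampered with the URL
except Forbidden as err:
    print(err)
```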
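
For cross-site request forgery (number 8), a sketch of the usual countermeasure, a per-session token that a forged request cannot know; the session is faked as a plain dict to keep the example self-contained.

```python
import hmac
import secrets

# The server stores a random token in the user's session and embeds the
# same token in every form it renders.
session = {"csrf_token": secrets.token_hex(16)}

def render_form():
    # A hidden field carries the token back with a legitimate submission.
    return f'<input type="hidden" name="csrf_token" value="{session["csrf_token"]}">'

def handle_post(form_data):
    # A forged request from another site rides along with the victim's cookie,
    # but it cannot know the token, so the comparison fails.
    submitted = form_data.get("csrf_token", "")
    if not hmac.compare_digest(submitted, session["csrf_token"]):
        raise PermissionError("CSRF token missing or wrong")
    return "state-changing action performed"

print(render_form())                                       # what the real page contains
print(handle_post({"csrf_token": session["csrf_token"]}))  # legitimate form post
try:
    handle_post({})  # forged request from the image beacon: no token
except PermissionError as err:
    print(err)
```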
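
Finally, for unvalidated redirects (number 10), a sketch of checking the redirect target before following it; the allowed host names are placeholders for your own domain.

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com", "www.example.com"}  # placeholders for your own hosts

def safe_redirect_target(next_url):
    # Follow the requested target only if it stays on our own site
    # (a relative path or one of our hosts); otherwise fall back to home.
    parsed = urlparse(next_url)
    if parsed.scheme == "" and parsed.netloc == "":
        return next_url  # relative path such as /account/settings
    if parsed.scheme in ("http", "https") and parsed.netloc in ALLOWED_HOSTS:
        return next_url
    return "/"           # attacker-supplied destination: ignore it

print(safe_redirect_target("/account/settings"))          # kept as-is
print(safe_redirect_target("https://evil.example.net/"))  # replaced with "/"
print(safe_redirect_target("//evil.example.net/"))        # scheme-relative trick, also replaced
```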

This was sort of a whirlwind tour of security testing, but hopefully some of you found it helpful. Did you learn anything new? Let me know!