Nontechnical Issues

Now that we've covered the available web application security scanning technologies in depth, what role do people and process play in the successful deployment of these tools in a typical enterprise environment?

Process

At its essence, any automated security assessment methodology is a process, so careful process design is critical to long-term success. In this section, we'll catalog some of the critical steps in designing a sound "security workflow."

One of the first things we've learned to avoid in our many travels in the IT industry is the "build from scratch" syndrome. In any competent mid- to large-sized enterprise IT shop, some support infrastructure almost surely already exists. Our primary advice to those wishing to build an automated web security assessment program is thus: leverage what's already there!

This involves careful research up front. Learn how your organization's current application development quality assurance (QA) process works, and where the most efficient integration points lie (see Chapter 12 for more details). Equally important, for automated scanners that will be integrated into the live production application support process, you'll need to understand how the current ops support infrastructure works: from the "smart hands" contractors in the datacenter who physically touch the servers, to the Tier 1 support contractors working at a phone bank in India, through the on-staff Tier 2 and 3 system engineers, all the way to the "Tier 4" development team members (and their management!) who will ultimately receive escalations when necessary. Think hard about how your assessment methodology and toolset will integrate into this existing hierarchy, and where you might need to make some serious adjustments to the existing process.

In our experience, the important issues to consider include the following:

  • Management awareness and support   Executives should understand the relationship of the automated assessment process to the overall business risk management program, and be supportive of the overall direction (not necessarily intimately aware of the implementation details).

  • Roles and accountability   Management should also clearly understand organizational accountability for issues uncovered by the assessment program. It's probably wisest to follow the accountability model outlined above, from Tier X operational staff all the way up to the senior-most executive "owner" of a given application.

  • Security policy   It should be simple, widely understood within the organization, and practically enforceable. At a minimum, it should describe computing standards, criticality criteria for identified policy violations, and an expected remediation process. It should also consider relevant regulatory standards like the Payment Card Industry Data Security Standard (PCI DSS). If a good policy doesn't exist, you'll need to write it!

  • Integration with existing SDLC   There should be a well-documented path from web security scanner alerts to the developer's desktop for bugs of appropriate type and severity. You should also consider the applicability of scans at different points in the SDLC (e.g., preproduction versus production).

  • The IT trouble ticketing system   If your choice of automation tool doesn't integrate well here, your project is dead before it even starts. DO NOT plan on implementing your own "security" ticketing system; you will regret this when you discover that you'll have to hire the equivalent of a duplicate Tier 1 support desk to handle the volume of alerts. Test and tune thoroughly before deploying to production (see the sketch following this list).

  • Incident response process   If there isn't a disciplined organizational incident escalation process already in existence, you'll need to engage executive management pronto. Otherwise, the security team will look foolish when alerts overwhelm the existing process (or lack thereof).

  • Post-mortem analysis   We've seen too many orgs fail to learn from incidents or process failures; make sure you include a robust post-mortem process in your overall program.

  • Process documentation   In our experience, the most common external audit finding is lack of process documentation (and we've got the scars to prove it!). Don't make it this easy for the bean counters; allocate appropriate resources to create a living repository of standard operating manuals for the organization, if one does not already exist.

  • Education   Just as placing a "secure coding" book on a software developer's bookshelf does not constitute a secure SDLC, installing the latest application security scanner on one system engineer's desktop is also highly ineffective. Make sure to provide ongoing training on how to use the system for all levels of users, document attendance, test understanding, and hold managers accountable.
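To make the "scanner alert to trouble ticket" path more concrete, here is a minimal sketch of the kind of glue code involved, assuming a scanner that can export findings as JSON and a ticketing system with a simple REST API. The file name scan_results.json, the tickets.example.internal endpoint, the tier2-appsec queue, and the severity threshold are all hypothetical placeholders rather than features of any particular product; the point is simply that severity filtering and routing into the existing support queue should happen before any alert reaches a human.

```python
#!/usr/bin/env python3
"""Sketch: route web scanner findings into an existing trouble ticketing system.

The scanner export format, ticketing REST endpoint, queue name, and severity
threshold below are hypothetical placeholders; adapt them to your own scanner's
output and your organization's ticketing API.
"""
import json
import urllib.request

SCANNER_EXPORT = "scan_results.json"   # hypothetical scanner export (JSON)
TICKET_API_URL = "https://tickets.example.internal/api/issues"  # hypothetical endpoint
SEVERITY_THRESHOLD = 7.0               # only escalate findings at or above this score


def load_findings(path):
    """Load findings from the scanner's JSON export."""
    with open(path) as f:
        return json.load(f)


def file_ticket(finding):
    """Open a ticket for a single finding via the (hypothetical) ticketing API."""
    payload = json.dumps({
        "title": f"[WebAppSec] {finding['name']} on {finding['url']}",
        "severity": finding["severity"],
        "description": finding.get("evidence", "See scanner report for details."),
        "queue": "tier2-appsec",        # route into the existing support queue
    }).encode("utf-8")
    req = urllib.request.Request(
        TICKET_API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    findings = load_findings(SCANNER_EXPORT)
    # Suppress low-severity noise so the Tier 1 desk isn't flooded with alerts.
    actionable = [f for f in findings if f.get("severity", 0) >= SEVERITY_THRESHOLD]
    for finding in actionable:
        print(f"Filing ticket (severity {finding['severity']}): {finding['name']}")
        file_ticket(finding)
```

In practice, you would tune the threshold and routing rules during the test-and-tune phase mentioned above, and if your chosen scanner already ships a native integration with your ticketing system, use that instead of home-grown glue.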

Obviously, these are really brief overviews of potentially quite complex topics. We hope this gives you a start toward further research into these areas.

Technology Evaluation and Procurement

Once the lay of the land has been assessed, one of the first questions facing an incipient security scanning program is "build or buy?" Overall, our advice is "buy," based on our general experience that the blood and treasure spilled in the name of developing in-house security apps isn't worth it in the long run (we've even worked at some large, sophisticated software development firms where this still held true). This means that you'll have to devise a process for evaluating new technology on an ongoing basis to ensure that your scanning program remains up to snuff.

We recommend you explicitly staff this effort, define crisp goals so it doesn't get too "blue sky" or turn into a wonky "skunk works" project, and ensure that you have allocated appropriate budget to execute on the technology selections made by the team. Our previous "bakeoff" discussion in this chapter should've provided a glimpse of how to develop technical criteria for evaluating web application security scanners. Beyond this, generic technology evaluation and procurement processes are outside of the scope of this book.

People

Once the program is defined, it is important to fit people into the program in a manner commensurate with their capabilities. Finding a good "fit" requires delicate balancing of chemistry, skills, and well-designed roles. We can't help you with the intangibles of chemistry, but here are some pointers to help you get the other stuff right.

Skills Needed

Enterprises commonly underestimate the complex analytical requirements of a successful application security automation program, and frequently have trouble finding the right type of person to fill roles on the team. In our view, there are several important qualities for such individuals:

  • Deep passion about and technical understanding of common software security threats and mitigations, as well as historical trends related to same.

  • Moderately deep understanding of operational security concepts (e.g., TCP/IP security, firewalls, IDS, security patch management, and so on).

  • Software development experience (understanding of how business requirements, use-case scenarios, functional specifications, and the code itself are developed).

  • Strong project management skills, particularly the ability to multitask across several active projects at once.

  • Technical knowledge across the whole stack of organizational infrastructure and applications.

  • The ability to prioritize and articulate technical risk in business terms, without raising false alarms over the inevitable noise generated by automated application assessment tools.

Obviously, finding this mix of skills is challenging. Don't expect to hire dozens of people like this overnight; be conservative in your staffing estimates and in tying your overall program goals to them.

In our experience, finding this mixture is practically impossible, and most hiring managers will need to make compromises. Our advice is to look for potential hires that have both a software development and a security background, as opposed to a purely operational security background. We've found it easier to teach security to experienced software developers than it is to teach software development to operational security professionals. Another way to achieve the best of both worlds is to staff one team for infrastructure/operational security and another for application security. This also provides a viable career ladder, starting with basic trouble ticket response and leading to more strategic interaction with application development teams.

Organizational Structure and Roles

As we noted earlier, it is our experience that the most effective implementations of an automated application assessment program integrate tightly into existing development QA and operational support processes. The challenge here is aligning the goals of diverse teams that potentially report through different arms of the organization: IT operations, security/risk management, internal audit, and software development (which may itself be spread through various business units).

Our experience has taught us that the greater the organizational independence you can create between the fox and the chickens (metaphorically speaking), the better. Practically, this means separating security assessment from application development and operational support.

Alternatively, we've seen organizational structures where security accountability lived within the software QA organization, or within IT operations. We don't recommend this in most instances because of the potential conflict of interest between delivering applications and delivering secure applications (akin to the fox guarding the chicken coop). Time and again we've seen the importance of providing external checks and balances to the software development/support process (which typically operates under unrealistic deadlines that were set well before security entered the picture).

To avoid alienating the software development group by setting up an external dependency for their success, we again strongly recommend providing security resources with software development backgrounds. This goes a long way towards avoiding a culture of "security avoidance" in the development process.


