From: Andre Gironda (andreg@gmail.com)
Date: Sun Dec 02 2007 - 02:06:34 EST
On Nov 30, 2007 12:54 PM, Gleb Paharenko <gpaharenko@gmail.com> wrote:
> Please, can members of security consulting firms share their
> experience about business models (the set of their services)?
>
> What should be agreed between client and tester before the beginning of work?
> - what is a vulnerability
> - what is a compromise of the system
> - perhaps others
For a vulnerability assessment, I think two documents should be agreed
upon before work begins: a statement of work (SOW) and a master
services agreement (MSA).
The SOW should be an example of what the deliverable(s) will look
like. This makes it easy for people to know what they will be getting
in the end. At first glance, a SOW will look like it already has all
the information a company needs to know - so why hire you? But once
the customer sees the level of customization and detail that can be
provided, they should realize the value you would add by performing a
penetration test. How win-win is that?
The MSA holds all the negotiable terms: a) duration/hours, b) pricing,
c) contractual terms. Put these in separate sections (or even separate
pages) for ease of changes.
An incident response SOW/MSA is going to look completely different
from one for a penetration test / secure code review / vulnerability
assessment (which should be relatively similar to each other). If a
compromise/breach is detected during assessment work, then a new
SOW/MSA should be drafted to cover that work, should the client prefer
you to do it.
In some cases, such as a company under PCI compliance, only a
Qualified Incident Response Company (QIRC) should perform a forensic
investigation - and note that the QIRC list is different from the ASV
and QSAC lists of approved companies.
> Is it really necessary to sign some documents, so that later the
> owner of the site you have tested cannot take you to court for hacking?
Yes, you put that into the MSA.
> Does anybody have experience with getting paid for found
> vulnerabilities (perhaps white- and black-box testing can have
> different prices)? Do you have different rates for different kinds of
> issues (one price for XSS, another for CSRF, etc.)?
I read a good blog post on the matter -
http://securitybuddha.com/2007/08/22/the-art-of-scoping-application-security-reviews-part-1-the-business/
Well worth the read. In a different blog post -
http://securitybuddha.com/2007/03/07/top-ten-tips-for-hiring-security-code-reviewers/
- the author also covers typical (read: industry standard) pay rates.
Typically black-box (web app pen-testing) pays US $250/hour ($2k/day,
assuming an 8-hour day). Often these are 2-week assessments that are
billed as $20k/project.
White-box, or secure code review, is typically at US $350/hour.
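As a sanity check, the day and project figures follow directly from the hourly rates. A quick back-of-the-envelope calculation (the 8 billable hours/day and 10 business days per 2-week engagement are conventions, not fixed rules):

```python
# Back-of-the-envelope engagement pricing from the hourly rates quoted
# above. The day/project assumptions (8 billable hours/day, 10 business
# days per 2-week engagement) are illustrative conventions.
HOURS_PER_DAY = 8

def day_rate(hourly):
    """Daily rate derived from an hourly rate."""
    return hourly * HOURS_PER_DAY

def project_rate(hourly, days):
    """Fixed-bid project price from an hourly rate and a day count."""
    return day_rate(hourly) * days

black_box = 250  # US$/hour - web app pen-testing
white_box = 350  # US$/hour - secure code review

print(day_rate(black_box))          # 2000
print(project_rate(black_box, 10))  # 20000
print(day_rate(white_box))          # 2800
```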
Different applications and organizations have very different testing
requirements, which would have different testing methodologies and
practices applied to them.
In the case of web applications (you mentioned XSS and CSRF, so I
assume you're leaning towards these types of applications), I would
rather see some sort of strategy consulting take place first (usually
$7k/day).
Usually vulnerabilities in web applications are due to issues with the
software development life cycle - many shops do not even have a formal
one. Instead of providing a long, drawn-out 2-week vulnerability
assessment (which is going to find mostly low-hanging-fruit
vulnerabilities such as SQLi, XSS, HRS, etc.), it is probably best to
determine the root cause of such issues. If developers aren't working
with a formal coding standard, using static code analysis, and
implementing continuous integration - then there is little point in
performing a penetration test (it is guaranteed to find things).
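As a toy illustration of what a formal coding standard plus automated static analysis buys you: even a trivial lint pass can catch the patterns behind SQLi and XSS at commit time. The banned patterns below are invented for illustration and are far cruder than any real SCA tool:

```python
import re

# Invented coding-standard rules, purely for illustration:
# flag string-formatted SQL (a root cause of SQLi) and raw
# innerHTML assignment (a root cause of DOM XSS).
BANNED = [
    (re.compile(r'execute\(\s*["\'].*%s'), "string-formatted SQL (possible SQLi)"),
    (re.compile(r'\binnerHTML\b'), "raw innerHTML assignment (possible XSS)"),
]

def lint(source):
    """Return (line_number, message) pairs for each rule violation."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in BANNED:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

bad = 'cursor.execute("SELECT * FROM users WHERE name = \'%s\'" % name)'
print(lint(bad))   # flags line 1 as possible SQLi
good = "cursor.execute(query, params)"
print(lint(good))  # []
```

A check like this running in continuous integration is what makes a coding standard enforceable rather than aspirational - which is the root-cause fix the paragraph above argues for.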
A strategy consulting engagement could be 1-2 days and be followed by
a vulnerability assessment scheduled for a later date (typically 3 or
6 months out). The result could be a SWOT analysis with an action plan
to get development teams to utilize coding standards, SCA, and
continuous integration along with a formal software life cycle. If the
company already has those, then process improvements such as CMM (&
SSE-CMM) or integration of security testing with the life cycle (e.g.
Microsoft SDL, the Cigital/DHS BuildSecurityIn program (based on Gary
McGraw's Touchpoints), OWASP CLASP, etc.) could be suggested and a
tailored solution with roadmap provided to the client.
Again, for the situation where web application vulnerability
assessment is performed, this is best done using the source code after
performing a code walkthrough and secure design inspection. The
walkthrough gives an overview from the perspective of the developers,
while the design inspection can provide information that can be used
to improve the security testing/inspection.
Probably the best way to perform a web application vulnerability
assessment is to first run a basic static bytecode analyzer as a
build-server integrated (e.g. Ant) task (or use the Veracode
SecurityReview service). Follow that up with a build-server
integrated secure static source code analyzer (preferably always using
tools that are CWE-Compatible/CWE-Effective, or candidates for those
programs), again improving the test-cases from earlier test results.
Then, an automated fault-injection security scanner (combined with
knowledge from all earlier test-case data) should also be integrated
with the build-server. However, I don't think any such solution
exists at this time, except possibly IBM/Watchfire AppScan Tester and
HP/SPI Dynamics QAInspect and DevInspect, which integrate with
Microsoft TFS. Instead, it might be easier to load AppScan DE or
DevInspect into the IDE, unless it can be combined with existing
commercial functional testing suites that are supported by AppScan
Tester or QAInspect. For home-built open-source solutions, I'd look
at combining fault-injection with Canoo WebTest, Jameleon, Windmill
(or possibly Selenium RC, Wati[rnj], or Sahi). As a final step, some
or all components can be manually inspected.
All security-related issues should be submitted to the issue tracking
system, with potential false positives receiving low priority and
verified vulnerabilities tracked at high priority. When a defect is
fixed, the fix should be turned into a unit test that asserts the
behavior of the fix, both to find and fix as many related
vulnerabilities as possible and to provide a regression test against
the original bug. Some classes of vulnerabilities should also feed
into a final secure architectural review - to be turned into filters
that a web application firewall (WAF) can use, provided as a pattern
that a rate-based IPS (RBIPS) can sample for, sent to operators as an
error/log-event message (e.g. via IDS), or used by the application to
log out or disable a user. Other architecture changes - such as
database/service encryption, configuration lock-downs, system
hardening, etc. - can be submitted to a change management system as
tasks/changes to be completed.
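The fix-becomes-unit-test step can be sketched quickly. Here `render_greeting` is a made-up stand-in for the patched code, and the assumed fix is output-encoding via the standard library:

```python
import html

def render_greeting(name):
    # The assumed "fix": output-encode user input before it reaches
    # the page (html.escape is Python's stdlib encoder).
    return "<p>Hello, {}!</p>".format(html.escape(name, quote=True))

# Regression test distilled from the original bug report: the exact
# payload that triggered the defect must stay inert...
original_payload = '<script>alert(1)</script>'
out = render_greeting(original_payload)
assert "<script>" not in out
assert "&lt;script&gt;" in out

# ...and the fix should generalize to related injection variants,
# which is how one fix helps flush out sibling vulnerabilities.
for variant in ['" onmouseover="x', '<img src=x onerror=y>']:
    assert variant not in render_greeting(variant)

print("regression tests passed")
```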
There's also a nice checklist provided here -
http://portswigger.net/wahh/tasks.html
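The automated fault-injection step mentioned earlier can also be sketched in miniature: loop a payload corpus through a request handler and flag verbatim reflections. The deliberately vulnerable `search_page` handler and the tiny corpus are invented for illustration; real scanners use far larger corpora and smarter detection:

```python
import html

# Invented, deliberately vulnerable handler standing in for a real
# app: it reflects the query parameter without output encoding.
def search_page(query):
    return "<h1>Results for: {}</h1>".format(query)

# A tiny fault-injection corpus, purely illustrative.
PAYLOADS = [
    '<script>alert(1)</script>',
    '"><svg onload=alert(1)>',
    "' OR '1'='1",
]

def scan(handler):
    """Report payloads the handler reflects back unencoded."""
    findings = []
    for payload in PAYLOADS:
        response = handler(payload)
        if payload in response:  # reflected verbatim => likely injectable
            findings.append(payload)
    return findings

print(scan(search_page))  # all three payloads reflect
print(scan(lambda q: search_page(html.escape(q, quote=True))))  # []
```

The same loop is what a build-server-integrated scanner automates, with each finding feeding back into the test-case data for the next run.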
In the case that the client is seeking help with compliance, then I'm
sure you can add some zeros to the dollar amounts, provide less
documentation, and make sure you meet all of the input controls
without providing any output besides a shiny report with the stamp of
approval.
Cheers,
Andre
This archive was generated by hypermail 2.1.7 : Sat Apr 12 2008 - 10:58:14 EDT