As a Product Owner,
I don't want my product
being 0wned

Antti Vähä-Sipilä, F-Secure
antti.vaha-sipila@f-secure.com / avs@iki.fi
Twitter: @anttivs
Available online at https://fokkusu.fi/agile-security-slides/

Hey! This was last updated in 2015! That's a long time ago.

This online presentation is subject to change and may not be the same one that was presented at any single conference. Related material has been presented under various guises (at least) at Topconf Tallinn 2012, BSIMM Community Conference 2013, SAP Security Expert Summit 2014, and Scan-Agile 2015.

Also, did I already note that this was written in 2012-2015? So this is old stuff.

This presentation has extra notes for non-interactive viewing. You can access the notes by pressing the down arrow, or by clicking the arrow in the bottom right corner.

You are standing in the basement. There is an arrow pointing up. You hear the muffled noise of the audience upstairs. There is a sign here that reads:

Congratulations! You found the explanations track. Each slide has a 'down' arrow, and pressing it will show some more detail of what I will be talking about. This is intended for those who are reading the slides outside the live presentation. Press the 'up' arrow to get back on the main slides track.

Software security
activities


  • Things aiming at more secure software
  • Analysis, testing, training, measuring...
  • BSIMM (bsimm.com): 12 practice areas, 112 activities

Software security (or a closely related term, application security) is about ensuring that software does not have vulnerabilities. This is different from "security software" (like a firewall, or antivirus, or using a crypto protocol). A security feature can be one way to tackle a security risk in software, but security software also needs to be secure.

Software security is one of the 'qualities' of software. Therefore, it is not surprising that secure software can be described by extending the notion of good quality. Security could be said to be "robust quality".

A key point is that security is quality against an active attacker. Therefore, traditional quality assurance does not always perform well enough. Active attackers will try to exploit the weakest (e.g., the least tested) areas.

There are a lot of ways in which to (try to) ensure that software has no vulnerabilities. Every tool vendor is likely to portray their solution as the best one, or at least a necessary one. In reality, what you use depends on your culture, budget, language / environment / frameworks, skills, and time.

BSIMM (Building Security In Maturity Model) is a survey of software security activities, where data has been collected from dozens of (mainly large) companies using qualitative interviews. (In the interest of full disclosure, both Nokia and F-Secure, where I have worked, have been BSIMM community member companies.) The activities have been divided into 12 practice areas. BSIMM has statistics on their popularity - but this does not mean that the popular activities are the most effective.

The most important
takeaway

If a software security activity is not on a backlog, it will not get done.

No, this is not entirely true. Some activities might get done if you have a security guy persistently lobbying and pushing for it. But in a real (especially larger than one team) enterprise, work that is not on the product backlog has a very dim future.

This point is based on economics. All security activities are work. If work has no time and human resource allocation, it will not be done. Time and human resources must be allocated through the same allocation strategy as your immediate value-creating work (e.g., functional features); otherwise, security work (which is rarely seen as creating immediate value) will get optimised away.

We will get into this topic later when we discuss non-optimal ways for driving software security activities.

R&D software security activities

  • One-off / per feature tasks
    • Security feature implementation
    • Threat modelling
  • Continuous activities
    • Security testing
    • Static analysis
  • Ways of working
    • Guidelines
    • Awareness

There are three categories of security activities: 1) one-off activities that are done once, either per product or per feature; 2) continuous activities that should be done all the time, to all code; and 3) ways of working, which are about how things need to be done (as opposed to what).

Security features are functionality that controls some aspect of security. For example, passwords, encryption or use of TLS, or a sandbox can be security features.

Threat modelling will be discussed in detail next.

Security testing and code review (static analysis) should be ongoing concerns, either automated or manual; if testing is manual, it won't be continuous but will be at least recurring. The bulk of security testing should be automated to the extent possible.

In addition, there are some activities that relate to how things should be done - for example, following secure coding guidelines.

In a commercial setting, it is not enough to just do these activities. You also need to be able to provide evidence (i.e., something auditable or tangible). This is especially important if you operate in an area that has compliance requirements.

The challenge in agile is how to drive these activities in a way that ensures they get done, produces evidence of them, and does not destroy the properties of agility.

Threat modelling

  • Many schools of thought; I do data flow analysis
  • Use a Data Flow Diagram or a Message Sequence Chart, consider all flows and storage
  • Use a framework such as STRIDE (Microsoft)
  • Discovers other required security activities
    • Security feature needs, test requirements, rearchitecting work...
  • Can include a Privacy Impact Assessment (PIA)

There are many ways to do threat modelling (also called "architectural risk analysis" or "threat analysis"). Some people prefer threat trees, some have unstructured discussion. In my line of work, teaching engineers to do it is just as important as the results themselves, so I am using a data flow based threat modelling technique. I have successfully used it in dozens of facilitated sessions for components ranging from embedded device drivers to cloud-deployed web services.

In doing this sort of threat model, we would start with a Data Flow Diagram (DFD) or a Message Sequence Chart (MSC) depending on the complexity of interactions. Each data flow, data store, and processing entity, will be discussed from six aspects that make up the acronym STRIDE (Spoofing, Tampering [Integrity], (non-)Repudiation, Information disclosure [Confidentiality], Denial of Service [Availability], and Elevation of Privilege).
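To make the mechanics concrete, here is a minimal sketch (in Python) of what STRIDE-per-element enumeration amounts to. The DFD elements and the discussion prompts are illustrative assumptions, not a fixed part of the method:

```python
# A minimal sketch of STRIDE-per-element enumeration over a Data Flow
# Diagram. The DFD elements and prompt texts below are illustrative.

STRIDE = [
    ("Spoofing", "Can someone pretend to be, or to talk to, this element?"),
    ("Tampering", "Can the data be modified in transit or at rest?"),
    ("Repudiation", "Can an actor deny having performed an action here?"),
    ("Information disclosure", "Can confidential data leak out of this element?"),
    ("Denial of service", "Can this element be made unavailable?"),
    ("Elevation of privilege", "Can someone gain rights they should not have?"),
]

# DFD elements: data flows, data stores, and processing entities.
dfd_elements = [
    "browser -> web frontend (login credentials)",
    "web frontend -> session store (session tokens)",
    "web frontend process",
]

def discussion_points(elements):
    """Yield one discussion prompt per element and STRIDE category."""
    for element in elements:
        for category, question in STRIDE:
            yield f"{element}: {category} - {question}"

if __name__ == "__main__":
    for point in discussion_points(dfd_elements):
        print(point)  # each finding becomes a backlog item or an AC
```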

Findings are stored on the product backlog as tasks or as acceptance criteria on existing features. These tasks can be, for example, a functional security feature requirement; test cases that need to be built into test automation; changes to architecture; documentation; or something else. The important thing is that everything needs to be on a backlog, either as its own item or as an acceptance criterion.

For more information on STRIDE, and its nuances, have a look at Threat Modeling: Designing for Security by Adam Shostack, or my Software Security course at Aalto University in 2015 (or 2014).

On how to integrate privacy aspects into threat modelling, see my presentation on discovering technical privacy requirements.

Driving the backlog

Worse alternatives

  • "Themed" sprints (e.g., "security" sprints)
  • "Bucketized" security development work
  • Teams' Definition of Done

There have been other suggestions of how security work ought to be driven. I, myself, have advocated using the Definition of Done too, but I’ve changed my mind.

Themed sprints cause work to be "pushed into the future"; that is, technical debt accrues until a themed sprint. It is not guaranteed that the themed sprint will ever come, or that the debt can be paid back during the allocated time.

"Bucketization" of security development work is an idea (from Microsoft's SDL-Agile) where some security engineering tasks must be done at every increment, some at every second increment, and so on. This is bad because it decouples security work from the flow of actual functional requirements and takes control away from product management. From outside the team, it also looks like the team just gets slower in its throughput (shows decreased velocity).

Having work in a Definition of Done is just another way of looking at bucketization. Instead of a Definition of Done, use Acceptance Criteria that are tagged to specific functional requirements. When you toss out a functional requirement, the Acceptance Criteria - and the related security work - go out with it.

Work that is outside the backlog is problematic because it doesn't get prioritised against all other backlog items. This puts it in a category that is not subject to the same business decisions - either it becomes "sacred" work that cannot be touched, or work that doesn't get done at all.

Also, this is a kind of "invisible work" or "dark matter" - to an external observer, it looks like the team is wasting its time on something other than the features on the backlog. The external observer doesn't see that the team is busy fuzzing or conducting ARA or whatever. In the worst case, they'll start to pressure the team to drop this invisible work and just get on with the features.

(A side comment: A Definition of Done is a set of quality criteria that describes when a backlog item is ready for delivery. Most agile coaches agree that a Definition of Done should be agreed between the Product Owner and the development team; yet in many organisations, the Definition of Done is imposed as an external requirement set on the team. In the former case, it is not guaranteed to contain software security activities; in the latter case, it is not guaranteed that the teams follow it.)

One-off /
feature specific
activities

  • Straight onto the Product Backlog
    • As backlog items or Acceptance Criteria
  • “Attacker stories” or “Misuse stories” as a proxy

Functional security features are "easy" to drive. This is because they're just like any other functional feature, and agile product management is very good at getting this sort of work scheduled and done.

The challenge is that customers and Product Owners may not always have the technical background to specify obscure security requirements - and especially to prioritise them well enough.

A way to drive these requirements may be through "attacker stories" or "abuser stories". These would be negative stories that say what should not happen. The developers will then invert these into positive functional feature tasks later. This is, therefore, essentially a way to communicate about security needs without having to open them up too early.
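As a sketch of what the inversion can look like, take a hypothetical attacker story: "As an attacker, I want to read another user's profile by guessing its ID." A developer might invert it into a positive, automatable check along these lines (pytest-style; the URL, token, and IDs are made-up assumptions):

```python
# Inverting a hypothetical attacker story into a positive test case.
# The base URL, token, and profile ID are illustrative assumptions.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment
ALICE_TOKEN = "..."    # credentials for user Alice, injected by the CI system
BOB_PROFILE_ID = 42    # a profile belonging to a different user, Bob

def test_cannot_read_other_users_profile():
    response = requests.get(
        f"{BASE_URL}/api/profiles/{BOB_PROFILE_ID}",
        headers={"Authorization": f"Bearer {ALICE_TOKEN}"},
        timeout=10,
    )
    # The attacker story must fail: access is refused or hidden.
    assert response.status_code in (403, 404)
```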

See also Jim Bird's blog post on abuser stories.

Threat modelling itself (a source for much of these features) can be driven as Acceptance Criteria. New development that is seen as sensitive can be tagged with an Acceptance Criterion to do threat modelling.

Continuous security activities

  • Examples: Static analysis, security testing
  • Instead of creating a task to do the work, create a task to automate the work
    • Example: Instead of a security testing task, create a task to create automated security test cases
    • Example: Run an initiative to take pull request based code review into use in the team

Most of the security activities are engineering activities. These are typically either fact-finding exercises (like threat modelling) or testing activities.

Driving these recurring or continuous activities through a backlog is difficult, because backlog items are consumed and vanish. Just adding them to the backlog again and again is not really how the backlog is supposed to work.

On the other hand, just putting these into some generic quality gate (like Definition of Done) requires a very mature and disciplined team! More so if you need to get evidence of the activities - just saying "yeah, we do it" may not suffice.
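One way out is to phrase the backlog item as "build the automated check" rather than "do the check", so the evidence accrues on every build. A minimal sketch of such an automated security test case, assuming a hypothetical staging URL and an illustrative set of required HTTP response headers:

```python
# Backlog item: "add an automated security header check to the test
# suite". Once merged, it runs on every build with no recurring task.
# The target URL and the required header set are illustrative assumptions.
import requests

TARGET = "https://staging.example.com"  # hypothetical deployment under test

REQUIRED_HEADERS = {
    "strict-transport-security",
    "x-content-type-options",
    "content-security-policy",
}

def test_security_headers_present():
    response = requests.get(TARGET, timeout=10)
    present = {name.lower() for name in response.headers}
    missing = REQUIRED_HEADERS - present
    assert not missing, f"missing security headers: {missing}"
```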

"Easy" to automate?

  • Static analysis (at least linters)
  • Scan dependencies for known vulns
  • Extract software inventory, track vuln data
  • Web app automated vuln scanning
  • Cookie compliance scans
  • Database injection testing
  • Fuzzing
  • TLS configuration on deployed servers
  • Open ports, running processes on deployed hosts

If you talk to static analysis tool vendors, they're likely to portray the tool as a silver bullet. In test automation and especially Continuous Delivery, you'll want to ensure that the analysis provides rapid results. The closer you bring it to the actual coding, the better the feedback loop is likely to be. Some would like IDE or editor integration, and if that is available for your platform / framework combination, why not? When evaluating a static analysis tool, you should ensure you pilot it with your code and your set of frameworks. If you change your language and frameworks often from project to project, it may be that static analysis is not your first choice as a software security control.
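As an example of wiring static analysis into the build, here is a hedged sketch assuming a Python codebase and the bandit analyser; the source directory and the fail-on-high-severity policy are my assumptions, not an endorsement of a particular tool:

```python
# A sketch of a build-time static analysis gate, assuming a Python
# codebase and the 'bandit' analyser. Directory and policy are illustrative.
import json
import subprocess
import sys

def high_severity_findings(source_dir="src"):
    """Run bandit and return its high-severity findings."""
    result = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    return [i for i in report["results"] if i["issue_severity"] == "HIGH"]

if __name__ == "__main__":
    findings = high_severity_findings()
    for issue in findings:
        print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
    sys.exit(1 if findings else 0)  # break the build on high severity
```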

Knowing your software inventory is very useful for vulnerability tracking and management, and the build process gives you a real opportunity to see exactly what you're including. As an example, Node.js applications' dependencies (and their dependencies) can be checked against known vulnerabilities with nsp. Running this sort of tool at build time (or test run time) makes sense.
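nsp itself is Node-specific (and the tooling landscape has moved on since this was written); as a hedged Python analogue, the following sketch assumes the pip-audit tool is installed and relies on its behaviour of exiting non-zero when a known-vulnerable dependency is found:

```python
# A dependency audit gate for the build, assuming the 'pip-audit' tool.
# The requirements file path is an illustrative assumption.
import subprocess
import sys

def audit_dependencies(requirements="requirements.txt"):
    """pip-audit exits non-zero if any dependency has a known vuln."""
    result = subprocess.run(["pip-audit", "-r", requirements])
    return result.returncode

if __name__ == "__main__":
    sys.exit(audit_dependencies())
```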

If you deploy servers to a cloud, running sanity checks against them from inside and outside of those nodes is a good idea, because that helps to catch issues related to changed deployment templates. Having your host configuration and deployment automation under source control is a good idea.
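A sketch of such an outside-in sanity check, using only the Python standard library; the host name, the expected-ports policy, and the list of probed ports are illustrative assumptions:

```python
# Outside-in sanity check for a deployed node: verify the negotiated
# TLS version and that only expected ports answer. The host and port
# policies below are illustrative assumptions.
import socket
import ssl

HOST = "service.example.com"       # hypothetical deployed host
EXPECTED_OPEN = {443}              # ports that should accept connections
PROBED_PORTS = {22, 80, 443, 8080}

def tls_version(host, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()   # e.g. 'TLSv1.2' or 'TLSv1.3'

def open_ports(host, ports):
    found = set()
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=2):
                found.add(port)
        except OSError:
            pass                   # closed or filtered
    return found

if __name__ == "__main__":
    print("TLS version:", tls_version(HOST))
    unexpected = open_ports(HOST, PROBED_PORTS) - EXPECTED_OPEN
    assert not unexpected, f"unexpected open ports: {unexpected}"
```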

Ways of working

  • Examples: Avoid some language specific pitfalls, use a specific API only in a certain way
  • The problem: They're not tasks with a beginning and an end.
  • The upside 1: Often, these are checklists that can be checked with tools.
  • The upside 2: Often, there is a framework / library that abstracts the issue away.
  • Therefore: If you can make it a checklist, do that.
  • Create a backlog item of taking a tool into use that enforces the checklist.

Things like "secure coding" are heavily language specific. In C, there's a lot of pointer arithmetic that can go wrong. In OO languages, you might have a singleton leaking info between several sessions. And you should use prepared SQL statements instead of string concatenation. And so on. This is primarily an education and training issue, and very difficult to drive as a "task".
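The SQL pitfall is easy to demonstrate concretely. A self-contained illustration using Python's sqlite3 module (the table and the attacker-controlled input are made up):

```python
# String concatenation vs. a parameterised (prepared) statement.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Wrong: concatenation lets the input rewrite the query.
query = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())   # returns rows it should not

# Right: a parameterised statement treats the input as data only.
print(conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall())                           # returns []
```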

Luckily, many of these can be detected using suitable tools. Then it becomes a backlog task of taking the tool into use. (And hopefully it will run on new code ever after.)

Also, mandating the use of a certain framework or technology may remove some of these threats. For example, session management in modern web frameworks is not broken by default - don't roll your own.

Checklists aren't the best way to come up with threats and requirements, because checklists cannot think outside the box.

However, checklists are great when you know exactly a set of things you have to do or remember. Think about preflight checking, medical operations checklists, etc.

Tools are a way to codify checklists into automation. Checklists are also encapsulated in frameworks, API wrappers, and other technologies, that transparently take care of things in the right way.

If a security aspect cannot even be checklisted, it sounds like "art" or "silent knowledge". I would take care of this by having some hardcore security people around and offering their services to the development teams as needed - if these are the only types of issue you have left, you are in very, very good shape anyway.

Evidence
and compliance

(Or, talking to the CISO)

The benefit of
being backlog-centric

  1. All security work is visible and gets prioritised against other business needs
  2. Security work that has not yet been done is explicitly visible
  3. Security work that has been done is in the "done" pile, showing as evidence of work done
  4. Feature-specific security work follows the feature. Drop the feature, drop security work

Caveats:

  1. If your organisation uses acceptance testing that has been segregated from your development (i.e., different people develop and test), it is hard to drive testing through the backlog.
  2. What to do about underlying architectural decisions that pre-date the actual implementation, like selecting a platform or implementation language? I don't have a really good answer, but it would fit the concept of a "Sprint Zero" - a kind of bootstrapping sprint many teams use. But that would be a themed sprint, which I already denounced earlier. So I need to think more about this. For now, I suggest doing a "Sprint Zero" and considering applying attack modelling activities (BSIMM activity AM2.2) in that context.
  3. Also, high-level "enterprise security architecture" work that is done up front, BDUF style, is tricky. If the organisation has a separate enterprise architecture ivory^H^H^H^H^H team that does this sort of work outside the dev teams before actual development starts, it really isn't very Agile, and I don't have a good answer for that.

"Evidence" explained

    Over-confidence is regulated by requiring evidence.

    Fear and uncertainty are managed by producing evidence.

    Good evidence is:

    1. Auditable by someone else (e.g., customer)
    2. Created with little or no overhead
    3. Directly linked to the artifact (i.e., results, not plans)

In many cases, I will be referring to the requirement of creating evidence of software security. This is an over-arching theme: I reject agile security activities that are unable to create evidence.

Evidence is required if you want to do risk based software security work (which you do want to do). We will get back to this later.

Good evidence need not (and actually should not) be of forensic quality.

It needs to be auditable - meaning that someone other than you can check its existence. If it cannot be seen by a customer, you cannot sell it as added value to a customer.

It needs to be created with a very small overhead. Preferably all activities should produce the evidence as a side-effect. Writing an extra document is almost always the wrong answer - refer to the Agile Manifesto.

It needs to be linked to the artifact. Instead of test plans, provide test results as evidence. Instead of a general policy document, provide features' acceptance criteria as evidence.

Prefer "lazy" evidence

  • In most cases, a written document is waste
  • If it is really required, writing it must be a backlog item
  • Otherwise, just ensure that you can pull out evidence of software security activities from JIRA / Git when needed

Being able to gather evidence in a way that does not cause waste is important. Instead of producing specific extra documentation up front, you should aim for auditable evidence that can be pulled out when needed.

If you can show that you have done backlog items (and those items have security aspects), or if you can show you have security test cases or tools running, these are essentially "free" evidence. You get it for "free" because you document that anyway as part of your work management process.
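As a sketch of what such "free" evidence can look like, the following assumes a commit message convention where security-related backlog items carry SEC- issue keys, and pulls an auditable list straight out of Git:

```python
# "Lazy" evidence from version control: commits that reference
# security-tagged backlog items. The SEC- key convention is an
# illustrative assumption.
import subprocess

def security_commits(repo_path="."):
    """Return one-line summaries of commits mentioning SEC- keys."""
    result = subprocess.run(
        ["git", "-C", repo_path, "log", "--oneline", "--grep=SEC-"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    for line in security_commits():
        print(line)   # auditable, produced with zero extra overhead
```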

If you are a CSO/CISO who now shouts "That is not enough!", you may be right. After all, you know your specific domain. But in this case, you should be prepared to require each and every piece of evidence through the backlog. And you need to convince the Product Owner that this evidence or document is worth generating. After all, Product Owners own the business case.

If you are a progressive CSO/CISO, you understand that you want an information radiator in your office that shows the current state of security work, pulled automagically from your backlog management systems. This is not as much for knowing the status, but for peace of mind knowing that the evidence (data) is there, and automatically retrievable, should you need it.

Residual risk

Identified risk - Mitigated risk = Residual risk

  • Backlog items that are done are evidence of mitigation
  • Backlog items that are not done are the residual risk

If you need to communicate with a CSO/CISO type in your organisation, they're often fond of things like "residual risk". This is a fancy term for risks you know exist and you haven't sufficiently controlled.

Residual risk is important because it has to be accepted; if it's too large, some other way to manage it may be necessary (e.g., insurance).

If you follow the method where threat modelling is done, and it generates activities on the backlog, you already have all the data you need. To enhance automatic visibility to risk management, you might want to tag the identified risks as "security" or something. This way, you could even have a real-time residual risk display for your risk management.
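A sketch of where such a display could pull its numbers from, assuming JIRA's REST search API, a "security" label convention, and made-up credentials:

```python
# Residual risk pulled from the backlog, assuming JIRA's REST API
# and a 'security' label convention. URL and credentials are made up.
import requests

JIRA = "https://jira.example.com"   # hypothetical JIRA instance
AUTH = ("bot-user", "...")          # credentials injected from CI secrets

def count(jql):
    response = requests.get(
        f"{JIRA}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},
        auth=AUTH, timeout=10,
    )
    return response.json()["total"]

identified = count('labels = "security"')
mitigated = count('labels = "security" AND status = Done')
print(f"residual risk items: {identified - mitigated}")
```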

Some final words

Separation of duties

  • Some specific domains segregate development and testing
  • Your solution there is Test Driven Development
  • Redefine: Developers do TDD; testers verify test case validity
  • Only independently validated test cases accepted

Specifically in finance, there is the concept of separation of duties. One person should not be able to both develop and test a feature. This seems to be a fairly major blocker for using many agile methods in these industries.

However, you can redefine the separation of duties so that developers still develop and implement the test cases, but for the tests to count, a separate person must have validated each test case (through machine-aided and manual validation work).

This also cuts down on the test effort because test cases only need to be re-validated if they change.
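One possible mechanisation of that rule: a tester approves a test file by recording its hash, and CI flags any validated test that has changed since approval. A sketch; the approval-file format and repository layout are assumptions:

```python
# Separation of duties in CI: fail if a validated test file changed
# after a tester approved it. File layout and format are illustrative.
import hashlib
import json
import pathlib
import sys

APPROVALS = pathlib.Path("validated_tests.json")  # maintained by testers

def sha256(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check():
    approved = json.loads(APPROVALS.read_text())  # {filename: digest}
    stale = [
        name for name, digest in approved.items()
        if sha256(pathlib.Path(name)) != digest
    ]
    for name in stale:
        print(f"{name} changed since validation - needs re-approval")
    return 1 if stale else 0

if __name__ == "__main__":
    sys.exit(check())
```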

Buying
security consulting
in agile

  • Buy threat modelling first, security test cases second
  • Integrate consultants through your backlog (e.g., JIRA)
  • Require them to coach your Product Owners
  • Pentest? Maybe, if you have money left

The traditional way to buy security consultancy is to buy a "penetration test" or a "security assessment". Usually, this does not fit in any way into an agile delivery project, much less Continuous Integration.

Instead, you should ensure that your teams know how to do threat modelling. A good facilitator (a consultant) can teach this by actually doing it with your developers. Therefore, you should buy threat modelling help as the first priority.

For all the security testing that can be automated, buy test cases that you can run in your test automation, and only buy manual testing if the tests really cannot be automated.

In order to trigger threat modelling, choose a security consultancy you trust, and integrate them (preferably through tools) into your product management process. Let them discuss with, and coach, your Product Owners and sell threat modelling and security test cases to you. In the best case, when your Product Owner puts a new feature on the backlog, the consultant could already propose some Acceptance Criteria.

Consultancies cannot work fully "on demand" unless the volume is large, and currently, this type of security consulting is rare. You need to have some sort of cadence in your backlog refinement ("grooming") and also probably have the consultancy on a retainer agreement.

Unsure where to start?

  • Threat modelling
    • It will feed your backlog
  • Ready-made "security stories"

If your product management or teams have absolutely no security background, it may be difficult to get the ball rolling.

My suggestion would be to train someone in threat modelling (architectural risk analysis), or buy this as a consultancy service, and start it with a major new feature. Done properly, this will feed your backlog. However, if you buy it from outside, ensure whoever facilitates it understands agile.

If even this sounds too complicated, there are lists of "ready-made" security stories that you can inject into your backlog. Of course, doing that work requires security knowledge, but at least you have some tasks that very likely need doing - so at least you can discuss your residual risk. A list of stories we (me, @gedfi, and @sukelluskello) wrote at F-Secure has been published in the Secure Agile Software Development Life Cycle book. SAFECode has also published a list of their own.

CI and DevOps

  • We were already assuming mature test automation
  • See:
    • Infrastructure as code, immutable infrastructure
    • Gauntlt, Mittn and BDD-Security
    • Google "Rugged DevOps", "SecDevOps" for buzzwords
    • Unikernels

The ink wasn't even dry on the white papers of the DevOps bandwagon when "rugged" or "Sec" DevOps were coined. Essentially, this means focusing on the test automation, and running everything in a CI system. What I discussed earlier in this presentation is pretty much compatible with secure DevOps practices.

I myself authored one test framework, Mittn, that currently can run Burp Suite Professional's automated scanning, do fuzz testing against JSON/web form submissions, and check TLS server configs. This is not rocket science, and once running, should be a fairly small maintenance burden.

If you have your DevOps glasses on, you'll likely concentrate a bit more on the deployment side, and also take a "cloud style" approach to vulnerability management - stuff like Netflix' Security Monkey, and using immutable servers and software-defined networking that are deployed from sources under version control ("infrastructure as code").

Of the purely technical aspects, unikernels and "library OS" are trends that you probably want to keep your eyes on. I recommend "After Docker: Unikernels and Immutable Infrastructure" for a short primer.

Your reading list

Thank you

Always eager to hear about real cases!

Twitter: @anttivs

Email: avs@iki.fi

Available online at https://fokkusu.fi/agile-security-slides/

Powered by reveal.js