The Problem
With the modernisation and commoditisation of development environments, every organisation can create apps far more easily than even five years ago. This rapid uptick not only gives cyber criminals more potential avenues to pursue; the sheer volume of devs that democratisation brings also drags down the average level of security professionalism. When the main metrics for success are capabilities, adoption, and user engagement, security is often seen as anathema to the overall success of the project. Yet none of these aspects should preclude the others; to reach the level of security assurance organisations will insist upon, security and functionality should be developed and evolved in concert throughout the Software Development Life Cycle (SDLC).
Definition, sources & leadership, frameworks
‘Secure By Design’ or ‘Security By Design’ describes an approach to software development in which security considerations are ‘baked in’ to the process. There are several prominent frameworks, along with countless interpretations and forks and, as always, organisations ‘reinventing the wheel’, but OWASP’s Security by Design Principles tend to be what most gravitate towards.
Motivations – the why
Historically, security measures have typically been retrofitted to new applications. While better than releasing untested apps into the wild, this afterthought approach has led to countless incidents. The only alternative has been monolithic security endeavours, whose sheer resource requirements dissuade most organisations from such ambition. DevOps, and more recently DevSecOps-oriented environments, have pulled Test-Driven Development (TDD) into the milieu to ‘formalise and normalise’ the approach.
While security has largely been seen as a perimeter issue, this new(ish) paradigm is often an attempt to have someone continuously pour disinfectant over accrued wounds, rather than having devs learn to handle a kitchen knife in the first place.
“By looking at a continuous and all-pervasive approach to security through all phases of the SDLC, we can see opportunities to ‘prevent rather than cure’.”
– Gordon Draper
With a ‘shift left’ (nomenclature for deliberately moving security measures earlier in the SDLC), we can head off significant issues before they end up built into the final product, leaving either security holes or, if testing only occurs at culmination, a failed project. The optimal solution, though, is for devs to really understand security from day one rather than have someone mopping up behind them.
Including Security by Design for Security Assurance
A piecemeal approach is rarely sought deliberately; it typically stems from ignorance, apathy, ambivalence, or budget, and just as likely a combination of all four. In this article, we want not only to establish an overview of Security By Design practices, but to look at how we might introduce effective, low-friction methods that at least improve software project security and provide a minimal threshold for Security Assurance.
Principles & Architecture
According to OWASP, there are 10 key principles for secure development.
Of course, as the size and complexity of an application increase, so too does the potential for security issues to arise. OWASP captures this universal law in Reduce the Attack Surface: avoid introducing new attack vectors, which are often unnecessary in the first place.
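To make the idea concrete, here is a minimal Python sketch (with hypothetical feature names and handlers) of building a routing table from a deliberate allow-list, so functionality a deployment doesn’t need never becomes an entry point:

```python
# A sketch of attack-surface reduction: only the features a deployment actually
# needs are registered, so unused functionality is never exposed.
# Feature names and handlers below are hypothetical.

ENABLED_FEATURES = {"search", "export"}   # deliberately small allow-list per deployment

def handle_search(params): ...
def handle_export(params): ...
def handle_bulk_import(params): ...       # powerful, rarely needed, so not enabled by default

ALL_HANDLERS = {
    "search": handle_search,
    "export": handle_export,
    "bulk_import": handle_bulk_import,
}

# Build the routing table from the allow-list rather than exposing everything.
ROUTES = {name: fn for name, fn in ALL_HANDLERS.items() if name in ENABLED_FEATURES}

def dispatch(name, params):
    handler = ROUTES.get(name)
    if handler is None:
        raise PermissionError(f"Feature '{name}' is not enabled in this deployment")
    return handler(params)
```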
When applications are deployed, their settings should default to the most secure option. OWASP refers to this as ensuring secure-by-default. Sometimes a new application ships with settings that prohibit certain less secure operations which would otherwise benefit a user’s workflow; this principle is about setting that parameter (and all others) to the most secure value, and only relaxing it with full disclosure of the trade-off. Similarly, when accessing applications, the concept of Least Privilege should be exercised: the default level of access or ‘power’ a user has within the system should be no more than what is needed to perform their assigned tasks. A typical user doesn’t need an admin account.
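A small sketch of what secure-by-default settings and least privilege might look like in code, using hypothetical setting names and roles:

```python
# Secure-by-default and least privilege, sketched with hypothetical settings and
# roles. New installs start at the most secure configuration; relaxing anything
# is an explicit, visible decision.
from dataclasses import dataclass

@dataclass
class AppSettings:
    enforce_mfa: bool = True            # secure default: on unless deliberately relaxed
    session_timeout_minutes: int = 15   # short sessions by default
    allow_password_reset_by_email: bool = False

# Least privilege: each role gets only the permissions its tasks require.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "manage_users"},
}

def require_permission(role: str, permission: str) -> None:
    if permission not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"Role '{role}' may not perform '{permission}'")

require_permission("viewer", "read")     # allowed
# require_permission("viewer", "write")  # would raise PermissionError
```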
Initially counter-intuitive to the “complexity is the enemy of security” sentiment, introducing more controls to mitigate potential risks is usually wise. The Defence in Depth principle is about exactly that point: multiple interwoven safety nets reduce single points of failure, so the failure of any one control is masked by the others.
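As a rough illustration, the hypothetical request pipeline below applies several independent checks to the same request, so a gap in any single layer is caught by another:

```python
# A minimal sketch of defence in depth: independent controls applied in sequence.
# The Request type and check functions are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str
    payload: str

def check_authentication(req: Request) -> bool:
    return bool(req.user)                       # is there an authenticated user at all?

def check_authorisation(req: Request) -> bool:
    return req.role in {"editor", "admin"}      # does their role permit this action?

def check_input(req: Request) -> bool:
    return len(req.payload) < 1024 and "\x00" not in req.payload   # is the input sane?

LAYERS = [check_authentication, check_authorisation, check_input]

def handle(req: Request) -> str:
    for layer in LAYERS:
        if not layer(req):
            raise PermissionError(f"Rejected by {layer.__name__}")
    return "processed"
```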
Things will go awry. Whether a third-party API you use or a database becomes temporarily unavailable, or a solar flare takes one of your data centres offline, things will go wrong. It’s important to plan for worst-case scenarios and build in the ability to Fail Securely: if X happens, then Y should be the failover state, and X should cover everything.
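A minimal sketch of failing securely, assuming a hypothetical entitlement lookup that can go down at any time; whatever goes wrong, the failover state is always “deny”:

```python
# Failing securely: if the authorisation lookup errors out for any reason
# (outage, timeout, bad data), the fallback is "deny", never "allow".
# The lookup function is a hypothetical stand-in for an external dependency.

def lookup_entitlement(user_id: str) -> bool:
    raise TimeoutError("entitlement service unavailable")   # simulate an outage

def can_access_report(user_id: str) -> bool:
    try:
        return lookup_entitlement(user_id)
    except Exception:
        # Y, the failover state: closed, not open.
        return False

print(can_access_report("u-123"))   # False: the failure is contained securely
```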
Your due diligence and responsibilities don’t stop at your perimeter. If your app relies on (or even touches) a third party, you need to understand their security posture and how their failings may affect your app and, therefore, your users. OWASP suggests we all Distrust Services by Default.
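The sketch below treats a hypothetical upstream API as untrusted input: a timeout, error handling, and validation of the returned data before anything is used. The URL and expected fields are illustrative only:

```python
# Distrusting an upstream service by default: its responses are handled as
# untrusted input rather than assumed to be well-formed and timely.
import json
from urllib.error import URLError
from urllib.request import urlopen

def fetch_exchange_rate(url: str = "https://example.com/api/rate") -> float:
    try:
        with urlopen(url, timeout=3) as resp:            # never wait indefinitely
            data = json.loads(resp.read().decode("utf-8"))
    except (URLError, TimeoutError, json.JSONDecodeError) as exc:
        raise RuntimeError("upstream service could not be trusted") from exc

    rate = data.get("rate") if isinstance(data, dict) else None
    if not isinstance(rate, (int, float)) or not 0 < rate < 1000:
        raise ValueError("upstream returned an implausible rate")   # validate, don't assume
    return float(rate)
```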
With various levels of privilege, we also need to examine role requirements. An admin may need to create user accounts, update passwords, or change any other configuration value; what they don’t need is the ability to use the service with that same account. This demarcation and limitation of function is captured under Separation of Duties, wherein each account level only has the ability to perform its own functions.
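A toy illustration of separation of duties with hypothetical roles, where an admin account cannot also use the service it administers:

```python
# Separation of duties: administrative and operational actions live under
# different account types, so no single account holds both. Roles and actions
# here are hypothetical.

DUTIES = {
    "admin":    {"create_user", "reset_password", "change_config"},
    "operator": {"place_order", "view_orders"},
}

def perform(role: str, action: str) -> str:
    if action not in DUTIES.get(role, set()):
        raise PermissionError(f"'{role}' accounts cannot perform '{action}'")
    return f"{action} done"

perform("admin", "create_user")     # allowed
# perform("admin", "place_order")   # raises: admins must use a separate operator account
```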
Don’t be reliant on Security-by-Obscurity. In fact, we’ve seen transparent, digestible measures, like Google cautioning users when a site is served without SSL, really boost adoption of better practice through user-driven behaviour. While divulging source code isn’t the best way to stay secure (there’s nothing inherently safe about oversharing), your organisation or app shouldn’t be reliant on obfuscation as part of its security strategy.
In terms of both performance and security, the overall architecture of an application is a critical concern. As we introduce more complexity than absolutely necessary, we introduce more potential opportunities for attack. It’s vital that devs focus as much on elegance as they do on functionality, and adhere to the initial design and architecture. Simple Security is best.
“Complexity is the enemy of security.”
– Bruce Schneier
When issues are identified, either in the wild after deployment or during testing, it’s pivotal that the most fitting remedy is found and applied. Too often the least-hassle solution is used to paper over the problem before moving on. While triage may be needed if the app is already live, once a security issue is identified, the best possible solution should be integrated, not just the most expedient. Be diligent when addressing security issues.
Practical Concerns and Implementations
In today’s modern computing environments, traditional client/server infrastructure is far from a guarantee. When developing any application, it’s vital not to design with a particular infrastructure in mind as part of the security evaluation. Controls or functions that rely on a particular platform may quickly evaporate when a migration to or from the cloud, or to another provider, takes place. By ensuring practical security controls are in place within the app itself, you become (somewhat) independent of platform.
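For instance, a control such as field-level encryption can live inside the application rather than depending on a particular platform’s at-rest encryption. The sketch below assumes the third-party cryptography package and elides key management entirely:

```python
# Keeping a control inside the application rather than relying on a platform
# feature: sensitive values are encrypted by the app itself, so moving between
# providers doesn't silently drop the protection.
# Assumes the third-party 'cryptography' package is installed.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, load this from a secrets manager
fernet = Fernet(key)

def store_token(token: str) -> bytes:
    # Encrypted before it ever reaches storage, regardless of what the
    # underlying platform does or doesn't encrypt at rest.
    return fernet.encrypt(token.encode("utf-8"))

def load_token(blob: bytes) -> str:
    return fernet.decrypt(blob).decode("utf-8")
```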
One of the advantages of modern cloud-led environments is that they can reduce the burden on IT staff. Migrating commoditised services like (non-sensitive) storage, email, and AV communications to the cloud firstly makes them likely more secure, and secondly frees up your staff’s cycles to focus on applications that can’t or shouldn’t be sitting on other people’s metal.
Automation of security elements should now be the de facto standard. While the initial steps do require more resources, the reduction in human error alone often ends in positive ROI. A consistent and reliable automated process precludes potential misconfigurations too.
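One low-friction example of such automation might be a small script, run on every commit, that fails the build when a few known-risky patterns appear in the codebase; the patterns and paths below are illustrative, not exhaustive:

```python
# A sketch of a tiny automated check suitable for a CI pipeline: scan source
# files for a handful of risky patterns and fail the build if any are found.
import pathlib
import re
import sys

RISKY_PATTERNS = {
    r"verify\s*=\s*False": "TLS verification disabled",
    r"DEBUG\s*=\s*True": "debug mode enabled",
    r"\bmd5\(": "weak hash in use",
}

def scan(root: str = "src") -> int:
    findings = 0
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for pattern, reason in RISKY_PATTERNS.items():
            if re.search(pattern, text):
                print(f"{path}: {reason}")
                findings += 1
    return findings

if __name__ == "__main__":
    sys.exit(1 if scan() else 0)   # a non-zero exit fails the pipeline
```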
Test. Test. Test. While it may be prohibitively expensive to conduct full penetration testing at every step of the SDLC, more reserved measures like continuous (or at least frequent) whitebox vulnerability testing are well within the budget of all but the smallest bootstrapped projects. While this doesn’t replace proper pentesting, it can act as a triage mechanism to identify potential issues at the earliest juncture. That being said, an independent audit through an in-depth penetration test is really irreplaceable, certainly before going live.
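As an example of a cheap whitebox check that can run continuously, the hypothetical test below asserts that an input-validation routine rejects obviously hostile input long before a full penetration test ever sees it:

```python
# A whitebox unit test for input validation. The validate_username function is
# a hypothetical example of the kind of routine worth testing on every commit.
import re

def validate_username(value: str) -> str:
    if not re.fullmatch(r"[A-Za-z0-9_]{3,32}", value):
        raise ValueError("invalid username")
    return value

def test_rejects_injection_style_input():
    for hostile in ["admin'--", "a; DROP TABLE users;", "<script>alert(1)</script>"]:
        try:
            validate_username(hostile)
        except ValueError:
            continue
        raise AssertionError(f"accepted hostile input: {hostile!r}")

def test_accepts_normal_input():
    assert validate_username("alice_01") == "alice_01"
```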
An often underappreciated element of the SDLC is how security can affect usability, or more accurately, how that effect can be made negligible. By bringing security and UX together, user workflows that must accommodate changes stemming from security challenges can be adapted so those required changes have minimal impact.
Many of the concerns in web application security (and many, many other niches) stem from the notion of building a hard shell around a ‘bōlí xīn’, or ‘glass heart’, as the Chinese idiom goes. The very nature of the TDD typical in Agile environments is to continuously test the security of the application’s nucleus itself, rather than trusting the shell.
Installing granular security controls at every depth is admirable, but it becomes almost recursive, with the load negatively impacting performance or access. So, like everything in the history of security, it’s a balancing act, but one that every dev should understand deeply.
And there’s no need to reinvent the wheel. There are maturing frameworks from the likes of Microsoft and NIST, and the emerging ISO 27034 standard, that any organisation or project can incorporate into its processes. Adopting an already proven framework ensures not only its effectiveness, but also that supporting measures and talent already exist in the market, without the need for organisations to develop that capability in-house.