Software Security

Security in software products is an emergent property that results from the interaction of multiple factors throughout the development process, from the product's conception to its end of life. When we talk about evaluating software security, we refer to a set of activities spanning the development cycle, beginning with the conception of the system and extending through its design, coding, and hardening.

We must not make the mistake of confusing the security of a system with its features, such as the use of particular protocols like SSL/TLS. Nor should we confuse it with the security components embedded in the system architecture, such as firewalls, or reduce it to compliance with a particular standard or certification.

Product security is a dynamic property that varies over time, and it is critical given the role these applications play in modern society. It arises from recognizing that attackers exist, that they will probe every potential entry point to gain control of the system, and that software development companies must therefore design control mechanisms to resist these attacks.

It is estimated that around 50% of vulnerabilities in systems originate in design flaws. These differ from implementation failures (commonly known as bugs, which arise during coding or testing) because they appear early in the development process and affect the system so deeply that fixing them demands re-engineering it.

This is why it is crucial to devote the necessary resources to identifying and fixing these design flaws early, reducing the cost they incur once they take root in the product being created.

How do we guide secure development?

To help evaluate the maturity of security in the software development process, the speaker presents a list of common issues in application design that may affect the security of the final product. Let's look at these principles for secure design.

No component is trustworthy until proven otherwise

A common mistake in software development is to place sensitive functionality in a runtime environment over which we have no control. System components should not be assumed trustworthy until that trust can be demonstrated.

For example, in a client-server environment, you should guard against potentially compromised clients by deploying server-side verification mechanisms. Remember that the client runs in the user's domain, and the user will not always have the best intentions.
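As a minimal sketch of this idea, consider an order total: the server should recompute it from its own authoritative data rather than trust a value computed on the client. The catalog and item names below are illustrative.

```python
# Authoritative, server-side prices: the client only names items and
# quantities, never the amounts to charge.
CATALOG = {"book": 12.50, "pen": 1.20}

def compute_total(client_items):
    """Recompute the order total server-side instead of trusting the client."""
    total = 0.0
    for name, quantity in client_items:
        if name not in CATALOG:
            raise ValueError(f"unknown item: {name}")
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        total += CATALOG[name] * quantity
    return total
```

Any total sent by the client is simply ignored; a tampered client can name items, but cannot set prices.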

Design authentication mechanisms that are hard to bypass

Authentication is the process that allows us to verify a user's identity and assign a unique identifier. Developing centralized authentication methods that cover every possible access path is one of the pillars of building secure applications.

When building websites, we must consider which pages require authenticated users, and ensure that unauthorized third parties cannot enter the system through unprotected URLs. Using multiple authentication factors reinforces the system by checking not only what users know but also, for example, what they possess.
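One common way to centralize this, sketched below with a hypothetical session store and decorator, is a single gate that every protected handler must pass through, so no code path reaches sensitive functionality without the same check:

```python
from functools import wraps

SESSIONS = {"token-abc": "alice"}  # toy session store for the sketch

class AuthError(Exception):
    """Raised when a request carries no valid session."""

def require_auth(handler):
    """Decorator: reject any request whose token is not a known session."""
    @wraps(handler)
    def gated(request):
        user = SESSIONS.get(request.get("token"))
        if user is None:
            raise AuthError("authentication required")
        return handler(request, user)
    return gated

@require_auth
def view_profile(request, user):
    # Only runs for authenticated users; `user` comes from the gate.
    return f"profile of {user}"
```

Because the check lives in one decorator rather than inside each handler, adding a new page cannot accidentally leave it unprotected, as long as the decorator is applied.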

Authorize, in addition to authenticating

Authorization is the process that determines whether an authenticated user may perform a given action on the system. Authorization checks on authenticated users must be planned from the design stage, and must guard against sessions that have fallen into the wrong hands.
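The distinction can be sketched in a few lines: authentication answers who the user is, while authorization decides what that identity may do. The role table below is illustrative.

```python
# Illustrative role-to-permission mapping, consulted after authentication.
ROLES = {"alice": {"read", "write"}, "bob": {"read"}}

def authorize(user, action):
    """Return True only if the authenticated user holds the permission."""
    return action in ROLES.get(user, set())
```

Note the default: a user absent from the table gets an empty permission set, so unknown identities are denied everything rather than allowed anything.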

Separate data from control instructions

This point is key when working with code capable of modifying itself, or with languages that can evaluate code at runtime, such as JavaScript, where instructions arrive through the same channel as data.

It therefore becomes critically important to sanitize the input the system receives, preventing attackers from manipulating the execution flow by injecting malicious data.
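A classic application of this principle is SQL injection: mixing user input into a query string lets data become control. The sketch below, using Python's standard `sqlite3` module, keeps the two separate with a bound parameter.

```python
import sqlite3

# In-memory database for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # The '?' placeholder binds `name` strictly as a value, so input
    # like "' OR '1'='1" cannot alter the structure of the query.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

Had the query been built by string concatenation, the same malicious input would rewrite the statement's logic instead of being treated as an ordinary (non-matching) name.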

Validate all data explicitly

Input to the system must be evaluated with a whitelist-over-blacklist philosophy: define what is permitted, and reject anything that does not match. Remember that an attacker interprets input fields as potential programming languages, with the intention of manipulating the state of the system. It therefore becomes necessary to inspect input data, building automated procedures that reduce it to well-known canonical forms.

In addition, this validation must occur close to the moment the data is actually used, since the time lag between validation and use opens a window of opportunity for attacks.

To implement this, common components can be designed that centralize both syntactic (structural) and semantic (meaning-related) validations, taking advantage of the data types available in the programming language in use.
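A small sketch of such a centralized component: canonicalize first, then accept only what an explicit whitelist pattern permits. The username rules here are illustrative, not prescribed by the source.

```python
import re
import unicodedata

# Whitelist: lowercase letter, then 2-15 letters/digits/underscores.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,15}$")

def validate_username(raw):
    """Reduce input to a canonical form, then accept only the whitelist."""
    # Unicode normalization plus trimming and lowercasing gives one
    # well-known canonical form before any decision is made.
    canonical = unicodedata.normalize("NFKC", raw).strip().lower()
    if not USERNAME_RE.fullmatch(canonical):
        raise ValueError("invalid username")
    return canonical
```

Everything outside the pattern is denied by default, so the component never has to enumerate dangerous characters, which is exactly the weakness of a blacklist.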

Use cryptography correctly

Understanding the cryptographic notions that apply to the system under development is necessary in order to know which elements, and which properties of those elements, are being protected, against which forms of attack, and consequently how best to achieve that protection.

Creating your own cryptographic solutions is, as always, a risky decision that can lead to a flawed system, and it is therefore strongly discouraged. Instead, seek sound advice to find the libraries and tools that raise the cost of attack for the cybercriminal.
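As one illustration of relying on vetted primitives rather than inventing them, password storage can be built on PBKDF2-HMAC from Python's standard library. The iteration count below is an illustrative assumption; production values should follow current guidance.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune to current guidance

def hash_password(password):
    """Derive a salted hash using the stdlib's vetted PBKDF2 implementation."""
    salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Every piece here (`pbkdf2_hmac`, `os.urandom`, `hmac.compare_digest`) is a reviewed, widely-used primitive; the only design work left to the developer is choosing parameters, which is exactly where expert advice belongs.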

Identify sensitive data and how it should be managed

It is difficult to protect our information if we are not clear about what we are actually trying to protect. Defining which data is fundamental to the operation of the system is critical, since from that definition we can begin to outline security processes from the very start of the development cycle, rather than as an add-on during the implementation or deployment stages.

Defining the anonymity requirements and the metadata to be handled will drive decisions about the paths to be taken to protect them.

Always consider the users of the system

A technically perfect system that does not meet the needs of its users is a useless system. Usable security must be one of the goals when setting the security objectives for the system. On the one hand, it is unwise to push onto users security decisions that developers can resolve themselves, in order to avoid security fatigue.

On the other hand, it is necessary to maintain communication with users, providing a degree of transparency about how the system operates. The default configuration should always be the secure one.
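The secure-by-default idea can be made concrete in configuration code: a freshly constructed configuration should already be the safe one, so that users must explicitly opt out of protections rather than opt in. The fields below are hypothetical examples.

```python
from dataclasses import dataclass

@dataclass
class ServerConfig:
    """Hypothetical server settings whose defaults are the secure choice."""
    require_tls: bool = True           # encrypted transport unless disabled
    session_timeout_minutes: int = 15  # short sessions by default
    verbose_errors: bool = False       # no internal details leaked by default

# Doing nothing yields the secure setup; weakening it takes a deliberate act.
config = ServerConfig()
```

An administrator who never touches the configuration gets the protective behavior for free, which is what "the default configuration should be the secure one" asks for.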

Integration of components changes the attack surface

Today’s applications are complex systems with many components interacting simultaneously. Every change to the system alters the security landscape, which must then be re-evaluated, a reassessment that requires coordination across areas and projects.

Components must be analyzed both individually and as a whole, taking into account how they are combined, maintained, and replaced.

Consider future changes in objects and actors

From the design stage, we must consider that the properties of the system and of its users change constantly. Factors to consider include growth of the user population, how migrations affect the system, and how future vulnerabilities will affect components that have been deployed at large scale.

Upgrade procedures should be designed with a future horizon of months, years, or even decades.